It's 2023, so of course I'm learning Common Lisp (schemescape.com)
373 points by behnamoh on July 27, 2023 | 336 comments



Wow, wasn't expecting to see my post on here! Eventually, I want to write a follow-up, but I'm still a beginner.

Here's what I've liked about Common Lisp so far:

* The condition system is neat and I've never used anything like it -- you can easily control code from afar with restarts (rough sketch just below, after this list)

* REPL-driven programming is handy in situations where you don't quite know what will happen and don't want to lose context -- for example, when parsing data from an unfamiliar source, you can just update your code and continue on instead of having to save, possibly recompile, and restart from the very beginning

* Common Lisp has a lot of implementations and there's a good deal of interoperability -- I was able to swap out implementations to trade speed (SBCL) for memory usage (CLISP) in one case (having multiple compatible implementations is one of the reasons I've been leaning towards CL instead of Scheme for learning a Lisp)

* Even as an Emacs noob, the integration with Common Lisp is excellent, and it works great even on my super slow netbook where I've been developing -- this isn't as big of an advantage these days with fast computers, VS Code, and language servers, but it's definitely retrofuturistic
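
Here's a rough sketch of what I mean by the restarts point (a toy example I made up for illustration, not from any real library):

  (define-condition bad-record (error)
    ((line :initarg :line :reader bad-record-line)))

  (defun parse-record (line)
    (restart-case
        (or (parse-integer line :junk-allowed t)
            (error 'bad-record :line line))
      (skip-record () nil)        ; restart: drop this record
      (use-value (v) v)))         ; restart: substitute a value instead

  ;; Code far up the call stack chooses the policy, without unwinding:
  (handler-bind ((bad-record (lambda (c)
                               (declare (ignore c))
                               (invoke-restart 'skip-record))))
    (mapcar #'parse-record '("1" "2" "oops" "4")))
  ;; => (1 2 NIL 4)

The interesting bit is that the handler runs while the stack below it is still intact, so it can pick skip-record, use-value, or just drop you into the debugger.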

There are also a few things I don't like:

* The most popular package manager (QuickLisp) is nice, but not nearly as featureful as I've become accustomed to with newer languages/ecosystems

* Since the language itself is frozen in time, you need lots of interoperability libraries for threads, synchronization, command line arguments, and tons of other things

* I really, really wish SBCL could support fully static builds, to enable distributing binaries to non-glibc Linux distributions

I'm sure there are more pros/cons, but that's what came to mind just now.


I have some cons!

Last time I checked on it, QuickLisp doesn't support fetching packages over anything except for plain http, with no encryption and no verification mechanism in place to detect files that may have been tampered with during transmission.

I think not supporting encryption or authentication for something as important as fetching source code makes QL a non-starter for me and hopefully for anyone else who cares about security.

Another issue I have run into is that SBCL is hosted on SourceForge, which has in the past injected malware into projects' downloadable archives! I consider this to also be a security issue, and SourceForge in general is not pleasant to work with. I don't think there are any valid reasons to continue to use SourceForge today, so why such an important project continues to use it confuses me a lot.

I don't see these issues mentioned by anyone else which is bizarre to me.

I really like Lisps, and Common Lisp specifically, but things like this have driven me away from using it, and it doesn't appear that anyone cares about fixing them.


Solutions for the lack of https:

- add in https://github.com/rudolfochrist/ql-https (downloads packages with curl)

- use another package manager, CLPM: https://www.clpm.dev (or the newest ocicl)

> CLPM comes as a pre-built binary, supports HTTPS by default, supports installing multiple package versions, supports versioned systems, and more.

- use mitmproxy: https://hiphish.github.io/blog/2022/03/19/securing-quicklisp...


These issues get mentioned a lot; you just haven't noticed, I guess. SourceForge is also an issue with some C libraries, I'm guessing because the hosting was set up a long time ago? Not sure.

I use ECL because it has really good C interop. It actually lets you inline C and access C macros directly, making it a great glue language for C libraries, which is what I'm using it for now. I think you might even be able to avoid the GC entirely and use it to script C programs together in a performant way, by using the C FFI to allocate and manage the memory, including the ECL types, instead of the GC. And that's actually doable because of how good the inspector/debugger for Lisp is. You can even inline assembly. I'm working on a bunch of CL stuff around this sort of thing; I plan to do a writeup of it and share it once I've developed it more.
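
To give an idea of what the C inlining looks like, here's a minimal c-inline sketch (ECL-specific, and it only works when the file is compiled through ECL's C backend rather than the bytecode interpreter):

  (ffi:clines "#include <math.h>")

  (defun c-hypot (x y)
    (ffi:c-inline (x y) (:double :double) :double
                  "hypot(#0,#1)" :one-liner t))

  ;; (c-hypot 3.0d0 4.0d0) => 5.0d0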

Lisp has its downsides, but the C FFI/embeddability, along with the excellent low-level debugger/inspector, interactivity, and conditions and restarts, makes it worth the time for me to invest in it. And the stability of the language. My main gripe is the reader, but it's easy-ish to avoid the problems with named-readtables, or a simple Lisp parser for `read` or whatever. I like Clojure, but it's missing some key stuff from the old Lisp world that I'd love to see. Shadow-cljs is awesome.


Nix now has a really convenient CL libraries packaging upstream. It verifies everything with sha256. It's quite complete because it's seeded from Quicklisp and has more packages added on (as well as their native library dependencies).

Nix isn't to everyone's taste but it demonstrates that you can treat security/reproducibility/etc as orthogonal to Quicklisp and Sourceforge (and to Lisp native tooling in general.)


Quicklisp doesn’t use TLS or signatures? How have I not heard this before?

That would be unbelievably irresponsible. Has this really not been addressed by the CL community?

Edit: here’s the issue: https://github.com/quicklisp/quicklisp-client/issues/167

Thanks for bringing this up!


The reason for this is quite simple: portability. Quicklisp also uses plain TAR files to distribute dists. Why? Because quicklisp has a built-in TAR extractor written in 100% standard/portable CL. This allows Quicklisp to run on just about everything, from your computer to real LispMs and operating systems like Mezzano.

TLS comes up every time someone discusses Quicklisp, but nobody bothers to go ahead and actually implement it portably (and even if they did, have fun with performance and side channel attacks, both of which require you to break portability to implement well for every platform you want to target).

If you would like a more stereotypical package manager, consider using CLPM. Though one of the big reasons to use CLPM is not encryption IMO, but versioning. ASDF supports locking versions of dependencies, but Quicklisp doesn't ever use this and instead constantly pushes the latest of everything from git repositories. This IMO sucks a lot more than using plain HTTP, given that it actually breaks code, whereas some MITM of a plain HTTP connection to Quicklisp would require so much coordination (and specificity of target) that it's just not in my threat model at all.


This does keep coming up, and it's a few years old now. I think Quicklisp can easily still support https while supporting the older packages that are tar+http, which could easily be mirrored in a git repo. Quicklisp has unfortunately taken over the entire ecosystem, making it hard to use anything else, and you often need to depend on it to use a lot of tools in the ecosystem. It sort of reminds me of Systemd in that way.

I agree on the version pinning being a worse situation, and also on not having something like "node_modules" for Lisp. I haven't tried CLPM since a while back; it was kind of hard to set up then.

I have a little package manager thing, cl-micropm, that just uses Quicklisp to fetch everything via docker (should probably support podman too), and an .envrc file to tell ASDF to look in the project directory (a project-local node_modules-like folder called "lisp-systems") for systems. That way I can pin my deps manually by picking the commits + git submodule in lisp-systems/, and it's isolated to my local project. I looked into using the Docker container to rewrite the requests to use https, bypassing whatever Quicklisp is doing, but I never got around to that.
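
For the ASDF side, the idea is basically just pointing the source registry at the project-local folder -- something along these lines (a sketch of the general approach, not literally what cl-micropm does):

  ;; Make ASDF search ./lisp-systems/ (recursively) for .asd files,
  ;; in addition to its normal configuration.
  (asdf:initialize-source-registry
   `(:source-registry
     (:tree ,(merge-pathnames "lisp-systems/" (uiop:getcwd)))
     :inherit-configuration))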

I'm looking to switch it to something even simpler/explicit though, cl-pm, that'll only optionally need/use Quicklisp via podman _only_ to figure out what the dependencies are, and then just have a function that uses wget/curl/git-pull to conveniently explicitly pull them in on request. That way you can decide to add a git mirror for an old http library, or pin a specific version, etc. It's slightly more manual than Quicklisp or CLPM, not a big deal, but very easy for anyone with just a little bit of lisp knowledge to understand the whole thing in under an hour.


> Quicklisp has unfortunately taken over the entire ecosystem, making it hard to use anything else, and you often need to depend on it to use a lot of tools in the ecosystem. It sort of reminds me of Systemd in that way.

This is a strange statement.

What requires QL to work? In the "bad old days" you had to manually download the sources and drop them somewhere ASDF could find them[1]. This still works. You can blithely live as if QL does not exist and get that same experience.

1: Yes, there was asdf-install, but I think I managed to get that to work once with about half-a-dozen tries?


Ultralisp, Roswell, Qlot, Quickdocs, etc. Virtually every modern project's build/install instructions reference Quicklisp. You have no idea which dependencies you need to pull and from where, which can be a real PITA for a large project. A lot of project code I've looked at also has Quicklisp references in the actual code for whatever reason, usually for testing or building or whatever else, so to run those you need Quicklisp. It's really hard not to say it's taken over the ecosystem, or to say that there isn't lock-in; I don't know what you mean, to be honest. Quicklisp is also a curated list gatekept by one person, so whatever is on there isn't really representative of everything that's being worked on. You can publish on Ultralisp if you don't want to wait or if it wasn't accepted, but then you're still using Quicklisp under the hood. And it's hard to discover things on GitHub/GitLab/etc because there are a lot of stub repos just trying things out, with little to no stars.

I'd love to see ecosystem support for other package managers. CLPM is still in beta and has been for a good while now; Quicklisp too. Quicklisp famously doesn't support HTTPS, version pinning, project-local dependencies, etc, which has really throttled progress in the Common Lisp ecosystem. It's not like ASDF at all, which has become a standard that's built into a lot of the Lisp implementations.


Before QL, you looked at the .asd and searched cliki for each name. You can still do that if you like (and can use Google as well). Sometimes the readme had better instructions, but often they didn't work.

In fact, each project in the QL repository includes a link to upstream, so you can use that to find your sources if you like

I literally once rewrote the 20% of a library that I personally needed because it was faster than tracking down all the dependencies.

Your original comment reads like there used to be all these awesome package managers, and QL came around and squashed them, but there was rather a giant vacuum that QL quickly filled.


> Before QL, you looked at the .asd and searched cliki for each name.

Only very few libraries are on Cliki, and the links "upstream" just link to the repo that almost always says to just use Quicklisp for installation. Quicklisp has a quicklisp-projects repo where the project sources are all in one place, but it's not very helpful for what I've been talking about.

> Your original comment reads like there used to be all these awesome package managers

Sorry I don't want to engage with flamebait... I commented on this thread to raise awareness for issues and interesting things that I think people here might find useful, because there's still a lot of interest in lisp.

If you really think I'm wrong, do a writeup and share it on HN with everyone. You can go to the awesome-cl repo to find the most popular libraries in the ecosystem, and show how easy it is to avoid using Quicklisp to install/build/find the deps/run tests for all those repos. It would really help and I think it would save a lot of people time. For something like the Nodejs ecosystem, for example, such a writeup would probably only take like an hour tops because of the maturity of the npm package manager.


> If you really think I'm wrong, do a writeup and share it on HN with everyone. You can go to the awesome-cl repo to find the most popular libraries in the ecosystem, and show how easy it is to avoid using Quicklisp to install/build/find the deps/run tests for all those repos. It would really help and I think it would save a lot of people time. For something like the Nodejs ecosystem, for example, such a writeup would probably only take like an hour tops because of the maturity of the npm package manager.

I picked dexador because Fukamachi likes lots of small projects, so it's going to have lots of deps; it took me about 50 minutes while watching baseball and chatting with family:

https://gist.github.com/jasom/474ba02bf3d4e0c02d8fc10feacd3b...

I should also note that, should you want to avoid reading the .asd file, you can skip steps 2-4 and just download dependencies as-needed.

This is literally what my workflow was for using 3rd party Lisp projects the day before QL came out. Prior to my discovery of Google it was even more of a pain.

I've never successfully gotten a Nodejs project working without NPM, but NPM vies with pypi for my second least favorite packaging ecosystem (haskell cabal "wins" this contest).


> Quicklisp doesnt ever use this and instead constantly pushes latest of everything from git repositories

Yeah, I didn't recall off hand, but this was one of my main complaints with Quicklisp vs. other package managers I've used (for other ecosystems--not CL).

> whereas some MITM from plain HTTP connection to Quicklisp would require so much coordination (and specificity of target) that it's just not in my threat model at all

I hope you're right, but it still seems like an unnecessary risk. Even if I can't imagine a scenario where someone is able to MITM me (or, more likely, a server I'm deploying code to), there's still the lingering feeling that it's possible. I certainly wouldn't download an executable over HTTP and run it, and downloading library code is fairly similar (although easier to inspect, at least).


Quicklisp doesn’t need to support TLS, but it does need to support authentication of some sort. Signing files would be good enough.


Are you proposing authentication over an insecure connection? If so, then the credentials could be compromised by a middle man. The same would be true for the signatures.


I don't really buy this argument at all.

There is no technical reason why Quicklisp couldn't use the system's libcurl and openssl when they're available and fall back to fetching with its portable HTTP implementation when they aren't.

Every other language's package manager has managed to solve this issue!

If the issue is that nobody has actually had time to work on it, that's fair, but I don't believe that optionally supporting libcurl would cause QL to be less portable.


Try ocicl instead of quicklisp. System tarballs are hosted in an OCI registry, and are downloaded via TLS connections (obeying proxies). Tarballs are signed and signatures are stored in the sigstore rekor transparency log for later inspection. https://github.com/ocicl/ocicl


Just wanted to say I did see your other comment and am intrigued by ocicl. Thanks!


> Last time I checked on it, QuickLisp doesn't support fetching packages over anything except for plain http, with no encryption and no verification mechanism in place to detect files that may have been tampered with during transmission.

I know it's not an excuse, but it was fun as heck booting up "capital M" MacOS (9.2.1) and loading Quicklisp into MCL without any trouble. I'm not even sure that's a supported platform by Quicklisp. https://code.google.com/archive/p/mcl/


I think there might have been bits and pieces somewhere to run Quicklisp on lisp machines... or at least ASDF, which is the core dependency


> I have some cons

I’m sure you do :°)


When I started using CL 20 years ago, libraries were stored on cliki and any malicious user could put malware there. Any source you asdf-installed was generally GPG signed and the installer automatically checked signatures against your personal trust-chain.

Learning CL back then was my first introduction to GPG (and Emacs, and Linux)


> When I started using CL 20 years ago, libraries were stored on cliki and any malicious user could put malware there. Any source you asdf-installed was generally GPG signed and the installer automatically checked signatures against your personal trust-chain.

Which, in practice, involved downloading GPG public keys from cliki because I didn't know every single CL developer.


For static builds, if you're willing to run a slightly older version of SBCL, daewok's work on building and linking SBCL in a musl environment might be the solution you're looking for. I've tried to port his patches to more recent versions, but there are segfaults due to changes in upstream.

https://www.timmons.dev/posts/static-executables-with-sbcl.h... https://www.timmons.dev/posts/static-executables-with-sbcl-v...


Yes, I did see that, but I was scared off by having to apply patches :)


I will give you a cons. https://cons.io Gerbil/Gambit Scheme are a fully-static-binary-generating alternative to CL.


I’ll take a look, thanks! My biggest concern with Scheme is that each implementation seems to have its own ecosystem due to subtle incompatibilities.

From an outsider’s perspective it seems a lot more fragmented than CL. Not necessarily a big deal if you have the libraries you want, but it gives me pause.


R7RS, which Gambit (mostly?) supports, helps mitigate this by making library code more portable across implementations. Gambit, in particular, can also very easily take advantage of the wide variety of C libraries; it has one of the easiest, most integrated FFIs of all Scheme implementations.


> lots of interoperability libraries

That's true. For cases when you want to start with a good set of libraries (json, csv, databases, HTTP client, CLI args, language extensions…), I am putting this collection together: https://github.com/ciel-lang/CIEL/ It can be used as a normal Quicklisp library, or as a core image (it then starts up instantly), or as a binary.

It can run scripts nearly instantly too (so it isn't unlike Babashka). We are ironing out the details, not at v1.0 yet.

> handling a runtime error by just fixing the broken code--in-place, without any restarts [from the blog]

Also (second shameless plug) I should have illustrated this here: https://www.youtube.com/watch?v=jBBS4FeY7XM

We run a long and intensive computation and, bad luck, we get an error in the last step. Instead of re-running everything from zero, we get the interactive debugger, we go to the erroneous line, we compile the fixed function, we come back to the debugger, we choose a frame on the stack to resume execution from (the last step), and we see our program pass. Hope this illustrates the feature well!


Thanks for your write up. I am looking forward to the next installment.


Thank you for your Lisp books!

I like your pragmatic approach of using Lisp where it makes sense and not being afraid to shell out to something else where appropriate (among many other nuggets of wisdom).


Check out ocicl as an alternative to quicklisp!


I cdr car less about your cons. Seriously though, mad props for being diligent enough to spend your attention on this. There is a lot to learn from the people who came before us, and to build on that.


SBCL supports static builds by saving core with runtime into an executable file you can then copy around at will.


Do they work across glibc versions or on musl libc? My understanding is that they do not.


I often use RHEL7-compatible binaries on RHEL8 and Debian (testing) machines, with no problems.


If you link against an old version, it'll generally work with a newer one.

Old versions are unfortunately not always compatible with new libraries...


LISP continues to be a very interesting language.

But REPL development is a mixed blessing. There are many situations where you want to start from a blank slate with no previous state.

LISP would be a more practical language if it included a trivial option to make that possible.


> LISP would be a more practical language if it included a trivial option to make that possible.

If you're using SLIME: M-x restart-inferior-lisp


In that case, can't you just restart the REPL? Or give the program a main function that you run?


Won't you also be more likely to write code based on data that you happen to have in the current situation, but not for data that covers every situation?

E.g. code that accesses an optional property as if it was always present, because it happens to be present when you're writng the code, etc.

That seems like a possible pitfall when relying on a REPL heavily, but I haven't used such a language myself, so can't speak from experience.


And with TDD, aren't you more likely to write code based on the current tests you have, but not code that covers every situation?

Any time you're writing code, you (should) aim for the general situation and then test it with whatever edge cases you think of at the time. The REPL lets you live-test. I know many people who dump their REPL history to a file and turn them into tests.


My attitude with tests is not to write individual tests when I can write property-based tests. The payoff from the latter is considerable. Let the computer do the work of generating and running tests; its time is worth a whole lot less than mine.

For individual tests, say for coverage, these should also be generated automatically if possible, say by looking for inputs that kill mutants. I've backburned a Common Lisp system for doing this, generating mutants from Common Lisp source forms and automatically searching for and minimizing inputs that kill new mutants. Maybe one day I'll finish this and put it out there for general use.
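
For the property-based part, the basic idea doesn't even need a library; here's a minimal hand-rolled sketch using the built-in SORT as the thing under test:

  (defun random-list ()
    (loop repeat (random 50) collect (- (random 2000) 1000)))

  (defun check-sort-is-ordered (&optional (trials 1000))
    "Property: adjacent elements of the sorted output are in order."
    (loop repeat trials
          for input  = (random-list)
          for output = (sort (copy-list input) #'<)
          unless (every #'<= output (rest output))
            do (return (values nil input))   ; hand back a counterexample
          finally (return t)))

A real property-based testing library adds input generators and shrinking of counterexamples on top of a loop like this.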


My point was, having an actual example of data in front of you, instead of only definition of the structure/schema/interface/type of the data could push people more towards relying on things specific to that example. Especially in dynamically typed languages, but also for things like trying to take the first element of a list that might be empty (in languages where that doesn't return an `Option`), etc.

And I wonder whether someone observed that in practice.


I see what you were getting at now

I've not personally observed that, fwiw.


Never programmed in common lisp but I imagine it is trivial to enumerate all refs and unbind them.


I just create new/changed functions next to the others and eval the selected region, then clean up. When I think I'm done, I'll restart the REPL and check whether it all still works or whether I depended on something in the state. That doesn't often happen anymore. I use the REPL to try out things I've just written in files. I can't say I remember a moment when state was a/the problem.

M-x slime-restart-inferior-lisp

works fine.


I wish there were some effort, even a theoretical one, to fix this. It's a cross-domain issue; even React, in a way, deals with that.


There's a practical way to do that right now in CL; in most implementations it's cl-user:quit, but UIOP defines a portable wrapper for it


I usually add one or more reset functions, and then I can customize whatever state I want to return to.
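
Something like this is usually enough for me (names made up, obviously):

  (defvar *connections* '())
  (defvar *cache* (make-hash-table :test #'equal))

  (defun reset ()
    "Put the image back into a known state without restarting it."
    (mapc #'close *connections*)   ; assuming these are open streams
    (setf *connections* '())
    (clrhash *cache*)
    (values))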


Love your site's CGA vibes.


Maybe I'm missing something. What about the site is giving CGA vibes?


The 4-colour palette with cyan and magenta.


That is exactly what I was going for!


Small typo enusre => ensure


I use Clojure at work but wow do I miss just about everything about Common Lisp whenever I have to debug anything or want performant code. Being able to be in nested errors and click at any part of the stack to inspect lexical bindings is extremely useful, and more importantly, clicking on an object then pushing M-<RET> to copy it to my REPL is much nicer than what Clojure offers (tap>, which I consider a glorified pretty printer even if you use tools like Portal).

As for performance, well, Common Lisp lets you statically type things, and SBCL can emit really efficient code if you do this. I find it helpful to run DISASSEMBLE on my own code to see exactly what is being emitted and optimize from there. And more importantly, packages like SB-SIMD and Loopus are a godsend for any number-crunching application.
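
Roughly the kind of thing I mean (SBCL; the exact disassembly varies by version and platform):

  (declaim (ftype (function (double-float double-float) double-float) sumsq))

  (defun sumsq (a b)
    (declare (optimize (speed 3) (safety 0)))
    (+ (* a a) (* b b)))

  (disassemble 'sumsq)   ; the arithmetic compiles down to bare float instructions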


This nicely summarizes some of my frustrations with using Clojure for my master's thesis. I'm not unhappy with the choice. Clojure allows such a juicy crossover between "everything is a key-value map, mannn" and "If it has :quack key set to true, treat it like a duck" which works really well for entity-component-system game-design-y things.

but the development story in Common Lisp ... and my gawd, the CONDITION SYSTEM ... were things that I sorely missed for the last year. and I'm not even that experienced of a CL hacker. It just grew on me so quickly. If only CLOS and the primitive data types in CL played together more nicely than they seem to.


I know. I've been spending a lot of time with CL, Scheme, and Clojure the past few years, and the ideal Lisp is some combination of them all. There are aspects of each that I miss in the others. CL has the nicest environment and development story (generally speaking). Scheme feels more refined in the small. And although they can be divisive, I really appreciate Clojure's data structure literals.


CL is the x86 of the Lisps. Successful because of backwards compatibility, but also ugly because of it.


You should look at flowstorm for Clojure. It lets you step through and back from a function and you can send maps to the repl with their functions.


I don't think either language offers a way to send a form to the REPL; that is a function of the tooling.

This is certainly easy to do with Cider and I imagine the main tooling in other editors is equally competent.


You can kind of do the same as DISASSEMBLE in Clojure.

There are some helper projects like https://github.com/Bronsa/tools.decompiler, and on the OpenJDK JitWatch (https://github.com/AdoptOpenJDK/jitwatch), other JVMs have similar tools as well.

It isn't as straightforward as in Lisp, but it is nonetheless doable.


Steel Bank Common Lisp is the workhorse which led me to build profitable software companies. I don't think I would be as productive without it. The repl driven workflow is amazing and the lisp images are rock solid and highly performant.


Care to share the companies for those curious?


It looks awesome, but I'm too lazy as of today to go back to Emacs. I usually just use VSCode close to the defaults for my (mostly) Python and JavaScript development. I don't code full time, since I'm in a CTO role.


You may be interested in https://github.com/nobody-famous/alive which brings the power of slime to vscode (Mostly, it's relatively new and missing some features, but getting better all the time)


I don't know if you're interested in Sublime Text or not but https://github.com/s-clerc/slyblime is pretty good. VS Code also has Alive which I heard is good although I don't use Electron apps.


Thanks a lot! It does indeed look good.

Btw, what made you choose Common Lisp instead of Scheme (Guile, Racket, etc) or Clojure? What made it more business effective? Genuine curiosity :)


I dislike the JVM and the other lisps did not have the code performance and stability I needed.


I think the times when your tech stacks mattered in the slightest are mostly behind us.

Also: it's good you concocted some arcane shit that works like a charm, but now nobody - except the ones whose pay you express in number of zeroes - is touching it.


I think that you’re just restating the Blub point of view. You look back on the tech stacks of the past, and can see how they were worse than the ones we have today, but looking at the ones today you think that there are no more improvements to be made — or at least, none that matter.

Given that (I assume) you really do appreciate how much better the stacks of today are than the ones of the past, that seems a highly unwarranted assumption. Heck, I will tell you this: as much as a Lisp stack is better than the alternatives today, it’s not perfect. There’s a ton of future work to improve things even above the current state of the art.

But that state of the art is still better than what everyone else is using. What’s great about Lisp is that improvements are possible: with other technologies, there are more hard limits on what can be done.


Given the myriad other variables that go into a successful software business, the choice of stack and its various modes of expressing whatever transformation on whatever data it is you are mangling is so exceedingly minor a consideration that I'm close to experiencing it as professional negligence to even fuss over it to the degree it is being fussed over by many people.

I'm not dissing Lisp by any means by the way.


I understand your point and it's quite true -- but hard problems require adequate tools, and I wouldn't choose Java, for example, for non-CRUD stuff.


You would never program Minecraft in it, right?

Probably for the best, but you can and he did and it’ll make you a billionaire all the same.


Personally, no. But Minecraft isn't what I'm talking about.


I always find it odd that people say this. If the stack doesn't matter, then why not start writing machine code again?


Agree. ThePrimeagen had something to say about this on one of his streams, responding to someone who said "The programming language doesn't matter, only the programmer." He said something like "If that were true, let's just all go back to writing C, it's pretty good. But then you'd say 'well not exactly...'"


I think they meant the stack doesn’t matter in the sense of “which stack you choose from the options available”, rather than “whether you choose a stack versus writing machine code”.


You can hook into the abi for software that runs in Linux trivially. So why isn't machine code acceptable? When you give the honest answer you see why the majority of languages aren't acceptable either.


It's said mostly by people who let others make technical decisions for them. Others being either their bosses or the main stream.


That's not a good faith interpretation given the near infinite amount of options for conjuring up your favourite moneyprinting system of choice besides "machine code". SBCL is about the most arcane option you can pick and even that can work, which I think actually proves the point: it doesn't matter to any significant degree (anymore).


How else is one meant to read that language doesn't matter?

You can hook into anything that runs on Linux since its ABI is rock solid, so the excuse of not being able to use the usual tech stacks doesn't hold water either.


You can use RoR, .NET, SBCL, Python, Erlang and 6502 assembly if you so desire. Sure “machine code” is one option, but that doesn’t engage with the argument in any meaningful way IMO.


If this were true then half the posts in HN would have no audience :)


> The repl driven workflow is amazing and the lisp images are rock solid and highly performant.

do people not realize that basically every vm/interpreted language has a repl these days?

https://www.digitalocean.com/community/tutorials/java-repl-j...

https://github.com/waf/CSharpRepl

https://pub.dev/packages/interactive

not to mention ruby, python, php, lua

hell even c++ has a janky repl https://github.com/root-project/cling

edit: i get downvoted by the lisp crowd every time i bring up that the repl isn't a differentiating feature anymore :shrug:


Of course people "realise" this. But those REPLs are not actually REPLs; they are interactive language prompts. As the joke goes, Python doesn't have a REPL: it lacks READ, EVAL, PRINT and LOOP.

Being able to type in code and have it evaluated one line at a time isn't a REPL.
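
The joke being that the classic top level really is just those four operators -- a bare-bones sketch, minus all error handling:

  (defun toy-repl ()
    (loop
      (format t "~&toy> ")
      (finish-output)
      (print (eval (read)))))

READ hands you back live Lisp data structures (the same ones macros see), which is a big part of what people mean by "REPL".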


i have no idea what subtle or nuanced distinction you're trying to strike so what exactly do you imagine is the difference between a lisp repl and a python repl?

Edit: people that aren't familiar with python (or how interpreters work in general) don't seem to understand that being able to poke and prod the runtime is entirely a function of the runtime, not the language. In cpython you can absolutely do anything you want to the program state, up to and including manually pushing/popping from the interpreter's value stack (to say nothing of moving up and down the frame stack), mutating owned data, redefining functions, classes, modules, etc. You can even, again at runtime, parse to AST and compile source to get macro-like functionality. It's not as clean as in lisp but it 100% gets the job done.


What does it look like in Python? In Lisp:

  CL-USER 43 > (+ 1 (foo 20))

  Error: Undefined operator FOO in form (FOO 20).
    1 (continue) Try invoking FOO again.
    2 Return some values from the form (FOO 20).
    3 Try invoking something other than FOO with the same arguments.
    4 Set the symbol-function of FOO to another function.
    5 Set the macro-function of FOO to another function.
    6 (abort) Return to top loop level 0.

  Type :b for backtrace or :c <option number> to proceed.
  Type :bug-form "<subject>" for a bug report template or :? for other options.

  CL-USER 44 : 1 > (defun foo (a) (+ a 21))
  FOO

  CL-USER 45 : 1 > :c 1
  42
Note that we are not in some debug mode, to get this functionality. It also works for compiled code.

Lisp detects that FOO is undefined. We get a clear error message.

Lisp then offers me a list of restarts, how to continue.

It then displays a REPL one level deep in an error.

I then define the missing function.

Then I tell Lisp to use the first restart, to try to invoke FOO again. We don't want to start from scratch, we want to continue the computation.

Lisp then is able to complete the computation, since FOO is available now.


Hmm, what advantage does Lisp offer here over Python?

  >>> 1 + foo(20)
  Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
  NameError: name 'foo' is not defined
  >>> def foo(a):
  ... return a + 21
    File "<stdin>", line 2
      return a + 21
           ^
  IndentationError: expected an indented block
  >>> def foo(a):
  ...   return a + 21
  ...
  >>> 1 + foo(20)
  42
  >>>
Mind the hilarious indentation error, as I had not touched the old-school REPL in ages.

In normal day to day operations, I do the same thing daily with Jupyter Notebooks. I get access to as much state as I need.

With notebooks workflow it is normal to forget to define something and then redefine in the next cell. You could redefine function signatures etc. Ideally then you move cells in the correct order so that code can be used as Run All.

I "feel" ridiculously productive in VS Code with full Notebook support + copilot. I can work across multiple knowledge domains with ease (ETL across multiple database technologies, NLP-ML, visualization, web scraping, etc)

Underneath it is same as working in old school Python REPL just with more scaffolding.


I have been playing again with CL recently and am doing some trivial web-scraping of an old internet forum. I don't use a REPL directly, but just have a bunch of code snippets in a lisp file that I tell my editor to evaluate (similar to Jupyter?). I haven't bothered doing any exception (condition) handling, and so this morning I found this in a new window:

   Condition USOCKET:TIMEOUT-ERROR was signalled.
      [Condition of type USOCKET:TIMEOUT-ERROR]

   Restarts:
    0: [RETRY-REQUEST] Retry the same request.
    1: [RETRY-INSECURE] Retry the same request without checking for SSL certificate validity.
    2: [RETRY] Retry SLIME interactive evaluation request.
    3: [*ABORT] Return to SLIME's top level.
    4: [ABORT] abort thread (#<THREAD tid=17291 "worker" RUNNING {1001088003}>)
plus the backtrace. This is in a loop that's already crawled a load of webpages and has accumulated some state. I don't want a full redo (2), so I just press 0. The request succeeds this time and it continues as if nothing happened.


You got a lot of correct but verbose responses. Put in layman's terms: you had to run 1 + foo(20) again. If 1 + foo(20) were replaced by a complex and long-winded function, you would have lost all of that state and needed to run it all again. What if 1 + foo(20) had to read several TB of data in a distributed manner? You would have to do that all again.

There are ways around this and of course you could probably develop your own crash loop system in python but in lisp you simply continue where it failed. It's already there.

You mention doing things in Jupyter and ETLs which are often long running. This could be hugely beneficial to you.


> Hmm, what advantage does Lisp offer here over Python?

It does have a clear advantage if instead of

    (+ 1 (foo 20))
we were doing

    (+ (long-computation-answering-the-ultimate-question-of-life-the-universe-and-everything)
       (foo 20))
(Reminder: we're dicussing Common Lisp here.)


From what I see in your example, you invoke the form again. In Common Lisp you don't need that. You can stay in a computation and fix&resume from within.


You are not fixing the issue in the dynamic context of a running program. Doesn't matter in this trivial example but is very noticeable when you have a loaded DB cache and a few hundred active network connections.


The advantage is that Python has just diagnosed the error and aborted the whole thing back to the top level, whereas in the Common Lisp, the entire context where the error happened is still standing. There are things you can do like interactive replace a bad value with a good value and re-try the failed computation.

In lispm's example, the problem is that there is no foo function, so (foo 20) cannot be evaluated. You have various choices at the debugger prompt; you can specify a different function, to which the same arguments will be applied. Or just specify a value to be used in place of the nonworking function call.

Being able to fix and re-try a failed expression could be valuable if you have a large running system with hundreds of megabytes or even gigabytes of data in the image, which took a long time to get to that state.


> Hmm, what advantage does Lisp offer here over Python?

In lisp, I never edit code at the REPL, yet the REPL is what enables me to edit code anywhere. I edit the source files and have my editor eval the changes I made in the source. This gets me the benefit that should my changes work, I don't have to retype them to get them into version control. This works because the Lisp REPL is designed to be able to switch into any existing package, apply code there, and also switch back to the CL-USER package after. My editor uses the same mechanism and only has to inject a single prefix (`in-package :xyz`) before it pastes the code I've selected for eval.

In Python, editing a method in a class inside some module (i.e., not toplevel) is less easy. At least, I haven't found any editor support for it. What I did find is the common advice to just reload the whole module/file.

Okay, so let's reload the whole module, then? Well, Python isn't really built for frequent module reloads and that can sometimes bite. In Common Lisp, the assumption that any code may be re-eval-ed is built in. For example, there's two ways of declaring a global value in CL: defvar and defparameter. The latter is simply an assignment of a value to a variable in the global scope, but the former is special. By default, `defvar` defines a variable only if it's not already defined. So that a CL source file may be loaded and reloaded any number of times without resetting a global variable.
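
Concretely:

  (defvar *cache* (make-hash-table))   ; initial value used only if *CACHE* is unbound,
                                       ; so reloading the file keeps its contents
  (defparameter *batch-size* 64)       ; re-evaluated on every reload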

Then there's classes. Oh my. Common Lisp has the most powerful (in terms of flexibility) OO system I know of. Not only can you redefine functions and methods, you can even redefine classes dynamically. Adding a property to a class adds that property to all existing objects of that class. Removing a property from a class removes it from all existing objects of that class. This feature is no longer CL-exclusive, but it is sufficient to offer a massive advantage over Python. I don't need to talk about method combinations, multi-methods and the many other cool features of the Common Lisp Object System here.
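
A tiny sketch of the class-redefinition bit:

  (defclass user () ((name :initarg :name)))
  (defvar *u* (make-instance 'user :name "jane"))

  ;; Re-evaluate the class with an extra slot...
  (defclass user () ((name :initarg :name)
                     (logins :initform 0)))

  (slot-value *u* 'logins)   ; => 0 -- the existing instance picked up the new slot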

Then there's the debugging system. In Python, when an exception is thrown, it immediately unwinds the stack all the way up until it is first caught. So not only do you need to know beforehand where to catch what exception, if you get it wrong you cannot inspect the site of the error. In CL, a condition ("exception") does not unwind the stack until a restart is chosen. Not when it is caught, but rather when — after being caught — a resolution mechanism has been chosen. This allows interactive debugging (another cool CL feature) to inspect the stack frames at (and above) the site of error, redefine whatever code needs to be corrected, all before the error is allowed to unwind and destroy the stack. You still need to set-up handlers (and restarts) before the error happens, but you can be absolutely wildly lax and use catch-all handlers anywhere on the stack and restarts that take absolutely anything (even functions) at debug-time so you don't really need to be prescient with your error handling code unlike in Python.

I'm sure there's more, but I think this is pretty sufficient.


You're losing all the state since you're not being dropped in the closure where the error happened but in the end of the program.

To see the difference use some function counter() instead of 1 in the example.


>Note that we are not in some debug mode, to get this functionality.

Jesus Christ I swear it's like you ascribe mysterious powers to the parens. Do you think the parens give you the ability to travel through time or reverse the pc or what? Okay it's not in a debug mode but it's in a "debug mode". Like seriously tell me how you think this works if it's not effectively catching/trapping some sigkill or something that's the equivalent thereof?

I have never in my life met this kind of intransigence on just manifestly obvious things.


Common Lisp programs run by default in a way that calls to undefined functions are detected.

Here the Lisp simply tries to look up the function object from the symbol. There is no function, so it signals a condition (aka exception). The default exception handler gets called (without unwinding the stack). This handler prints the restarts and calls another REPL. I define the function -> the symbol now has a function definition. We then resume and Lisp tries again to get the function definition. The computation continues where we were.

That's the DEFAULT behavior you'll find in Common Lisp implementations.


>Common Lisp programs run by default in a way that calls to undefined functions are detected.

Cool so what you're telling me is that by default every single function call incurs the unavoidable overhead of indirecting through some lookup for a function bound to a symbol. And you're proud of this?


I thought you knew Lisp? Now you are surprised that Lisp often looks up functions via symbols -> aka "late binding"? How can that be? That's one of the basic Lisp features.

Next you can find out what optimizing compilers do to avoid it, where possible or where wanted.


At no point in time did I claim to know lisp well. I stated my familiarity at the outset. But what you all did was claim to know a lot about every other interpreted runtime without a grain of salt.

>Next you can find out what optimizing compilers do to avoid it, where possible or where wanted.

But compilers I am an expert in, and what you're implying is impossible - either you have dynamic linkage, which means symbol resolution is deferred until call (and possibly guarded), or you have the equivalent of RTLD_NOW, i.e. early/eager binding. There is no "optimization" possible here because the symbol is not Schrodinger's cat - it is either resolved statically or at runtime - prefetching symbols with some lookahead or caching is the same thing as resolving at calltime/runtime because you still need a guard.


> But compilers I am an expert in and what you're implying is impossible

> it is either resolved statically or at runtime

Just tell Lisp which calls to statically resolve, inline, optimize. Overwrite the global default.

  (defun foo (a)
    (declare (inline +)
             (optimize (speed 3))
             (type (integer 0 100) a))
    (* 10 (+ 3 a)))
Above tells Lisp to inline the + function, optimize for speed and declares the type of a to be an integer in the range 0 to 100.

  * (disassemble #'foo)
  ; disassembly for FOO
  ; Size: 32 bytes. Origin: #x7006DC8544  ; FOO
  ; 44:       40190091         ADD NL0, R0, #6
  ; 48:       5C0180D2         MOVZ TMP, #10
  ; 4C:       0A7C1C9B         MUL R0, NL0, TMP
  ; 50:       FB031AAA         MOV CSP, CFP
  ; 54:       5A7B40A9         LDP CFP, LR, [CFP]
  ; 58:       BF0300F1         CMP NULL, #0
  ; 5C:       C0035FD6         RET
  ; 60:       E00120D4         BRK #15    ; Invalid argument count trap
As you can see in the machine code, Lisp then uses the native machine code ADD and MUL instructions.


What you're missing is that, unlike any other commonly used language runtime, compilation in CL is not all-or-nothing, nor is it left solely to the runtime to decide which to use. A CL program can very well have a mix of interpreted functions and compiled functions, and use late or eager binding based on that. This is mostly up to the programmer to decide, by using declarations to control how, when, and if compilation should happen.
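
For example, in an implementation (or mode) that interprets by default -- say SBCL with sb-ext:*evaluator-mode* set to :interpret -- you can compile just the functions you care about:

  (defun sq (x) (* x x))        ; may start out interpreted in that mode
  (compiled-function-p #'sq)    ; => NIL then
  (compile 'sq)                 ; compile just this one function to native code
  (compiled-function-p #'sq)    ; => T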


It should also be noted that by spec symbols in the system package (like + and such) should not be redefined. This offers “unspecified” behavior and lets the system make optimizations out of the box.

Outside of that you can selectively optimize definitions to empower the system to make better decisions at the cost of runtime protection or dynamism. However these are all compiler specific.


To be fair, any dynamic language with a JIT will mix interpreted and compiled functions, and will probably claim as a strength not leaving to the programmer the problem of which to compile.

Opinions may vary on that point.


You are incorrect; optimizations are possible in dynamic linking by making first references go through a slow path, which then patches some code thunk to make a direct call. This is limited only by the undesirability of making either the calling object or the called object a private, writable mapping. Because we want to keep both objects immutable, the call has to go into some privately mapped jump table. That table contains a thunk that can be rewritten to do a direct call to an absolute address. If we didn't care about sharing executables between address spaces, we could patch the actual code in one object to jump directly to a resolved address in the other object. (mmap can do this: MAP_FILE plus MAP_PRIVATE: you map a file in a way that you can change the memory, but the changes appear only in your address space and not the file.)


Compared to Python, Common Lisp hardly has any performance issues.


Okay, well, when pytorch, tensorflow, pandas, Django, flask, numpy, networks, script, xgboost, matplotlib, spacy, scrapy, selenium get ported to lisp, I'll consider switching (only consider, though, since there are probably at least another 20 python packages that I couldn't do my job without).


That is a typical consumerism: "Give me everything ready to use and then I'll use it."

How about bringing some value to the community?


For the sake of anyone reading this thread who isn't in the know: many of these libraries are really written in C/C++ and have Python bindings.


i said ported, not implemented; the likelihood of any of those libraries sprouting lisp bindings is about the same as them being rewritten in lisp. so it's the same thing and the point is clear: i don't care about some zany runtime feature, i care about the ecosystem.


Stop moving the goalposts: your answer to a commenter who stated that Common Lisp was faster than Python (a fact) was a list of packages, many of which are (1) not even written in Python and (2) some of them actually do have Common Lisp bindings.


This is not necessarily the case.

Firstly, functions that are in the same compilation unit and refer to each other can use a faster mechanism, not going through a symbol. The same applies to lexical functions. Lisp compilers support inlining, and the spec allows automatic inlining between functions in the same compilation unit, and it allows calls to be less dynamic and more optimized. If f and g are in the same file, where g calls f, then implementations are not required to allow f and g to be separately redefinable. That is to say, if only f is redefined, the existing g may keep calling the old f. The intent is that redefinition has the granularity of compiled files: if a new version of the entire compiled file is loaded, then f and g get redefined together and all is cool.

Lisp symbol lookup takes place at read time. If we are calling some function foo and have to go through the symbol (it's in another compilation unit), there is no hashing of the string "foo" going on at call time. The calling code hangs on to the foo symbol, which is an object. The hashing is done when the caller is loaded. The caller's compiled file contains literal objects, some of which are symbols. A compiled file on disk records externalized images of symbols which have the textual names; when those are internalized again, they become objects.

The "classic" Lisp approach for implementing a global function binding of a symbol is be to have dedicated "function cell" field in the symbol itself. So, the compiled module from which the call is emanating is hanging on to the foo symbol as static data, and that symbol has a field in it (at a fixed offset) from which it can pull the current function object in order to call it (or use it indirectly).

Cross-module Lisp calls have overhead due to the dynamism; that's a fact of life. You don't get safety for nothing.

(Yes, yes, you can name ten "Lisp" implementations which do a hashed lookup on a string every time a function is called, I know.)


> If f and g are in the same file, where g calls f, then implementations are not required to allow f and go to be separately redefinable. So that is to say, if f is redefined only, the existing g may keep calling the old f. The intent is that redefinition has the granularity of compiled files: if a new version of the entire compiled file is loaded, then f and g get redefined together and all is cool.

This isn't the default behaviour though, right?


That depends. The Common Lisp standard says nothing on the subject. CMUCL[1] and its descendant SBCL[2] do something clever called local call. It's not terribly difficult to optimize hot spots in your code to use local call. Outside of the bottlenecks, the full call overhead isn't significant for the overwhelming majority of cases. It's not like a full call is any more expensive than a vtable lookup anyhow.

[1] https://cmucl.org/downloads/doc/cmu-user-2010-05-03/compiler...

[2] https://www.sbcl.org/manual/#Miscellaneous-Efficiency-Issues


Do you think Python or Ruby or PHP are any different? And yet, not one of them actually chose to use this in a sane way, where a simple lookup error doesn't have to crash the whole program.


Restarting from the debugger keeps state without third party Python hacks that you mention. In this example Python increments x twice, Lisp just once:

  >>> x = 0
  >>> def f():
  ...     global x # yuck!
  ...     x += 1
  ... 
  >>> def g(y):
  ...     h()
  ... 
  >>> 
  >>> g(f())
  Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
    File "<stdin>", line 2, in g
  NameError: name 'h' is not defined
  >>> 
  >>> def h(): pass
  ... 
  >>> g(f())
  >>> 
  >>> x
  2

Versus:

  * (setf x 0)
  * (defun f() (incf x))
  * (defun g(y) (h))
  * (g(f))

  debugger invoked on a UNDEFINED-FUNCTION in thread
  #<THREAD "main thread" RUNNING {1001878103}>:
    The function COMMON-LISP-USER::H is undefined.

  Type HELP for debugger help, or (SB-EXT:EXIT) to exit from SBCL.

  restarts (invokable by number or by possibly-abbreviated name):
    0: [CONTINUE      ] Retry calling H.
    1: [USE-VALUE     ] Call specified function.
    2: [RETURN-VALUE  ] Return specified values.
    3: [RETURN-NOTHING] Return zero values.
    4: [ABORT         ] Exit debugger, returning to top level.

  ("undefined function")
  0] (defun h() nil)
  ; No debug variables for current frame: using EVAL instead of EVAL-IN-FRAME.
  H
  0] 0
  NIL
  * x
  1


Can you connect to a running server or other running application, inspect live in memory data, change live in memory data, redefine functions and classes and have those changes take immediate effect without restarting the server or app?

I think that is that is the big difference.

It’s a triple edged sword bonded to a double barreled shotgun though, and the very antithesis of the idea of functional programming vs mutable state.
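
For what it's worth, the usual mechanism in CL is to load Swank (the server half of SLIME) into the long-running process and connect your editor to it, roughly:

  ;; in the server process
  (swank:create-server :port 4005 :dont-close t)
  ;; then from Emacs: M-x slime-connect RET localhost RET 4005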


>Can you connect to a running server or other running application, inspect live in memory data, change live in memory data, redefine functions and classes and have those changes take immediate effect without restarting the server or app?

The answer to all of these things, at least in python, is emphatically yes. I do this absolutely all the time. You can debug from one process to another if you've loaded the right hooks. You don't need to take my word for it or even try to do it; you just need to reason a fortiori: python can do it because it's an interpreter with a boxed calling convention and managed memory, just like lisp interpreters.

It's amazing: people will die on this hill for some reason but lisp isn't some kind of mysterious system that was and continues to be beyond us mere mortal language/runtime designers. the good ideas in lisp were recognized as good ideas and then incorporated and improved upon.


The answer to all these things should be "it just doesn't work in practice", not for real programs anyway. Unlike Lisp, Python doesn't lend itself well to this mode of development.

Primitive CLI-like tinkering, figuring out language features, calc-like usage - maybe. But not a single time in 15 years of doing Python across the industry I saw anybody using these features for serious program development, or live coding, or REPL-driven development.


>Primitive CLI-like tinkering, figuring out language features, calc-like usage - maybe. But not a single time in 15 years of doing Python across the industry I saw anybody using these features for serious program development, or live coding, or REPL-driven development.

I swear you people are like ostriches in the sand over this - Django, pytest, fastapi, pytorch, Jax, all use these features and more. I work on DL compilers and I use those features every day - python is a fantastic edsl host for whatever IR you can dream of. So just because you're in some sector/area/job that doesn't put you in contact with this kind of python dev doesn't mean it's not happening, doesn't mean that python doesn't support it, doesn't mean it's an accidentally supported API (as if such a thing could even be possible).

Really what this convo is doing is underscoring for me how there really is nothing more to be learned from lisp - I had a lingering doubt that I'd missed some aspect but you guys are all repeating the same thing over and over. So thanks!


> Really what this convo is doing is underscoring for me how there really is nothing more to be learned from lisp - I had a lingering doubt that I'd missed some aspect but you guys are all repeating the same thing over and over.

No, you keep on misunderstanding what people are trying to tell you. It’s a communication failure. The thing that you think you are doing in Python is not the thing that people are doing in Lisp.

As an example, I suppose that when you’re developing code in Python’s pseudo-REPL you often reimport a file containing class definitions. When you do that, what happens to all the old objects with the old class definition? Nothing, they still belong to the old class.

If you did this on a REPL connected to a server, what would happen to the classes of objects currently being computed on? Nothing, they would still belong to the old class.

In Lisp, it’s different. There is a defined protocol for what happens when a class is redefined. Every single object belonging to the old class gets updated to the new class. You can define code to get called when this happens (say, you added a new mandatory field, or need to calculate a new field based on old ones — and of course ‘calculate’ could also mean ‘open a network connection, dial out to a database and look up the answer’ or even ‘print the old object and offer the system operator a list of options for how to proceed’). And everything that is currently in-flight gets updated, in a regular and easy-to-understand way.
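
Concretely, a minimal Common Lisp sketch of that protocol might look like the following (the `user` class and `created-at` slot are invented for illustration): you redefine the class in the running image and specialize `update-instance-for-redefined-class` to migrate existing instances.

    (defclass user ()
      ((name :initarg :name :accessor name)))

    (defvar *u* (make-instance 'user :name "Ada"))

    ;; Later, in the same running image: redefine the class with a new slot.
    (defclass user ()
      ((name :initarg :name :accessor name)
       (created-at :accessor created-at)))

    ;; Called for each old instance when it is next touched; here we
    ;; backfill the new slot (it could just as well dial out to a database).
    (defmethod update-instance-for-redefined-class :after
        ((instance user) added discarded plist &key)
      (declare (ignore added discarded plist))
      (setf (created-at instance) (get-universal-time)))

    (created-at *u*)   ; => a timestamp, even though *U* predates the new slot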

People are telling you ‘with Lisp central air conditioning, I can easily heat my house in the winter’ and you are saying ‘with Python, I can easily build a fire whenever my house gets cold too!’


Python is the wrong example here. Ruby did inherit Lisp-style class reopening. But then Ruby always was explicitly very lispy.


As others here, I don't understand how those features (seamless continuation of a program with the exact state) could possibly work in Python.

What I do know is that the Python community has a propensity for claiming that approximations of complex features by gigantic hacks work and are sound, while they are not.

The Python community also has an extreme tolerance for unsound and buggy software that is propped up by censoring those who complain. Occasional complaints are offset by happy (and selected) marketing talks at your nearest PyCon.


I think in just about every response you left in these threads, you misunderstood what was being said. Possibly through impatience, or just plain arrogance. I really encourage you to spend some time trying to understand how interactivity/restartability (as in Lisp restarts, not process restarts) is built into the language. Especially if you're specializing in the compilers of dynamic languages.

You might also check out Smalltalk, which has a similar level of dynamism.


Listen, I've been on many sides: Lisp stuff, Python stuff, C stuff, etc. I don't think that "something has to be learned". Lisp has many good ideas, Python has good ideas. But REPL-driven development is not one of them. Let me explain.

You see, it's not about how REPL in Python just does not allow something (even though it is rather primitive). Python makes it superhard to tweak things, even if you can change a certain variable in memory. Here's why.

Think about Lisp programs, including OOP flavours. These fundamentally consist of 2 things: a list of functions + a list of variables. If you replace a function or a variable then every call will go through it. And that's it. You change a function - all calls to it will be routed through the new implementation. Because of the REPL-centric culture, people really do organise their programs around this style of development.
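
A tiny Common Lisp sketch of that routing (function names invented for illustration): redefining GREET at the REPL is enough for every existing caller to pick up the new definition on its next call.

    (defun greet (name)
      (format nil "Hello, ~a" name))

    (defun page (name)
      (greet name))                ; PAGE calls GREET through its global name

    (page "Ada")                   ; => "Hello, Ada"

    ;; Redefine GREET from the REPL; PAGE is untouched, yet its next
    ;; call is routed through the new definition.
    (defun greet (name)
      (format nil "Hi there, ~a!" name))

    (page "Ada")                   ; => "Hi there, Ada!"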

Python was developed with a dynamic OOP idea in mind where everything is an object, everything is a reference. Endless references to references of references to references. It's a massive graph, including methods and functions and objects and classes and metaclasses. There is no single list of functions where you can just replace the name-to-implementation mapping.

TL;DR Replacing a single reference doesn't change much in the general case. It does work in some cases. But that's not enough for people to rely on it as main development driver.

Python fundamentally makes a different tradeoff than your average lisp.


I've mentioned this in a sibling thread, but it's interesting to compare this to Ruby. Ruby does support the sort of redefinition you're talking about. And yet REPL-centric development isn't primary there, either. Yes, there are very good REPL implementations, but I don't know of anyone who develops at the Ruby REPL the same way you would in a Lisp REPL. Maybe it's a performance thing? Maybe it's the lack of images?


BTW, you mentioned that classes can be redefined in Ruby.

How does this work for existing class instances? Anonymous pieces of code, methods, etc? Even Lisp itself does not save you from all the corner cases; it's the dev culture that makes all these wonderful things possible.


Existing instances get the new capabilities.

The first time you do `class A....end` you're defining the class. Instances when they are created keep a reference to that class - which itself is just another object, an instance of the class `Class` which just so happens to be assigned to the constant `A`. If you later say `class A... end` and redefine a method, or add something new, what you're actually doing is reopening the same class object to add or change its contents, so the reference to that class doesn't change and all the instances will get the new behaviour. If you redefine a method, calls to that method name will go to the new implementation.

So in that sense it works like you'd expect, I think. As I said, Ruby is very lispy - Matz lists Lisp as one of the inspirations, and I think I'm right in saying he even cribbed some implementation details from elisp early on.


Lispy expressions are easy to type into a repl? The leaking block reference trouble is still there, btw, just not as bad as in Python.

I did explore the problem in python and don't really understand Ruby so no idea.


What a condescending answer.


It just shows that there's no understanding of the depth of the problem.

Years ago I tried doing something like this (redefining functions, classes, etc) in a dev environment of a MMO game. This would be crazily useful as the env took 5-10 mins to boot. And game logic really needs tweaking A LOT.

I really wanted this to work. After all, it really feels as if python has everything for it. Banged my head against the wall for weeks, failed ultimately and gave up on live development in python completely.

In contrast, as a heavy emacs user, I tweak my environment a couple of times a day. I restart this lisp machine a couple of times a month.


Am I some kind of a python unicorn? I do those things with python just about every time I write new python code.

The thing Common Lisp does that python doesn't (so far) is outputting well-performing code.


No, you're fine. For certain things Python REPL-like live development is ok indeed. Say, if your program boils down to a list of functions. Think request handlers or something.


Can you give me some links? I don’t program clojure much these days but I’ve never found anything comparable in python.


https://www.jetbrains.com/help/pycharm/remote-debugging-with...

I need to be very clear so that no one misunderstands: this is not proprietary pycharm functionality - this is all due to sys.settrace and the pydev debug protocol

https://www.pydev.org/manual_adv_remote_debugger.html

So you can hook this up completely by yourself with some work but lucky for you and me pycharm makes it effortless.


In TFA, go to the “Try this in your favorite repl”, try that in your “repl” and that would be the fine distinction you’re missing.


>The answer to that question is the differentiating point of repl-driven programming. In an old-fashioned Lisp or Smalltalk environment, the break in foo drops you into a breakloop.

do you want me to show you how to do this in a python repl? it's literally just breaking on exception...


Since Smalltalk was mentioned, please consider the following points:

1. Smalltalk has first class, live activation records (instances of the class Context). Smalltalk offers the special "variable" thisContext to obtain the current stack frame at any point during method execution.

If an exception is raised, the current execution context is suspended and control is transferred to the nearest exception handler, but the entire stack frame remains intact and execution can be resumed at any point or even altered (continuations and Prolog-like backtracking facilities have been added to Smalltalk without changing the language or the VM).

2. The exception system is implemented in Smalltalk itself. There are no reserved keywords for handling or raising exceptions. The implementation can be studied in the live system and, with some precautions, changed while the entire system is running.

3. The Smalltalk debugger is not only a tool for diagnosing and fixing errors, it is also designed as a tool for writing code (or put differently, revising conversational content without having to restart the entire conversation including its state). Few systems offer that workflow out of the box, which brings me to the last point.

4. I said earlier that Racket is different from Common Lisp. It's not only about language syntax, semantics, its implementation or other technicalities. It is also about the culture of a language, its history, its people, how they use a language and ultimately, how they approach and do computing. Even in the same language family tree you will find that there are vast differences, if you take said factors into account, so it might be worthwhile to study Common Lisp with an open mind and how it actually feels in use.


No, it’s not: an exception unwinds the stack all the way up to where the exception is caught. By the time the enclosing Python pseudo-REPL sees the undefined function error, all the intervening stack frames have dissolved. The way it works is that a function tries code, and catches exceptions.

In Lisp (and I believe Smalltalk), it doesn’t work that way: there is an indirection. Rather than try/except, a function registers a condition handler; when that particular condition happens, the handler is called without unwinding the stack. That handler can do anything, to include reading and evaluating more code. And re-trying the failed operation.

It would be possible to implement this in Python, of course, but it doesn’t offer the affordances (e.g. macros) that Lisp has, and it’s not built into the language like it is in Lisp (e.g., every single unoptimised function call in Lisp offers an implicit ‘retry’).
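
A minimal Common Lisp sketch of that indirection (the PARSE-ENTRY/PARSE-ALL names are just for illustration): the handler runs while the failing frame is still on the stack, and instead of unwinding to some outer catch it transfers control to a restart that supplies a replacement value.

    (defun parse-entry (line)
      (restart-case (parse-integer line)
        (use-value (v)             ; an alternative way for callers to continue
          v)))

    (defun parse-all (lines)
      ;; The handler is called before any unwinding happens; it repairs
      ;; the situation by invoking the USE-VALUE restart established
      ;; inside PARSE-ENTRY, and the loop simply carries on.
      (handler-bind ((parse-error (lambda (c)
                                    (declare (ignore c))
                                    (invoke-restart 'use-value 0))))
        (mapcar #'parse-entry lines)))

    (parse-all '("1" "oops" "3"))  ; => (1 0 3)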


IMO, the downside with the term REPL is that if you don't understand the specific Lisp definitions of the terms, it sounds like any other interactive execution environment.


> i get downvoted by the lisp crowd every time i bring up that the repl isn't a differentiating feature anymore

I’d suggest it has more to do with tone than content.


The repls you mention are not like lisp repls. You're being downvoted because your comment makes it sound like you've never programmed a lisp but have strong opinions nonetheless.


Not the OP, but would somebody be able to summarize for me HOW the Lisp REPLs are different, then? I've written a limited amount of Clojure and Common Lisp just to play around and I don't recall any difference between the Clojure REPL and the REPL I get for, say, Kotlin inside IntelliJ IDEA.

Maybe the ability to send an expression from the IDE into the REPL with one keybind, but I cannot say it's not possible with the Kotlin one right now because that's not what I use it for.


Watch this video on Lisp interactive development approach. I've recorded it especially to answer the question:

https://www.youtube.com/watch?v=JklkKkqSg4c


Thanks, I forgot about this aspect of live program editing. Whether or not it's possible elsewhere (or however close a quick live reload gets to this), it's definitely not a first-class citizen like you presented. It also reminds me of Pharo (or maybe just Smalltalk, I've only played with Pharo) where you build the program incrementally "inside out".

It does make me wonder how applicable this way of programming is to what I do at work, but that is more because of the technologies and architectural choices, where most of the work is plumbing together stuff that is not local to the program itself. And maybe even for that, with the edges mocked out, it would make sense to work like this.

Again, interesting video that made me think. Thanks.


Yes, Smalltalk/Pharo also support this.

Being able to interactively update code in response to an error, without leaving the error context and being able to restart stack frames (not just a “catch” or top level, as in most languages) is one of the key features that makes REPL-driven development possible. Or at least that’s how I see it.

It’s not something you always need to use, but it can be handy, especially for prototyping and validating fixes.


There's a person above saying that it's about being able to mutate program state from the repl, which is a thing that's also possible in any repl for a language with managed memory.


Not just from the REPL, but from the REPL in the context where the error occurred, without having to structure the code ahead of time to support this. It’s not always an important distinction, but it’s handy when prototyping or if the error is difficult to reproduce.

There are some other affordances for interactive programming, such as a standard way to update existing instances of classes. I’m sure you could implement this sort of functionality in any language, but this is universal and comes for free in Common Lisp.

CL also has other interesting features such as macros, multiple dispatch, compilation at runtime, and being able to save a memory snapshot of the program. It’s quite unique.


CL condition system + repl = godmode. Your software crashes? Do you go back and set a breakpoint? No, because you're already in the stacktrace in the repl exactly where the crash occurred. You fix the code, reload it, tell it to either run where it left off, or restart from an earlier point.
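
A hedged sketch of that workflow (the names are invented, and the exact debugger interaction varies by implementation): the bug lands you in the debugger with the stack intact, you fix things from the same REPL, then pick a restart to continue.

    ;; A deliberately broken helper: *TAX-RATE* is never defined.
    (defun price-with-tax (price)
      (* price *tax-rate*))

    (defun total (prices)
      (restart-case (reduce #'+ (mapcar #'price-with-tax prices))
        (retry-total ()
          :report "Retry computing the total."
          (total prices))))

    ;; (total '(10 20)) drops into the debugger on the unbound variable.
    ;; Without restarting anything, you can evaluate (defvar *tax-rate* 11/10)
    ;; or redefine PRICE-WITH-TAX right there, then invoke the RETRY-TOTAL
    ;; restart to re-run the computation in the same session.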


Flask and Django have the exact same functionality - I've already said that this thing you guys keep talking about is just a matter of catching exceptions.

https://flask.palletsprojects.com/en/2.3.x/debugging/

https://docs.djangoproject.com/en/dev/ref/settings/#debug


That is definitely not the same. I write a lot of python code and the interpreter / interactive development is just not as good as it is in Common Lisp.

To my knowledge there’s no real “mainstream” language that goes all in on interactive development. Breakpoints and traceback are all perfectly cromulent ways to debug, but it’s really not the same, sadly.


exceptions unwind the stack in all languages I know except in CL


and Smalltalk ;-)!


good to know! but the point still stands :D


The fact that I’ve never seen a CL lover who can explain this adequately is quite concerning in itself


>you've never programmed a lisp but have strong opinions nonetheless

i've written racket and clojure (and mathematica, which is a lisp). not multiple 10kloc but enough to understand what the big ideas are. claiming i just haven't written enough lisp is basically the logical fallacy of assuming the premise.


But Racket and Clojure are very different from Lisps such as Common Lisp that embrace the idea of a lively, malleable and explorable environment, which is arguably the biggest idea.


> “mathematica is a lisp”


http://xahlee.info/M/lisp_root_of_wolfram_lang.html

http://xahlee.info/M/lisp_vs_WolframLang.html

> WolframLang has all the characteristics of LISP:

seems you either don't know what lisp is or you've never written mathematica


The content on the pages clearly explains the differences.

Mathematica is a symbolic language based on 'rewriting'. There are other examples - Prolog, a logic language, would be one. Also most other computer algebra systems are in this category, similar to Mathematica: Macsyma/Maxima, Axiom, ...

> WolframLang has all the characteristics of LISP

It has many, but there are a lot of differences, too.

The big difference is the actual engine. Mathematica is based on a 'rewrite system'. It translates expressions by applying rewrite rules.

Lisp evaluates expressions either with an interpreted evaluator or by running compiled code. Lisp has macros, but those transform the code before it is compiled or run. The practical effect is that in many Lisp implementations usually all code is compiled, incl. user code. Mathematica relies on C++ for that instead: most of the UI in Mathematica is implemented in C++, where many Lisp systems would implement that in native compiled Lisp.

Thus the computation is very different. Using a rewrite system for programming is quite clunky and inefficient under the hood. A simple example would be to look at how lexical closures are implemented.

Another difference is that Mathematica does not expose the data representation of programs to the user all the time, where Lisp programs are also on the surface written as s-expressions (aka symbolic expressions) in text.

The linked page from the Mathematica book also claims that Mathematica is a higher level language. Which is true. Lisp is lower level and languages like the Wolfram Language can be implemented in it. That's one of its original purposes: it's an implementation language for other ('higher-level') languages. Sometimes it already comes with embedded higher-level languages. CLOS + MOP (the meta-object protocol) would be an example for that.


> Another difference is that Mathematica does not expose the data representation of programs to the user all the time, where Lisp programs are also on the surface written as s-expressions (aka symbolic expressions) in text.

I have already addressed this: FullForm

https://reference.wolfram.com/language/tutorial/Expressions....

>Thus the computation is very different. Using a rewrite system for programming is quite clunky and inefficient under the hood. A simple example would be to look how lexical closures are implemented.

You're skimming a couple of paragraphs without actually knowing much about Mathematica. It's absolutely not the case that Mathematica is purely a redex system; it's just that it's very good at beta reduction because it has a strong focus on CAS features.


> I have already addressed this: FullForm

No you haven't addressed it. The "Wolfram Language" user typically does not write code in FullForm. It's used as an internal representation.

> it's just that it's very good at beta reduction

and not so good at compiling code...

https://reference.wolfram.com/language/ref/Compile.html

See "Details and Options"


>The "Wolfram Language" user typically does not write code in FullForm. It's used as an internal representation.

I have no clue what you're talking about - it's an available primitive and I use it all the time.

>and not so good at compiling code...

Lol I am 100% sure that the majority of lisps cannot be aot compiled.


> Lol I am 100% sure that the majority of lisps cannot be aot compiled.

Ahead-of-time compiling has been the principal method in mainstream Lisps going back to the 1960's. The Lisp 1.5 Programmer's Manual from 1962 describes ahead-of-time compiling.

The curious thing is how can you be "100% sure" in making a completely wrong statement, rather than some lower number, like "12% sure".


>The curious thing is how can you be "100% sure" in making a completely wrong statement, rather than some lower number, like "12% sure".

The reason is very simple and surprisingly straightforward (but requires some understanding of compilers): dynamically typed languages that are amenable to interpreter implementations are very hard to compile AOT. Now note I have since the beginning emphasized AOT - ahead of time - but this does not preclude JITs.

But in reality I don't really care about this aspect - it was the other guy who for whatever reason decided to flaunt that clisp can be compiled when comparing it with Mathematica.


For someone playing with Mathematica, you have a curious intellectual process. To be clear, I'd rather have you doing that than hocking loogies at cars from an overpass.


> I have no clue what you're talking about

That's not good. Try again.

In Lisp adding two numbers looks like this in source code: (+ 1 2)

  CL-USER 41 > (+ 1 2)
  3
If I quote the expression and evaluate it, the result is (+ 1 2)

  CL-USER 42 > (quote (+ 1 2))
  (+ 1 2)
Thus in Lisp the textual representation of code and code as data are the same.

Not so in "Wolfram Language": a + b has a FullForm which looks different. The user does not write ALL of the code in FullForm notation.

Source notation

  a + b
FullForm

  Plus[a, b]
Lisp:

Source notation

  (+ a b)
FullForm

  (+ a b)
Can you see the difference?

> Lol I am 100% sure that the majority of lisps cannot be aot compiled.

I'd expect that they can. That's a feature since 1962. SBCL for example does AOT compilation by default, always.

  * (disassemble (lambda (a) (+ a 42)))
  ; disassembly for (LAMBDA (A))
  ; Size: 36 bytes. Origin: #x7006DC83B4                        ; (LAMBDA (A))
  ; B4:       AA0A40F9         LDR R0, [THREAD, #16]            ; binding-stack-pointer
  ; B8:       4A0B00F9         STR R0, [CFP, #16]
  ; BC:       EA030CAA         MOV R0, R2
  ; C0:       8B0A80D2         MOVZ R1, #84
  ; C4:       3CAA80D2         MOVZ TMP, #1361
  ; C8:       BE6B7CF8         LDR LR, [NULL, TMP]              ; SB-KERNEL:TWO-ARG-+
  ; CC:       DE130091         ADD LR, LR, #4
  ; D0:       C0031FD6         BR LR
  ; D4:       E00120D4         BRK #15                          ; Invalid argument count trap
  NIL
Looks like native ARM64 code to me.


> FullForm Plus[a, b]

How can I make this any more clear? You are able, in Mathematica, to write Plus[a, b] with your own fingers on your own keyboard and it will be interpreted as the same thing as a+b

> I'd expect that they can.

Clisp is not the only lisp - I can name 10 others that cannot be compiled.


If we count everyone's one-weekend project that evaluates (+ 1 2) into 3, then there are probably thousands of Lisps that cannot be compiled. So what?


Then the person should spend another weekend and implement a compiler for it.


> You are able, in Mathematica, to write Plus[a, b] with your own fingers on your own keyboard and it will be interpreted as the same thing as a+b

Sure, but it is not Mathematica's InputForm:

https://reference.wolfram.com/language/ref/InputForm.html

The majority of code is not written in FullForm. In Lisp, 100% of the code is written in s-expressions.

> Clisp is not the only lisp - I can name 10 others that cannot be compiled.

Typical Lisp and Lisp dialects all can be compiled: Common Lisp, Emacs Lisp, ISLisp, Scheme, Racket, ...

Which Lisps can not be compiled?


>Racket

Do you really know what you're talking about here?

https://docs.racket-lang.org/raco/make.html

>The raco make command accept filenames for Racket modules to be compiled to bytecode format.

That's not a compiler...

I don't claim to be an expert on lisp, so further googling I find

https://racket.discourse.group/t/chez-for-architectures-with...

which has some discussion about this and that native backend.

Suffice it to say I am not any more confident that being compilable is somehow intrinsic to lisp.


From the Racket documentation:

https://docs.racket-lang.org/reference/compiler.html

"18.7.1.2 CS Compilation Modes

The CS implementation of Racket supports several compilation modes: machine code, machine-independent, interpreted, and JIT. Machine code is the primary mode, and the machine-independent mode is the same as for BC."

CS is the new implementation of Racket on top of the Chez Scheme runtime. Chez Scheme is known for its excellent machine code compiler.

"Machine code is the primary mode"

> Do you really know what you're talking about here?

Read above.


If you have time to research Lisp implementations until you gather ten that don't have compilers, you might want to take a few seconds to visit https://clisp.cons.org to find out what Clisp means.


> Lol I am 100% sure that the majority of lisps cannot be aot compiled.

    CL-USER> (defun foobar (x) (1+ x))
    FOOBAR
    CL-USER> (disassemble #'foobar)
    ; disassembly for FOOBAR
    ; Size: 35 bytes. Origin: #x5365BF44                          ; FOOBAR
    ; 44:       498B4510         MOV RAX, [R13+16]                ; thread.binding-stack-pointer
    ; 48:       488945F8         MOV [RBP-8], RAX
    ; 4C:       BF02000000       MOV EDI, 2
    ; 51:       488BD3           MOV RDX, RBX
    ; 54:       FF14251001A052   CALL QWORD PTR [#x52A00110]      ; SB-VM::GENERIC-+
    ; 5B:       488B5DF0         MOV RBX, [RBP-16]
    ; 5F:       488BE5           MOV RSP, RBP
    ; 62:       F8               CLC
    ; 63:       5D               POP RBP
    ; 64:       C3               RET
    ; 65:       CC10             INT3 16                          ; Invalid argument count trap
    NIL
    CL-USER> 
There you go: #'FOOBAR is AOT-compiled down to four MOVs, a CALL, two MOVs, a CLC, a POP and a RET.


> "seems you either don't know what lisp is or you've never written mathematica"

Meanwhile, you brought up examples from Mathematica docs that talk about head/tails (car/cdr) but by that logic, Python is a Lisp too because you have:

   list[0]
and

    list[1:]
Maybe your Clojure/Racket experience wasn't enough to teach you what the essence of Lisp was. From your first link:

"Mathematica expressions are in many respects like LISP lists. In Mathematica, however, expressions are the lowest-level objects accessible to the user. LISP allows you to go below lists, and access the binary trees from which they are built."

That right there is telling you that Mathematica is not a Lisp.

Edit: Corrected the Python list example.


I'm sorry but are you really going to pretend like car and cdr are not core to lisp?

>list[0] and list[-1]

That is not car and cdr; closer would be list[0] and list[1:] if lists were cons in python.

>Mathematica expressions are in many respects like LISP lists. In Mathematica, however, expressions are the lowest-level objects accessible to the user. LISP allows you to go below lists, and access the binary trees from which they are built

This is a quote from 1986. I wonder if the language has changed much since then

https://reference.wolfram.com/language/tutorial/Expressions....


Read PG's "Roots of Lisp" and you'll understand what I mean.


I believe that is an argument from authority (if I remember correctly).


A REPL isn't just a REPL. You are comparing modern day Toyota Corollas to a spaceship sent from the future to the 80s. One is just on a different level of radical. At least when it's backed by SLY or SLIME.


here is the list of slime features on the slime webpage

>Code evaluation, compilation, and macroexpansion.

>Online documentation (describe, apropos, hyperspec).

>Definition finding (aka Meta-Point aka M-.).

>Symbol and package name completion.

>Automatic macro indentation based on &body.

>Cross-reference interface (WHO-CALLS, etc).

https://slime.common-lisp.dev/

and i'm still wondering which of these things i can't do in a python repl? note macroexpansion doesn't count because that's not a dimension of the repl.


>Code evaluation, compilation

I couldn’t debug the following in pycharm and add the missing function at runtime, or could i?

    def interactively_writing_code():
        this_doesnt_exist_yet()

    interactively_writing_code()
I don’t think i can patch a function at runtime without losing state either in python - the act of redefining the function causes the variables to be reset but in lisp the bindings are untouched.


I just did it - it works perfectly fine. Debug-run your code, an exception will be thrown at the call site, step up one frame from the exception (ie module level), define the missing function, call again and it succeeds - all without leaving the same repl instance. Don't believe me? Try it.

I'll say it again: you guys are in plain denial not about python or lisp as languages but about how interpreters work. There's just nothing more to be said about this dimension of it.


What's being asked is, after defining the missing function, whether it's possible to clear the exception and continue the execution without having to restart from the beginning. This is very useful when you hit an exception after 10 minutes of execution. (This is a real usecase which would have saved me untold hours.)

I hope it's possible somehow, but if you just load pdb (e.g. with %pdb in ipython), pdb is entered in post-mortem mode, from which it's impossible to modify code/data and resume execution. Setting a breakpoint (or pdb.set_trace()) would require knowing about the bug ahead of time. Does it only work when interrupting with a remote debugger rather than on exception?

However, wouldn't it be impossible if the interpreter unwinds the stack looking for exception handlers before finding that there are none? In other languages/VMs such as SBCL, the runtime can look up the stack for handlers and invoke the debugger before destructively unwinding.


The other guy up above claims this is a feature unique to calling functions, rather than all error states, and that the lisp runtime specifically guards against this. If that's the case then my answer is very simple: it would be trivial to guard function calls (all function calls) to achieve the exact same functionality in python. I'm in bed but it would literally take me 5 minutes (I would hook eval of the CALL_FUNCTION opcode). Now it would be asinine because it's a six-sigma event that I call a function that isn't defined. On the other hand, setting a breakpoint and redefining functions as you go works perfectly well and is the common case and simultaneously the kind of "repl driven development" discussed all up and down this thread.


Thank you, you're very helpful despite this raging flame war. I'm glad to hear you can hook opcodes like that, then you really can do anything. And I really need to give "set a defensive breakpoint and then step through the function" an honest go. Now that you say it, I realise I haven't.


>I'm glad to hear you can hook opcodes like that, then you really can do anything

Just in case someone comes around and calls me a liar: the way to do this is to spread the bytecodes out one per line and set a line trace. Then when your bytecode of choice pops up, do what you want (including manipulating the stack) and advance the line number (CPython lets you manipulate the line number).


Calling again and continuing are not the same thing. Sure, with the above trivial example it is. But if the parent function has non idempotent code before calling the missing function (like doing some global change / side effects), then calling again will give a different result than just continuing from the current state.

So is it possible to define the missing function and continue from the same state in Python? I don't think so, but I'm not a heavy Python user (just for small/medium scripts).


>So is it possible to define the missing function and continue from the same state in Python? I don't think so, but I'm not a heavy Python user

This is a pointless debate - someone has to catch the exception, save caller registers, handle the exception (if there's a handler) or reraise. Either you have to do it (by putting a try except there) or your runtime has to be always defensively saving registers or something. Lisp isn't magic, it's just a point on trade-off curve and I have without a shadow of a doubt proven that that point is very close to python (wrt the repl). So okay maybe clisp has made some design decisions that make it a hair more effective at resuming than python. Cool I guess I'll just ignore all the other python features where there's parity or advantage because of this one thing /s.


I'll take this as an answer to my sibling comment that the answer is "No". I'm really sad CPython can't do that, but maybe some other Python can. It shouldn't necessarily be any slower for the interpreter to figure out where to jump to before saving the execution trace and jumping.

It's not "pointless", I was tearing out my hair and losing days because I couldn't do this in CPython. Yes, I'd much rather use Python than Common Lisp regardless.


This works in compiled Lisp code.


It works in code compiled from c++ too: define and associate a signal handler for sigkill, call a function whose symbol can't be runtime resolved by the linker, sigkill is sent and caught, define your function (in your asm dejure), patch the GOT to point from the original symbol to wherever the bytearray is with your asm, and voila.

I'll say it again: what exactly do you think your magical lisp is doing that defies the laws of physics/computing?


You act as if you know better than everyone in this thread and yet you don't know the 101 level fact that you can't catch SIGKILL.

Maybe try to relax, and learn some humility.


> It works in code compiled from c++ too: define and associate a signal handler for sigkill, call a function whose symbol can't be runtime resolved by the linker, sigkill is sent and caught, define your function (in your asm dejure), patch the GOT to point from the original symbol to wherever the bytearray is with your asm, and voila.

I don't need to do anything like that in Lisp. I just define the function and RESUME THE COMPUTATION WHERE IT STANDS in my read eval print loop. << important parts in uppercase.


Do you think the magic fairies are doing it for you? Your interpreter/runtime is still doing it whether you're aware of it or not.

My point is very simple: I can do it too, in any language I want, and so there's nothing special about lisp.


> My point is very simple: I can do it too, in any language I want, and so there's nothing special about lisp.

The big difference is: "I can do it too" means YOU need to do it. Lisp does it for me already, I don't have to do anything. I don't want to know what you claim you can do with C++, show me where C++ does it for you.

Telling me "I can do it too" is not a good answer. Show me where the language implementation (!) does it for you.



So you update a single function without updating the global state, using a vile hack. In CL the entire state is saved.


If you're into 3D graphics, this could be a fun Common Lisp codebase to look at. I have tried to keep it simple and comprehensible.

https://github.com/kaveh808/kons-9


Look up Kaveh’s CL tutorial videos on YouTube. They’re really good.


I think you are replying to him.



Well, did you watch the recommended tutorials? Or are you one of those artists that don't watch their own work :-)?


If that was intended for me, I rarely watch my own videos. Only when I want to check on something. Don't like the sound of my voice. :)


As a Clojure dev, break loops and REPL-driven workflows sound wonderful, and something we could definitely benefit from, which would make it more like front-end coding with JS/TypeScript using the browser’s awesome debugging tools. Sadly, the state of tooling and community support for the Clojure ecosystem seems to be pretty lackluster at present.


Clojure can kinda-sorta simulate the true REPL workflow, if you're making something like a web server where deep calls down the code hierarchy only happen with each request. So you can rewrite and reload various functions while the server is still running and make requests from your browser again. The caveat is that eventually these redefinitions and overwrites pollute the namespaces and eventually something will break, at which point you reload your server.


I love CL and I really miss it when i'm doing something else. I mean, many things are such a pain in 'modern' languages that it's not even funny, when you compare it to the Lisp experience of even decades ago.

There are many cons, but those are simply not as bad as most of the pure technical language/dev env cons in almost everything else. Sure Python & JS have more uptake, more libraries etc, but the experience of developing for them is so much worse. IMHO of course. I have been doing a lot of languages over the years including C#, TS, Py, Hs and more esoteric ones, but I keep coming back to CL (SBCL + emacs + Slime) when I get seriously angry about stuff that is missing or plainly bad in those languages. It makes me relaxed and convinced there is some good in the world after all.

I am currently raising for a product we (foolishly so) bootstrapped in Typescript but now we will, for a launch version, redo it in CL. Meaning I get to work with / in CL (and all of the fun stuff; implementing DSL, code generation, working with macros, implementing a static type solver etc) for the coming 3-5 years before we launch. Lovely.


> There are many cons

Can't have Lisp without cons. (sorry)

What do you miss in Lisp when working in C#?


I would say:

1) painless debugging with Emacs/Slime vs Rider/VS/VSCode

2) performance & consistency of it; SBCL is fast and remains that way, I can leave emacs/slime running for months and it doesn't degrade; vs/rider... they really burn a hole through my laptop and aren't even that good at most things in comparison, even on really old computers

3) Re-eval; you can re-eval a region, function, file, last expression etc, during running and/or debugging, which is very flexible; if something is wrong, change it (or write the same function next to it with a fix; very handy), re-eval, and when happy, clean up and save

4) data formats... We have had, since 'forever', 'proxy' tooling that converts any incoming and outgoing JSON to and from Lisp lists. While in code, we only work with s-expressions and it makes life so easy.


Why is Typescript unsuitable for the product?


I see a lot of “coding” talk in the blog and comments from the author here, but few mentions as to what kind of software they’re building or what use cases they’re targeting.

My hot take is that the reason functional programming never took off is that, while it certainly is fine for writing programs, most software these days is not “program running locally on my pc/server from the command line until it completes” and is instead “program that starts, reacts to input from user, then gets closed by the user” or “program that starts, then responds to network or other automated I/O (to serve web pages, to monitor something, to emit logs, etc) then stops when the other software tells it to”. This is a lot harder to do in a purely functional style, or at least it is in most opinionated functional programming implementations I’ve used, because you’re no longer “just” evaluating some expression but instead initializing state, reacting to I/O, then updating state and/or performing further I/O potentially while using parallelization to perform monitoring/listen for other things/perform further I/O and state updates.

Of course it’s not impossible to do these things with Lisp but from my couple of semesters of exposure of FP in undergrad and use of FP features in C++ and Scala professionally to solve these kinds of problems… it seems quite hard to get FP to work for these applications, and that lack of suitability is what discourages me from diving more fully into FP


> I see a lot of “coding” talk in the blog and comments from the author here, but few mentions as to what kind of software they’re building or what use cases they’re targeting.

Good point! This is all currently just a hobby. For Common Lisp specifically, the only things I've produced are a (mediocre) Battlesnake client and a (now defunct, as of yesterday) multiplayer word scramble game. Neither of these really derives much benefit from being created in Lisp, but I learned a lot along the way (which was really the point).

Unrelated to Common Lisp, I've found myself often needing to write code that generates code. This is an area where I suspect Lisp will shine, although I haven't had a chance to give it a try yet. Two examples from recent projects (which I tackled before ever thinking about using Common Lisp) are:

* Generating code to validate a particular JSON Schema (used in a static site generator)

* Generating JSX from Markdown (used for story content in a programming game)

To say nothing of the innumerable C macros I've written in my lifetime :)


Thanks for the answer! Your word scramble game in particular seems like something that approximates my “maybe not a good fit for FP” bucket. Do you plan on sharing it on GitHub or describing the challenges you ran into?

Completely agree code generation is where I expect Lisp to perform the best. Though, I looked it up and apparently Markdown is not context-free so.. curious as to the challenges that introduces as I figure FP could zap through parsing a CFG but really struggle with state for something not context free


Note that Common Lisp doesn’t require functional programming. Mutation, side effects, etc. are fine. I just write imperative code for the most part.

My code was quick and dirty, so I don’t think anyone will learn anything from it, but it’s here: https://github.com/jaredkrinke/thirteen-letters


Note that Common Lisp contains CLOS, which is one of the most advanced object-oriented systems even now. Most Lisps are not functional like Haskell is.


I read this Wikipedia article and some examples: https://en.m.wikipedia.org/wiki/Common_Lisp_Object_System

Yes, the article calls it powerful, but aside from the ability to update classes and their functions at runtime - which, maybe I’m missing the utility of, so I won’t say it's useless, although in my experience a SharedInstanceSingleton or LocalImmutableConfig or BatchedMonitoringEvent wouldn’t need it - it's “just” dynamically resolving the method implementation to use based on argument types and letting you provide an order of precedence for diamond inheritance.

I think it does solve one problem I have - with some solid tooling and a guarantee that class updates at runtime don’t disrupt ongoing calls, it might let me patch running binaries without doing a full update. Though, the cost is that I seem to incur a few extra lookups or even sorts on each dynamically dispatched method, to resolve the implementation? Besides that it doesn’t seem to really solve most problems I have regarding I/O or state - having the option to update some of these at runtime is interesting but it seems like something I’d only want to selectively enable
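
For concreteness, the multiple-dispatch part looks roughly like this (class and method names invented for illustration); the applicable method is chosen from the classes of all arguments, not just the first one.

    (defgeneric collide (a b))

    (defclass ship () ())
    (defclass asteroid () ())

    (defmethod collide ((a ship) (b asteroid)) :ship-hits-asteroid)
    (defmethod collide ((a asteroid) (b ship)) :asteroid-hits-ship)
    (defmethod collide ((a ship) (b ship))     :ship-hits-ship)

    (collide (make-instance 'ship) (make-instance 'asteroid))
    ;; => :SHIP-HITS-ASTEROID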


> in my experience a SharedInstanceSingleton or LocalImmutableConfig or BatchedMonitoringEvent wouldn’t need it

When you can't have something, you often come to feel that you don't need it.


Elixir uses functional programming and it's excellent for web development and whenever you want a fault-tolerant system.

You also don't need to throw out all the good features of the other styles, as parts of functional programming are becoming more and more common in "regular" languages too. Rust uses functional patterns in many cases for instance.

And you can also write Lisp in an OO or imperative style if you want, it's no Haskell.


It took me a while to grok monads, and the IO monad, and longer still to figure out how to compose them in safe ways, and manipulate execution order, etc. But: now I can write typesafe applications, and I produce fewer bugs when I work in non-FP languages (I get paid to write Java.) Lisp is a starting point. Haskell is where it's at. I recommend learning the style, even if you never produce production code in it.


Yes, Haskell is magnificent for learning FP. I used to think Haskell was terrible for IO, but my tune has changed dramatically since I started working with it full time.


Let’s say I want to do something simple but slightly beyond the scope of a traditional toy demonstration:

* Read some environment variables and a local file

* Start a monitoring thread that consumes from a channel or something similar, then every X s or X events writes to a local temp file and then sends a request batching some metrics to an external system

* Configure and start an http server

* Said server has a handler that 0. Starts a timer 1. loads, then increments an atomic “num requests served until now” variable 2. uses synchronization to lock on a list or ring buffer containing the last 5 requests’ user-agent headers 2.5 copies the current last 5 values, replaces oldest one with the one from the handles request, unlocks 3. generates a json response containing like “num_so_far: x, last5agent: [..], “some_env_var”:..” 3.5 stops the timer 4. write request user agent and time interval to monitoring thread’s channel 5. write response and end handling

* server’s gotta be able to do concurrency > 1 with parallelism

* On sigterm set the server to a state that rejects new requests, waits for existing requests to complete, then flushes the monitoring channel

I’d consider this a trial run of some of the most basic patterns commonly used by networked software: init io, immutable shared state, atomic mutable shared state, synchronization-locked shared state, http ingress and egress, serialization, concurrency, parallelization, background threads, os signals, nontrivial cleanup. In Go, Java, or C++ I could write this with my eyes closed. How easy is it in Haskell or Lisp?

If you know of any demos or repos that do something like this - not a pure toy or barebones demo, but not a huge task all in all- in either I’d be interested in seeing what it looks like.


There are bunches of web frameworks and various support libraries for both Haskell and for Common Lisp. They'll range from simple use cases to more complete and/or opinionated in style, depending what your needs are. For Haskell examples, Servant is used for web APIs, where Yesod is a larger all-around framework.

https://www.servant.dev/ https://www.yesodweb.com/


I'll cover the Haskell side because I'm more familiar with its library ecosystem:

> Read some environment variables and a local file

    import System.Environment (getEnv)
    import System.FilePath ((</>))
    main = do
      dir <- getEnv "WHATEVER_DIR"
      contents <- readFile (dir </> "foo.txt")
      putStrLn ("Had " ++ show (length (lines contents)) ++ " lines")
> Start a monitoring thread [...]

    import Control.Concurrent
    import Control.Concurrent.Chan
    import Network.HTTP
    -- also more stuff...

    monitoringThread :: Chan String -> IO ()
    monitoringThread chan = do
      file <- openFile "log.txt" AppendMode
      forever $ do -- no $ needed if you pass -XBlockArguments [0, 1]
        batch <- replicateM 5 (readChan chan)
        let chunk = unlines batch
        hPutStr file chunk
        simpleHTTP (postRequestWithBody "" "text/plain" chunk)

    main :: IO ()
    main = do
      logChan <- newChan
      void (forkIO (monitoringThread logChan))
      -- ...
      forever $ do
        threadDelay (1000 * 1000) -- 1M usec
        writeChan logChan "Hello, world!"
> Configure and start an http server

    {-# LANGUAGE OverloadedStrings #-}
    import Network.Wai
    import Network.Wai.Handler.Warp
    import Network.HTTP.Types (status200)

    main = run 8000 $ \req respond ->
      respond (responseLBS status200 [] "Hello, world!")
> Said server has [...]

Yeah, this is long. If you're just getting the current time with the timer, that's here[2]; synchronize across threads with MVars[3] or STM[4]; JSON is in aeson[5], which should feel broadly familiar if you know Rust's serde.

> server’s gotta be able to do concurrency > 1 with parallelism

Yep, GHC Haskell has _excellent_ concurrency support on top of a parallel runtime.

> On sigterm set the server to a state that rejects new requests, waits for existing requests to complete, then flushes the monitoring channel

I haven't personally tried this, but this[6] function sounds like... exactly this, actually, so I think its example should suffice?

On two separate notes:

- Common Lisp and Python 3 are a _lot_ closer than Common Lisp and Haskell, or even Python 3 and JavaScript; the Python 3 object model is very close to Common Lisp's, and Common Lisp is not particularly pure (setf isn't unidiomatic by a longshot), and supports a very non-functional style of programming (it has gotos!).

- "Haskell is worse at IO than other high-level languages" isn't particularly true. What _is_ true is that Haskell has the same "function coloring problem" as JavaScript (Haskell has the IO monad, JavaScript has the Promise monad); Haskell also has a "uses funny academic words" problem (well, debatably a problem...) which I think confuses the issue.

[0]: https://ghc.gitlab.haskell.org/ghc/doc/users_guide/exts/bloc...

[1]: Haskell has a spec, one "main" implementation (GHC), little spec-committee activity, and a respect for that implementation not superseding the spec; many improvements become language extensions (-X flags or {-# LANGUAGE #-} pragmas), so when you invoke GHC you're getting a by-the-spec implementation by default.

[2]: https://hackage.haskell.org/package/time-1.12.2/docs/Data-Ti...

[3]: https://hackage.haskell.org/package/base-4.18.0.0/docs/Contr...

[4]: https://hackage.haskell.org/package/stm-2.5.1.0/docs/Control...

[5]: https://hackage.haskell.org/package/aeson-2.2.0.0/docs/Data-...

[6]: https://hackage.haskell.org/package/warp-3.3.28/docs/Network...


I don't have time to try it atm but this looks like it would be quite easy to implement in Clojure (that falls under "Lisp", right?)


Common Lisp isn't a "purely functional" language, it supports every paradigm. It allows silly things like...

  (let ((pair (cons 1 nil)))
    (setf (cdr pair) pair)
    (list (first pair) (second pair) (third pair)))
  ;; => (1 1 1)


I'm using Lisp for simulation. It's really wonderful being able to poke and prod long-running computations while they run. I missed this too much when I tried using Julia.


WhatsApp and Discord run on functional Elixir/Erlang. I heard they are pretty big and not hobby projects.


Functional programming took off big time. Just look at the JavaScript ecosystem.


When I want "this will run forever," I write it in Rust.

When I want "this will compile forever," I write it in ANSI C.

When I want "this will live forever," I write it Python 2.7 and make it the backbone of the entire org's infra templating. Bonus points if it's a custom Ansible module.


> It's 2023, so of course I'm learning Common Lisp

So am I although none of the features mentioned seem useful to me so far (perhaps I will change my mind once I become fluent). I just hope Lisp will make it easier to express my thought in code, minimizing/abstracting all the cruft/boilerplate. To me Lisp expressions seem the most natural way of expressing a thought.

As I primarily am interested in writing GUI apps I hope to master Clog or find/develop a good wrapper around some GUI toolkit.


The scoop: Scheme and Janet are great, but the author wants a more standalone language. What makes the difference is the breakloop, a full-blown REPL that opens when an error in a program occurs. Not a stacktrace, not a debugger; just build from the point where it's currently broken.


This sounds so amazing, why is Common Lisp not the most popular language out there? (asking as someone who almost never writes code)


Because language popularity is, at best, loosely correlated with any intrinsic qualities of the language itself.


See also advertising. C++ and Java had enormous advertising budgets, while Common Lisp had virtually none. For years, virtually every programming book and magazine was touting C++ and then later Java. Every conference, every keynote, everything a CTO might ever read or notice was telling them to use C++ or Java.


Sounds like a religion.

https://www.nicklitten.com/if-programming-languages-were-rel...

Unsurprising that C/C++/Java are all put in the 'religion of the book' family, with great emphasis on proselytization.


C++ itself never had a marketing budget! The nearest you might find is marketing for implementations back when people paid for programming languages, but the only surviving one of those is really Visual Studio.

Lisp has had decades to break out of its niche if it delivered a really advantageous solution, but somehow that never happened.


> Lisp has had decades to break out of its niche if it delivered a really advantageous solution, but somehow that never happened.

I think a huge part of it is that it is not immediately obvious that one needs what Lisp offers, and by the time the system has grown to the extent that the need is obvious, it has also grown to the extent that one no longer sees the forest for the trees. One doesn’t think ‘oh man, I need garbage collection’; one thinks, ‘oh man, I need to manage malloc and free better!’. One doesn’t think, ‘oh man, dynamic scope would really fit this problem well’; one thinks ‘oh man, I need dependency injection.’ Peter Norvig famously noted that 16 of the original 23 design patterns were invisible or simpler in dynamic languages such as Lisp†. Heck, there was a time when one couldn’t rely on recursion, or even conditionals! But the programmer who has managed to get stuff done without recursion, or without conditionals, or without macros doesn’t really see the point. He’s even worried: those things may add too much expressivity to the language. Why, folks could write unmaintainable code with them!

Of course, folks write unmaintainable code without them, too …

Anyway, I think a huge issue is one of education and experience. Ours is a massively growing field. The vast majority of folks are juniors, and don’t know any better; a portion of their education was miseducation. The seniors often have one year of experience, twenty times (rather than twenty years of experience). Objective standards are rare to nonexistent. Norms and standards are absent.

But yeah, when I’m working on a large project in a language other than Lisp, I often think, ‘man, this would be so much easier in Lisp!’ or even ‘man, this would be practical in Lisp!’ (because anything is possible in a Turing-complete language …).

†: https://norvig.com/design-patterns/ppframe.htm


> one no longer sees the forest for the trees

One no longer sees the forest for the fire.


Yes, and whose advertisements do you think show up in every single one of those magazines? Which implementations get mentioned by every single C++ book? Which organization sponsored every single C++ conference? Don’t forget that they had stiff competition from the advertising budgets of other large companies, such as Oracle and IBM.

Also, don’t forget that Lisp machines were once the most coveted development machines on the market. But Symbolics had to develop not only the language and IDE, but also the OS, the hardware, the microcode, and everything else all at once. It’s pretty telling that they soon began running Unix (on a separate processor) and then their next product was an add–in card for an Apple Macintosh II containing a Lisp processor ASIC. By then the C++ hype train was gathering steam and the AI winter had begun. Symbolics didn’t survive, and their direct competitor LMI had even less chance. So it’s not that Lisp offers no advantages, it’s just that market conditions killed off the companies that were offering it. Note that these market conditions were created by advertising and shifting public perception.

I thus return to my thesis, which is that the market success of a language has little, if anything, to do with the advantages of the language. Instead marketing and advertising rule the day.


> But Symbolics had to develop not only the language and IDE, but also the OS, the hardware, the microcode, and everything else all at once

This was forty years ago. Doing LISP advocacy like this just makes people sound like they're that Japanese guy who refused to surrender until the 1970s. The world has moved on; there have been other opportunities; and LISP has not won them either.


The timeframe doesn’t matter. What matters is that C++ triumphed not because it was a better language, but because it was sold better. It had better advertising.


I was going to disagree with you, until I re-read and noticed the word "intrinsic".

I would say that language popularity is highly correlated with the actual usefulness of the language. But "actual usefulness" covers far more than the "intrinsic qualities" of the language. It also covers the scope and quality of the standard libraries, the third-party libraries, the available tools like compilers, IDEs, and debuggers (which may be third-party), available documentation and training, and people available to hire who know the language. Of those items I listed, the only parts that could be considered "intrinsic" are the standard libraries and the tooling that comes with the language by default.


Eventually you need to work with other people, and using a common time-shared or multi user session is unlikely. Now consider that lisp images generally can't be easily diff'd or merged.

And with that the edit-and-continue paradigm loses much of its value. If you have to commit changes to a shared source file anyhow then you'll be not much worse off with debugging a core dump.


People say this a lot, but they fail to take into account that you can debug your server live as it continues to handle normal traffic. Even if you don’t deploy changes via the REPL, merely debugging the problem in a REPL without restarting anything is a huge win.


Lots of languages that are not lisp have this ability.


I don’t think that they do. I know that Erlang has something similar; you can reload a module and it will gradually replace the old code as processes are replaced. In principle you could debug a single thread in a C (or C++) program without stopping the others, and some IDEs will let you edit the code and recompile while the program is running (they patch out the old function definition so that it jumps to the new one instead), but good luck doing that in production.

But don’t forget that in Common Lisp, you can redefine classes at run time as well as functions. All existing instances of the class will be updated to the new definition, and you can provide the code that decides how the old fields are translated into the new ones if the default behavior is insufficient. Good luck doing that in C or C++.
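
To make that concrete, here is a minimal sketch (the class and slot names are made up); update-instance-for-redefined-class is the standard hook for mapping old slots onto new ones:

    ;; Original class definition, with one live instance.
    (defclass point ()
      ((x :initarg :x :accessor x)
       (y :initarg :y :accessor y)))

    (defparameter *p* (make-instance 'point :x 3 :y 4))

    ;; Redefine the class at run time: X and Y are replaced by RADIUS.
    (defclass point ()
      ((radius :initarg :radius :accessor radius)))

    ;; Decide how the old slots map onto the new layout; PLIST carries the
    ;; discarded slot names and values.
    (defmethod update-instance-for-redefined-class :after
        ((instance point) added discarded plist &key)
      (declare (ignore added discarded))
      (setf (radius instance)
            (sqrt (+ (expt (getf plist 'x) 2)
                     (expt (getf plist 'y) 2)))))

    ;; Existing instances are updated lazily, on their next access:
    ;; (radius *p*) => 5.0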

My favorite story involved a race condition that was discovered in the code running on a satellite, after it had been launched. The software on the satellite was mostly written in Common Lisp (there was a C component as well), so they opened a connection to the satellite, started the REPL, debugged the problem, and uploaded replacement code (which obviously added a lock or something) to the satellite all through that same REPL. While the satellite was a hundred million miles away from Earth, and while it kept performing its other duties. You can’t do that on a system which merely dumps core any time something unexpected happens.


Said "software on satellite" story is from Ron Garret, for anyone interested.

https://flownet.com/gat/jpl-lisp.html

> During that time we were able to debug and fix a race condition that had not shown up during ground testing. (Debugging a program running on a $100M piece of hardware that is 100 million miles away is an interesting experience. Having a read-eval-print loop running on the spacecraft proved invaluable in finding and fixing the problem.


examples please, because so far i have only seen this from common lisp and smalltalk. there is also pike where i can reload classes or objects at runtime, thus avoiding a full restart, but it's not as closely integrated as in smalltalk and you actually have to build your app in a way that allows you to do that.


It's not always usable, but Visual Studio offers this for C# (works most of the time) or C++ (works in fewer cases because of the terrible header model)


but you wouldn't be able to use that on your production server, would you?


JavaScript immediately comes to mind.


how do you do it? in the browser's debugger? maybe, but that is not integrated with your actual source files, so you have to be careful to track your changes and copy them to your source. that may help in some cases but isn't really practical.


Java supports live debugging and profiling.


But that’s not the same thing at all. If you’re debugging an exception in Java, you cannot continue execution as if the exception had not been thrown at all. With Common Lisp’s condition system you can.
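
A minimal sketch of the difference (the function names are made up): the handler below invokes a restart established around the failing form, so execution resumes there instead of unwinding the stack.

    ;; RESTART-CASE establishes a named recovery point around the error.
    (defun parse-entry (line)
      (restart-case
          (if (every #'digit-char-p line)
              (parse-integer line)
              (error "Bad entry: ~S" line))
        (use-value (value)
          :report "Supply a replacement value."
          value)))

    ;; HANDLER-BIND runs the handler before the stack unwinds, so it can pick
    ;; the restart and continue as if PARSE-ENTRY had returned normally.
    (defun sum-entries (lines)
      (handler-bind ((error (lambda (c)
                              (declare (ignore c))
                              (invoke-restart 'use-value 0))))
        (reduce #'+ (mapcar #'parse-entry lines))))

    ;; (sum-entries '("1" "oops" "3")) => 4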


The question was whether you can debug a live service while it's handling live traffic. Not whether you can fix it. Java can definitely do the former, and definitely can't do the latter.


my question specifically was which languages/runtimes allow you to actually make changes to the code in a live process without restarting it.


Your question moved the goalposts. Making changes to a running system wasn't part of db48x's claim to which dleslie responded. It was explicitly excluded, in fact.


ok, fair, but what i am asking about is a feature of common lisp (and smalltalk or pike), so i didn't pay attention to the exclusion. that was not deliberate. my bad. (maybe you could say i moved the goalpost back to the original topic)


Umm.. you can throw an exception, you can return to a previous call frame, you can reload modified classes. If you want unlimited code modification, you can use dcevm https://github.com/dcevm/dcevm

https://www.jetbrains.com/help/idea/altering-the-program-s-e...


Which?


I'm confused. Why aren't you all just working on your own machines?


The classic lisp way is to build a runtime image by editing the image while running it, then dumping a binary. You never specifically need to load a source file.

But you can't easily collaborate with that style of development.
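
For reference, the dump step itself is tiny (a minimal sketch, assuming SBCL; the names are arbitrary):

    ;; Build up whatever state you like in the running image, then:
    (defun main ()
      (format t "hello from the dumped image~%"))

    ;; Writes an executable and exits; running ./my-app later starts at MAIN
    ;; with all of the dumped image's state already in place.
    (sb-ext:save-lisp-and-die "my-app"
                              :executable t
                              :toplevel #'main)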


With lisp you typically develop in source files, versioned with Git. The same as any other language. Source files and live development are not mutually exclusive. SLIME can send code snippets from your file over to the REPL for live development. You get to have your cake and eat it too.

The REPL (or scratch buffer) is typically used for testing/observing. Not the actual source code development. Although it is possible to never write your source code to a file if you're just playing around with a toy experiment.


> The classic lisp way is to build a runtime image by editing the image while running it, then dumping a binary. You never specifically need to load a source file.

Says who?


It's hard to maintain and read. There you go.

I'm not a huge fan of lisp, but I do like the language. Unfortunately, it's way too hard for 90% of developers. They need some more structure so they can think about one thing at a time, which is why C-like syntax won in the end.

All the best developers I know are into Lisp or Haskell (or both). They can crank out ridiculous code which then goes unused because maintenance would be too much of a burden. Sometimes I write some really complex one-liners (which are like 5-10 lines long) to do some tasks using all the possible hacks to avoid having to type an extra character. I might be able to do that, but most developers wouldn't be able to see how the data get transformed and keep all of that in their mind. Whatever I wrote is unmaintainable by most people.

The reality is that the majority of people can't wrap their minds around complex concepts. Which is ok, most developers write a few API endpoints and some UI components, they don't need much to create value.

We can get some good concepts from the functional world and transfer them to C-like syntax languages though. We can even have some of the programmability of Lisp (but not all of it) via macros.


It probably didn't help that a bunch of key Lisp people were leaning hard into proprietary $80,000 minicomputers right around the time that commodity(ish) microcomputers were about to massively explode in popularity.


Lisp is more of a meta-language than a mere language. Since it's homoiconic, you eventually end up developing a domain-specific language that works great for your subject area. It also may make it a bit harder to onboard new team members, because the level of abstraction which you can reach can be really high, all while keeping performance reasonable.

Technically, you could run e.g. a Python program under pdb, break on certain exceptions, and fix things inside a living system. It's just not a customary way to do that.


Only five years ago, CL's web presence was not attractive. This included "official" websites and online documentation (despite all the great books). It's better now (common-lisp.net was reshaped, there's lisp-lang.org, a better Cookbook, the CL Community Spec, more YT tutorials…)

- there is no full-featured web framework (although you can write web apps, of course),

- no satisfactory GUI lib (now: Gtk4, Qt5 (hard to install), IUP, nice-looking Tk themes, more low-level bindings to graphics libraries, etc.),

- the package manager came late,

- good open-source compilers came late,

- less choice in editors (now many: https://lispcookbook.github.io/cl-cookbook/editor-support.ht...),

- and, well, lots of FUD and a language not for everyone.


“Avoid success at all costs”


It's questionable whether it's really much better than just a debugger with a core dump (for what I usually work on, it's not any better). It is, however, a pretty snazzy feature.


with a debugger, after you fix the application, you still have to restart and run it again. the big benefit here is that no restart is required.

smalltalk can do the same btw. i had been working on a small website where a specific request from the browser would fail. instead of sending a failure message, the request would just hang. in the meantime, an error would pop up on the server in my pharo smalltalk window. when i fixed the error, the download resumed in the browser as if nothing had happened other than a delay.


> with a debugger, after you fix the application, you still have to restart and run it again. the big benefit here is that no restart is required.

We like to make sure everything running in prod is verifiably built from source in-repo. So that's the thing, while it's a really snazzy feature for sure, the value over the rest of the world is on the questionable side. At least for our use case, but I think it's true for most use cases.

edit: Also really curious about your smalltalk and pharo experience. Sounds fascinating!


once you've fixed the problem you of course commit the change to the code on disk. there is nothing in the workflow that prevents you from doing that. you are not going to just fix apps in production without running your tests and whatnot. at worst you fix an error in a production system, and run the tests afterwards to make sure everything is clean. but mostly this feature is used during development when your code is still incomplete. not having to restart every time there is an error simply speeds up your development loop.


I am really curious about your experience with Smalltalk and Pharo!


i am really just a beginner with smalltalk and CL. as a vim user i didn't really have a good integration of the CL repl with the editor (there were tools, but they weren't as straightforward to set up as slime would have been). and when i encountered the breakloop i didn't really know what to do and just tried to get out of it as quickly as i could. (exiting vim is easier ;-) the thing that bothered me was that when i change code in the repl without an integrated editor, then how do i keep track of the changes and make sure i don't lose them? but then, i just never tried to set up a proper environment.

in smalltalk on the other hand you get a nice IDE with all the comforts of a GUI. you have your windows where you browse your code neatly structured in classes and methods. there is a window where you run your app and manage your tests which light up red or green if they fail or pass, another which logs error or other print messages, and if an error happens while an app is running a new window pops up, showing you a trace of what was running and a text field with the code that failed, like in a debugger, and right there you can edit the code and resume.

the code is written to your class, and when you go back to your code browser the change is reflected there, and you can commit it to a version control system. pharo btw has pretty good integration with git, and already a few years ago it almost acted like a git gui. it's probably even better now. the primary downside is that the text editor in pharo is simple, like a browser text area, and not a sophisticated editor like emacs or vim.


> [Lisp] the thing that bothered me was that when i change code in the repl without an integrated editor, then how do i keep track of the changes and make sure i don't lose them

> [Smalltalk] the code is written to your class, and when you go back to your code browser the change is reflected there

I feel your pain. "writing the changes back to the source code definition" seemed like a no-brainer desirable feature of a Lisp REPL, yet I could not find a way to do that out of the box using Slime. I'm sure one could program it, however! Bet someone has...


> ... yet I could not find a way to do that out of the box using Slime.

Here's what I use: edit code, save file, tell slime to eval current defun. I haven't yet suffered indiscipline to hook `slime-eval-defun` to call `save-buffer`. Would that work for you?


In my experience, it's definitely better for prototyping because if you hit an error that is difficult to reproduce, you can update your code and try again, without having to try and create reliable steps to reproduce the problem.


Yeah I can see this being a pretty handy feature for prototyping. Otherwise you'll need to, like, catch errors in your main loop to ensure you don't have some program-halting issue while you're working.


Performance, approachability

Someone's going to argue with me. Fair enough. Provide your explanation.


this is an age old argument, but given the popularity of other slower languages, i'd rather think that approachability is the more critical issue.


It’s the issue. The ever popular syntax issue continues to haunt it.

CL has had no real “unknown unknowns” for a very long time. While folks who newly discover it feel they found the gold idol in the jungle cave, the cave is, in truth, well explored, mapped, and documented, but the idol is left behind.

All excuses to not use CL have long been, or have had the opportunity to be, addressed. Today, it’s fast enough, small enough, empowered through utilities and libraries enough, has different build and deployment scenarios to work with a vast array of applications. And yet here we are...still.

ABCL runs on the JVM, which runs everywhere on everything. Clojure is a first-class system on top of the JVM, but has seen no real adoption. Some, to be sure, likely (I have no data) more than CL itself. But it’s still a blip on the radar.

Meanwhile, a bunch of hackers threw together a language sharing many aspects of the core feature set made popular in Lisp and Scheme runtime environments, made it look like an Algol stepchild with curly braces and everything, and since then an entire ecosystem of software has been written (and rewritten) in this system, and its runtime is the focus of some of the largest companies in the history of civilization.

Raise your hand if you think that, had the creators of JavaScript gone with an S-expression syntax instead of a C/Java derivative, we’d be running a VBA clone in our browsers (but nowhere else).

Because at this juncture, THE thing that distinguishes CL and other Lisps from where we are today is the syntax. Every other charm these systems enjoyed has been cherry-picked away.

Advocates say the syntax is not an issue. It’s a feature, not a bug. But the “wisdom of the crowds” has spoken, and they stay away.


It's complicated, but there are likely some main contributing points.

1. Computing is extremely blinkered. More than in other professions, people in computing are largely unaware of what has been done before. If they learn a little bit about what has come before, they look for reasons to be dismissive of it, so they can turn their attention away. They know a few things (languages, platforms, tools) and just live in that world.

2. At any given time, a small handful of things are popular. This changes over time. The amount of stuff we have produced in computing vastly outnumbers what is popular. It's like a game of musical chairs in a packed sports dome, where there are seven or eight chairs on the floor.

3. The field is still growing; there are probably more people who joined in the last 10-15 years, than those who joined the field before that. Almost every newcomer plunges into whatever is popular at the time, and will never look at anything else unless it is new and popular, which will happen at most some 3-4 times in their career before they are out.

Those are generalities. Then there are Lisp specific historic items.

Lisp specifically had a bit of a heyday in the 1970s and into the 80s. People developing Lisp systems were very ambitious and their work eventually demanded hardware that only big companies and well-funded institutions could afford. They did fantastic things, but Lisp did not scale down to the emerging single-chip microcomputer with a small memory (or not in that fantastic form). Typically, Lisp would have liked a few megabytes of RAM compared to tens or hundreds of kilobytes.

The microcomputer was something new and popular, bringing with it new people who had nothing but microcomputer experience. Most of them didn't know anything about Lisp other than reading about it in books or some magazines like Byte and Creative Computing, which is something that only a curious minority would engage in.

Eventually, consumer microcomputers became powerful enough to run Lisp well, but by that time, the people who remembered Lisp were vastly outnumbered by new people.

(Speaking of memory sizes, even GNU's implementation of Lisp, GNU Emacs, was absolutely derided for its memory use well into the 1990s. For instance, one joke interprets its name as an acronym for Eight Megabytes And Constantly Swapping (EMACS). Eight! Not Eight Hundred or Eighty. Eight megabytes is laughably small today; the resident size of a Bash process can easily be that.)

Another problem with Lisp is academia, which has played a role in actively destroying interest in Lisp.

After the downturn in Lisp popularity, schools continued to teach Lisp dialects, but often badly, leaving students with a bad taste. They used scaled down dialects not suitable for software development work, like certain Scheme implementations, and gave students assignments that focused on doing things with recursion and lists, and other nonsense that is far removed from making a text editor or game or whatever.

This practice is still continuing. If you follow the [Lisp] tag on StackOverflow, you will notice that from time to time, students post Lisp homework questions. E.g. "we are required to write a recursive function that removes matching items from a list". The comments will say, there is a built-in function remove, why don't you use that. The student will reply, oh, is that right? But, in any case, we are only allowed to use these five functions: cons, car, atom, ... and we can't use loops, only recursion.

Never in a programming course that used C or Modula-2 or whatever have I had homework forbidding me from using any language statement type, operator, or library function! This is purely a Lisp teaching problem, and it leaves students with wrong ideas and impressions. They might misremember things and spread misinformation like "Lisp has only linked lists and nothing more, and everything must be done with recursion; it is useless".
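
For anyone who hasn't seen this kind of assignment, the contrast looks roughly like this (an illustrative sketch, not an actual assignment):

    ;; What you'd write in real code:
    (remove 3 '(1 3 2 3 4))   ; => (1 2 4)

    ;; What the constrained homework asks for instead:
    (defun remove-matching (item list)
      (cond ((null list) '())
            ((eql item (car list)) (remove-matching item (cdr list)))
            (t (cons (car list) (remove-matching item (cdr list))))))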


As a Clojure dev that sounds intriguing, but what happens in the following case (not very lispy code, but it's just to show what I don't get):

    (do-it (do-it first))
What if (do-it first) works fine, but it's the call to (do-it (do-it first)) that fails?

I get control right where it's broken, so I can fix the do-it defun. Great, I like that. But by fixing it, this means I changed the result of (do-it first).

So the point at which the machine (?) is stopped is a point that's no longer reachable by the current code.

I hope my example is clear enough.

I really don't understand how that works when the fix that would allow you to continue would also change the state at which you're given control to fix things.


Please excuse the really contrived example, but you can do this in Gambit:

  ~ cat do-it.scm
  (define (do-it x)
    (if (> x 0)
        x
        'error))
  
  (define (do-it-fixed x)
    (if (and (number? x) (> x 0))
        x
        'error))
  ~ gsi do-it.scm -
  > (do-it (do-it 0))
  *** ERROR IN do-it, "do-it.scm"@2.7 -- (Argument 2) REAL expected
  (> 'error 0)
  1> ,b
  0  do-it                   "do-it.scm"@2:7         (> x 0)
  1  (interaction)           (stdin)@1:1             (do-it (do-it 0))
  2  ##main                  
  1> ,e
  x = 'error
  1> (set! do-it do-it-fixed) 
  1> ,(c x)
  error
  > (do-it (do-it 0))
  error
Per the Gambit docs[1], "The nested REPL’s continuation and evaluation environment are the same as the point where the evaluation was stopped." The call to ,(c x) is really just calling the reified continuation c with argument x.

[1]: https://gambitscheme.org/latest/manual/#Debugging


you didn't necessarily change the result of (do-it first), you may have, but that just means you introduced another error.

i think the approach here is to accept that you fixed the bug for the second call, but you will still have to go back and retest the first call.


Lisp used to specialize in offering extravagantly expensive features, maybe time travel debugging would be a good addition.


> you didn't necessarily change the result of (do-it first)

You're right.

> i think the approach here is to accept that you fixed the bug for the second call, but you will still have to go back and retest the first call.

Gotcha. It looks like a very useful feature. I may actually just try it to try to understand how it works: especially since TFA says the CL integration with Emacs is good (I happen to be an Emacs user).


> What makes the difference is the breakloop, a full-blown REPL that opens when an error in a program occurs. Not a stacktrace, not a debugger; just build from the point where it's currently broken.

This just makes me wanna bust open a smalltalk image...


This is standard in Gambit Scheme as well.


It supports modifying code in the middle of an error and continuing on? I hadn’t found that in a Scheme before!


I'm sure you can find differences, but here's an example adapted from the docs[1]:

  Gambit v4.9.5
  
  > (let ((x 10) (y (- 1 1))) (* (/ x y) 2))
  *** ERROR IN (stdin)@1.30 -- Divide by zero
  (/ 10 0)
  1> ,e
  x = 10
  y = 0
  1> (set! y 2)
  1> ,(c y)
  4
  >
[1]: https://gambitscheme.org/latest/manual/#Debugging


Isn't all this stuff a vector for malicious code and security vulnerabilities in production?


No.


Can you explain why it is not? Code injection in production environments is generally considered an easy attack vector. Lots of CVEs around this in other language SDKs that have been ironed out over the last decade and a half. I don't think Common Lisp gets "special protection" here, or does it?

Unless you are restricting this to development only, in which case there are a lot of languages other than Common Lisp that support hot reloading/redefinition.


> Lots of CVEs around this in other language SDKs that have been ironed out over the last decade and a half

Like what? The only notable one I know of is Log4Shell. And no one advocates against using Java because of RCE. Nor JavaScript, nor Python, nor Erlang. Compare with C...


You are loading code you wrote, not evaling untrusted user input. Common Lisp is actually safer than a lot of languages here: Java, Python, JavaScript, etc. all do lots of runtime reflection and metaprogramming that lead to vulnerabilities, whereas Lisp metaprogramming happens at compile time and is therefore a lot safer.
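
To illustrate the compile-time point (a small sketch; the names are made up): the code generation below happens at macroexpansion time, long before any untrusted input exists, and nothing is EVALed at run time.

    ;; The backquoted template is expanded when the caller is compiled.
    (defmacro with-timing ((label) &body body)
      `(let ((start (get-internal-real-time)))
         (multiple-value-prog1 (progn ,@body)
           (format t "~&~a took ~,3f s~%" ,label
                   (float (/ (- (get-internal-real-time) start)
                             internal-time-units-per-second))))))

    ;; Inspect the generated code without running anything:
    ;; (macroexpand-1 '(with-timing ("query") (run-query)))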


I'd be interested in whether you considered Guile first and what made you decide in favor of Common Lisp. A few years ago, when I decided it was finally time to learn a Lisp, I looked at a few variants, and Guile seemed to have the benefit of: a fairly vibrant (but somewhat hermetic) online community, a sizable manual that describes the most frequently used APIs (you can learn the language itself very quickly, but it's the knowledge of the APIs you need to do anything in the "real world"), and being actively maintained and extended. So I chose Guile.

For those who don't know these terms: Guile and Common Lisp are two languages in the same Lisp family (simplifying a lot; Guile is an implementation of Scheme, while Common Lisp is a separate standard with several implementations).


When I looked, I got the impression that Guile didn’t run on Windows, and that’s a platform I needed to support.


Let me preface this by saying I used LISP professionally in the '80s for about ten years.

It's a great language. It is right up there at the top of my list with Assembler, APL and Forth as languages that taught me so much more than the typical C-like language path most people are exposed to today. And, yes, I used those languages professionally for years.

I have always said it is important to learn these non-C languages.

However...

> I've spent some time contemplating future-proof programming languages because I want to ensure that code I write will be usable in the future.

I think it is clear that it will not be long until you can use an AI-based tool to translate any program from language A to language B. And, in fact, likely improve, maintain and extend it.

For example, you might be able to have the AI tool write a function or module in assembler targeted at different processors and be able to accelerate critical code in a platform-specific manner that would be almost impossible for most developers to manage and maintain today.

I experimented with some of this using ChatGPT. We built a product using MicroPython that required hard real-time performance. Sure, MicroPython was not the right choice to begin with. This was one of those projects where we walked into something that morphed and we were stuck. Being that I am perfectly comfortable in assembler, I replaced chunks of code with ARM assembly routines. The performance boost was massive, of course.

As an experiment, I wrote a specification for one of those modules and asked ChatGPT to write the code in ARM assembler. It took all of five seconds to get a listing. Let's just say it took me a lot longer. The code was not optimal, yet, it worked just fine. Someone like me, with experience in the domain, could easily take that as a starting point and improve from there. Just for kicks, I asked ChatGPT to write the same code in C, C++, JS, Python, 8080, 8085, 6502, 68K and x86 assembler. That probably took a minute or so. Did not test all of the generated code. All of it looked like it would run just fine.

In other words, I believe that, today, the only reason to pick a language is likely something like: It's what I know and it has the libraries, frameworks and support I need. In some cases, it's because it's the only way to achieve required performance (example: Python is 70+ times slower than C).

Code longevity is not likely to be an issue at all.


I strongly agree. For me, it's the libraries.

Not just having libraries, but having One Obvious Choice. I don't want to compare and contrast libraries, realize that one has sixty percent of what I need, the other has eighty, and they overlap for about forty percent of it.

More and more, I think in terms of algorithms and data structures over anything else. Being able to express those fluently is my focus.

So to bring it around to your comment, what I like to imagine is that someone designs a programming language where the focus is on the ability of the language to be translated to other languages. Then, libraries will be built out: everything that is in standard Python and more. Once a translator is built and tweaked, we could have functional (not like the paradigm) libraries for any language you fancy.

Yes, the translator would need to be more constrained to avoid "hallucination" and I am sure the resultant libraries would be slow, inefficient, and so on, but they would be there. As it stands now, I think there's a lot of rebuilding the wheel in scores of languages. I wouldn't say that the effort is wasted, exactly, but I can imagine talented programmers making better use of their time.


I was going to reply by suggesting emacs lisp as a candidate language, really making it a bet on how long emacs will be around. Will people (commonly) be using emacs in 50 years? I think people will, though I hesitate to say so. If it turns out that we converge on text as a necessary interface to a computer (at least in some cases), maybe the bet pays off.

But I think your idea that the expression of a program will become fungible or machine-translatable is much more salient. Though if the program itself depends on a whole chain of ancient dependencies and idioms (think a VB UI in front of an Access DB), it might run afoul of infinite regress. So, to really future-proof on a long time horizon, it seems you need to be preoccupied with a lot more than the programming language.


> In other words, I believe that, today, the only reason to pick a language is likely something like: It's what I know and it has the libraries, frameworks and support I need.

I would take it even further and say that in the near future, everyone will have their own beloved DSL completely customized to their needs and the AI will be able to translate any code to your favorite DSL. You’ll code and commit the changes and the AI will take care of that and convert it back to other people’s DSLs.


Me too buddy. I'm not even sure how I got to this point, but I can't go back.


Care to share what you’re using CL for?


I automate my workflow for approving RHEL kernel content for customers, keeping the wheels on the delivery of 12 different kernel streams to customers.

It also controls the workflow of delivery of intermediate "hot fix" kernels between scheduled releases when fixes are needed immediately.

I also use it for gathering metrics on gitlab ci and internal pipelines, alerting if the system is stalling or performing outside acceptable limits.


Kudos for the Janet shout out!

It's an excellent little language, and Janet for Mortals is an excellent book to learn it. The author has a great sense of humor.


as a manager type, I wish my engineers were more broadly aware of REPL-style development and the massive productivity boost it can provide for certain kinds of development. I am used to doing a lot of Ruby/Python/JS in a REPL, but the idea of a sort of "break into a repl and implement the missing function and continue" sounds really nifty.


Big fan of LispWorks (yes, there is a free version). Using it in production, as many others do. Using Quicklisp in experiments, not directly in my codebase. Lisp will always be there. Learning Lisp will make you a better programmer, while having fun.


I wonder if LLM assist tools like GitHub CoPilot will further consolidate the programming language landscape. The productivity gains from using a well-supported language might raise the barrier to entry for new or niche ones.


Always been fascinated by Lisp, but I never spent enough time to enjoy its elegance and applications. In the last year I went through Lurk, a Lisp dialect: a Turing-complete programming language for recursive zk-SNARKs. I started to grasp a bit of its potential in solving real-world problems, but still too little to understand why it's so right for some of them. I'm curious what problems you all are solving better because of Lisp today.


Common Lisp is still the most pleasant REPL language. The only complaint I have is that too many function names are taken due to the large spec.


It's fine, you can shadow any function you want: <https://cl-community-spec.github.io/pages/shadow.html>
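
A minimal sketch of what that looks like in practice (the package and function are made up):

    ;; Shadow CL:SEARCH so the name can be reused in our own package.
    (defpackage :my-app
      (:use :cl)
      (:shadow #:search))

    (in-package :my-app)

    (defun search (query table)
      "Our own SEARCH; the standard one is still reachable as CL:SEARCH."
      (gethash query table))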


I am also learning Common Lisp in 2023, and I like it!


I'm sorry about my basic question. Back in the 80s, AI was betting on Lisp machines — https://en.wikipedia.org/wiki/Lisp_machine — now, of course obsolete. Is Lisp still relevant in the AI space?


Not really, no. It’s mostly Python 3. The AI space back then had a heavy emphasis on symbolic AI. Modern deep learning algorithms have very little overlap with that.


I don't see why the author says:

> I had previously abandoned using Scheme because, frankly, I ran out of free time for exploratory programming.

But they find Common Lisp acceptable. In what way are Schemes more "exploratory" than Common Lisp? Isn't that exactly what the author says they like about CL (REPL driven development)?


Sorry that was unclear. What I meant was: a while back, I was exploring Scheme (motivated by SICP) and then ran out of free time. Now, I’ve got some free time again and want to try Common Lisp because of the REPL-driven workflow.

It wasn’t meant to be a comment on Scheme vs. CL.


i found the repl driven workflow intriguing but i could never get into it. i am not an emacs user and the vim integration wasn't as good as slime promises to be and i couldn't really get comfortable running lisp from within vim. not sure, i probably didn't try too hard.

smalltalk on the other hand made this a lot easier. it's not repl driven, but with an actual UI for managing code and handling errors it provides the same ability to fix issues at runtime without restarting, just with a nicer interface for managing the code.


Thanks for clarifying.

I recently went into the lispy rabbit hole for a while.

Scheme is so beautiful.

CL seems more willing to compromise for pragmatism.


Yes, CL is extremely pragmatic. And Scheme was invented specifically as a pedagogical tool, so everything is much cleaner. At least until you install scmutils, and find that it added an entire computer algebra system and physics simulation system and so on. :)


I see, thanks for clarifying!


There is something very playful about lisp that is so intriguing to me. I've come to rely heavily on powerful type systems at work (which is amazing), but lisp feels like unbounded imagination flowing at my fingertips.


It's 2023. It's time to switch [From Common Lisp to Julia](https://news.ycombinator.com/item?id=32745318)


Then Prolog should be somewhere on your list also.


There are plenty of old business systems which are critical, can't be removed or turned off, and use LISP, COBOL, etc. Meanwhile, nothing important uses Clojure or other trendy flash-in-the-pan languages. If you want an interesting project, sure, use Clojure or something. If you want money, learn COBOL.


I hear this a lot, but have never once seen a COBOL job posting.


I think a lot of these jobs go to former employees who now contract.


That’s because gigs in trendy languages come and go, and you might earn good pay for six months, whereas COBOL gigs run for 20 years and pay consistently high salaries.



Doesn’t Walmart use Clojure?


It's not as though the company that bought the consultancy that employs many of the core Clojure people runs a bank with Clojure.


If we're gonna go off what has the most businesses built on it, LISP wouldn't even be in the top 20



