Common Lisp: An Interactive Approach (1992) (buffalo.edu)
119 points by nanna 11 months ago | 92 comments



Steel Bank Common Lisp is what I built my businesses on. Interactive software building feels like magic.

Nothing comes close.

Once I retire and have some more free time I want to make videos and make open source contributions to make it more palatable and sexy for beginners.

Folks are really missing out on a lot of joy and happiness by not programming in Common Lisp but I also understand why.


For me, David Botton [0], with his code, support and videos, is doing very nice work in this direction.

I use SBCL for everything but work, because I can't there yet; we are getting there. But like you say, working interactively and building fast is such a nice experience that it feels like magic, and it's painful returning to my daily work of Python and TypeScript/React. It feels like a waste of time/life, really.

[0] https://github.com/rabbibotton/clog


For someone new who mostly works in web/mobile dev, would you recommend SBCL or Clojure/ClojureScript?


For most people Clojure/ClojureScript is the way to go, as it's practical for building modern software. I'm working on something (but very slowly, as I have a day job) to do this. I wish we had started in Clojure(Script), but that was not my call at the time (it is now).

But in short: Clojure(Script) is currently the most practical way.

Note: there is ABCL for running CL on the JVM, so you don't need Clojure for that (I haven't tried it, but I have friends who use it and say it works well). However, the JS side (and I would need JS and to interact with JS) isn't solved yet, as far as I know. I haven't looked recently, but threads like [0] are not that old, and I can't quickly see that much has changed in that respect. So I would rather opt for what CLOG did and other LiveView-type systems, so it doesn't matter what happens in the frontend. WASM and (because it's still needed for now) a robust JS implementation would be great.

[0] https://news.ycombinator.com/item?id=21535165


ECL runs on wasm via Emscripten.


Sorry, off topic, but recently I tried using ECL as my daily driver after I set up Emacs and SLIME to use it. It was fine for dev as well as deployment. I did go back to using SBCL and LispWorks after this experiment was over.

There is a rich ecosystem of CL implementations. I don’t use it much at all, but ABCL is really interesting also if you work on the JVM.


While Clojure is a great language, I would say it does not offer the same REPL-driven experience as Common Lisp because it runs on the JVM.

Common Lisp (SBCL) and CLOG are worth checking out first.

The magic I described in the original comment exists in Clojure but you can feel its true power in Common Lisp.


It helps when one shells out a license for Cursive, https://cursive-ide.com/


Agreed, but people usually want practical first.


A thing you could do right now is sponsor people who contribute to SBCL or Lisp libraries, or who make videos ;) For example, the person behind the new parallel GC for SBCL: https://github.com/no-defun-allowed, or Shinmera, or more at https://github.com/vindarel/lisp-maintainers. One could also reach out to the CL Foundation.


Yup, that's a great idea, and thanks for sharing that list. I already sponsor a few of them on GitHub.


Just curious, if you don't mind sharing: what's the type safety story with Steel Bank Common Lisp?


SBCL has pretty good inference and compile-time checking for basic types when you declare your types. It's not so good at polymorphic types, custom types, etc. in terms of compile-time checking. But for that, Coalton lets you get something like Haskell or OCaml type checking at the cost of writing code in a more constrained manner.
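A rough sketch of what that looks like in practice (hypothetical function names; the exact warning text varies by SBCL version):

    ;; Declare a signature, then let SBCL check callers at compile time.
    (declaim (ftype (function (string string) string) concat))

    (defun concat (a b)
      (concatenate 'string a b))

    (defun greet ()
      (concat "hello " 42))   ; compiling GREET reports something like:
                              ; caught WARNING: Constant 42 conflicts with
                              ; its asserted type STRING.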


Thanks for sharing that. Coalton looks pretty cool.

On the SBCL side of things, a quick skimming of https://lispcookbook.github.io/cl-cookbook/type.html doesn't paint a very pretty picture. The syntax looks a bit verbose, and the errors seem to be very rudimentary, not worded well, and lacking even a line number. Overall, the readability of the Common Lisp code in this link (as someone who doesn't know CL) honestly doesn't seem to be great. Hopefully, if someone could distill and write about some of the key language features that make CL great, we could carefully transplant/copy those features into a modern type-safe language.


The main thing I like is that you basically always have a debugger present. When you run into some issue you don't have to restart your program in a debugger and recreate it. Even running in production you can remotely connect to your program and inspect everything about it. Another thing is that when exceptions happen you're dropped into the debugger before the stack is unwound, and you generally have options to recover and continue execution without restarting everything and losing your state, which imo is the only sane choice for a dynamic language where you know you will be getting some type errors at runtime, in development at least. Everyone knows the frustration of starting some Python script, having it run for 10 minutes, only to have it crash on some exception that you have to fix and restart.

Also, and I'm not entirely sure why, I think because CL does more checks at compile time and there is more type checking, I tend to get fewer runtime errors, and they're closer to the source of the problem and easier to debug than in other dynamic languages. Then there are questions like what happens to existing instances of classes when you add fields or otherwise change them, which CL handles (https://malisper.me/debugging-lisp-part-3-redefining-classes...), but other language REPLs don't.
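To make the "recover and continue" point concrete, here's a minimal sketch (hypothetical function name) using the standard condition system; the bad call drops you into the debugger with a restart you can pick to carry on without unwinding your state:

    (defun parse-score (line)
      (restart-case (parse-integer line)
        (use-zero ()
          :report "Treat this malformed line as a score of 0."
          0)))

    ;; (parse-score "oops") signals a PARSE-ERROR and lands in the debugger;
    ;; choosing the USE-ZERO restart returns 0 and execution continues from
    ;; right there, with the rest of the program's state intact.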

Regarding the syntax of those type declarations, it's true I also thought CL was verbose and ugly at first, but it is also extremely flexible through macros and reader macros. At the end of the day I think it's easier to improve that in CL than it is to make other languages support the interactive development of CL, which requires fundamentally rewriting the language runtime. For those type declarations specifically, there is serapeum, an extremely commonly used library, which provides -> for more ergonomic type declarations, i.e.:

    (declaim (ftype (function (string string) string) concat))

would become:

    (-> concat (string string) string)

There is also https://github.com/lisp-maintainers/defstar for providing more ergonomic type declarations inline in definitions.

And this is another thing I'm not sure how to explain: I thought CL was surely more verbose and ugly than Python for small scripts, but that maybe its macros would make it cleaner for building large systems. But then when I started writing actual programs, even small programs without any of my own macros, I generally use about 30% less LoC than in Python.

I've thought about making sly/slime-like support for Python (built on IPython with the autoreload extension) or Ruby (with its fairly new low-overhead debug gem). But at the end of the day support for these things will always be incomplete and a hack compared to CL, where it was designed from the start to support them; they run 20-100x slower than CL, and imo their runtime metaprogramming is harder to reason about than CL's, which is mostly compile-time metaprogramming. When I've had to dig into some CL library, which is a lot more often than in those languages because it has 10000x fewer users so of course I will be first to run into some issue, it has generally been easy to understand what is going on and fix it, compared to large codebases in other languages.

Regarding "modern type-safe language", languages with expressive type systems, rust, ocaml, haskell, typescript, etc, can give really confusing type errors, when you get into generics and traits and more expressive stuff. I'm not convinced it's a better development experience than a dynamically typed languages where values have simple types, and when you get a type error you see the actual contents of the variable that is the wrong type and state of the program, at least in the case of CL where the stack isn't unwound on error and runtime is kind of compile-time as you're running all code as you write it. But mostly this sort of interactive development is very hard to implement in static languages, I'm not aware of any that does it. For example even in static langs like ocaml that have a repl through a bytecode interpreter, simple things don't work like say you pass some function as an event handler, and then update the function. As you passed efectively a function pointer to the old definition, rather than a symbol name like lisp, it will be calling the original function not the new version. But the main issue is that efficient staticly typed languages the type system is all at compile time, type information doesn't exist at runtime, which is great for performance, but means you don't get the ability to introspect on your running program like you do in CL and elixir, which personally I value more than full compile-time type checking.

Would I like some new language or a heavy modification of an existing language runtime that provides the best of everything? Of course, but I also realize that it's a huge amount of work and won't happen within 10 years, while I can have a nice experience hacking away in CL and Emacs right now. And ultimately CL is an extremely flexible language, and I think it'll be less work to build on CL than to provide a CL-like runtime for some other language. Among projects really pushing the edge there is Coalton, described above. While I personally prefer dynamic typing for general application programming, I think Coalton could be great for compilers, parsing some protocol, or writing some subparts of your program in. And vernacular (https://github.com/ruricolist/vernacular) explores bringing Racket's #lang and macro system to CL. For more standard CL code, using extremely common and widely used libraries like alexandria, serapeum, trivia, etc., already makes CL a fairly modern and ergonomic language to write.

Edit: also, about the lack of line numbers in the compiler message, it's funny, I never noticed that and I'm not sure exactly how Emacs does it, but for those compiler warnings about types Emacs underlines in red not just the line but the exact expression within the line, and you can press a shortcut to go to the next and previous compiler warning/error/note. For better or worse Emacs is the de facto free development environment for CL (LispWorks and Allegro are the commercial ones still maintained), though in recent years there are plugins for VS Code and most major editors; I haven't tried them and am not sure how they compare.


Thank you for the really detailed and thorough comment! I appreciate it.


Common Lisp is dynamically typed, but it matters far less than in a language like JavaScript. Again, that is because the development process is very different. You are not writing code, compiling it and hoping it works. You are modifying a Lisp image as it runs, so you are working at a much finer grain, where type safety is not as necessary. I have not faced major typing-related issues, and on the rare occasions when you do face them, you can simply update the image while it is running.
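A minimal sketch of what "updating the image while it is running" looks like (hypothetical functions, shown at the REPL):

    (defun tax (price) (* price 20/100))
    (defun total (price) (+ price (tax price)))

    (total 100)                          ; => 120

    ;; Wrong rate -- recompile just this one function in the live image:
    (defun tax (price) (* price 19/100))

    (total 100)                          ; => 119, TOTAL was never reloaded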


Ah, that's a really interesting approach. With JavaScript and Python, you re-run the code or restart the server or refresh the page (there's no compile step), but I've run into type errors way too frequently in projects both small and large (but especially larger ones). I've also found it significantly harder to understand code written by others (i.e. to understand or know the structure of the objects being passed around); my solution often was to use a debugger, set a breakpoint and inspect the structure of the object (of course, this was experientially a lot more cumbersome than using a statically typed language). Is the idea with CL kind of similar, in that you sort of inspect the program while it runs, but you also have a JVM HotSwap-like capability?


If you don't mind me asking, is it an application for a specialized field?

I ask because it seems that if one wants to go the CL route, the options would be scarce for basic stuff that other programming languages have lots of tried and true libraries for. Take webdev for example. What are the CL options that are on par with Django and FastAPI?


There are multiple applications that are in different fields. Some of the web stuff I have started converting to Golang because it's easier to hire devs for that language. Regarding CL options, check out this article to get a feel for it: https://lisp-journey.gitlab.io/blog/clog-contest/


Sounds interesting. May I ask what you do with SBCL? Like what kind of business.


Data processing, booking systems, custom app development, api development, web stuff.


Even today it amazes me that python devs don’t live in the repl the same way that lispers do. The interactive approach is underrated. Especially in the era of test-first development, which I think is a fad long term. (That’s not to say no tests, just not tests first.)


I'm trying to convince myself that Jupyter isn't a form of living in the REPL, expressed through tabbed state and a linear order of Python execution. I think it is, but I suspect it fails some boundary condition of significance. That said, when I did use it, I didn't find it too hard to introspect on my code from what it told me.

I do think there is something more going on with the Lisp REPL and its internal introspection which other REPLs don't quite provide. You can get to almost the same place in output, but not in interaction with the run state of the 'machine' itself. (Somebody else in the thread made the point that it's mostly there, except for handling exceptions in the stack.)


Jupyter Notebooks are absolutely a form of REPL, the type pioneered by MACSYMA, cloned by Mathematica, and significantly enhanced by Symbolics Dynamic Windows and Common Lisp Interface Manager (which is basically just DW but for any Common Lisp with CLOS).


Try redefining something from a library that is used from a lot of different places. It's easy to change one copy of a definition in Python, but the only way to get all of them is to write your new definition to disk and restart Jupyter.


You simply can't in Python, not only due to unwind-first exception handling, but also because one module is represented by multiple objects in memory: one object for each other module that imports it. That makes it impossible to redefine anything (you'll always miss some of the copies of whatever you tried to redefine). Importing things from modules piecemeal makes it even harder to find them.

The importlib.reload function only applies to one copy of a module. A comprehensive solution would be incredibly complicated and would still have limitations.

In contrast, a Lisp package exists in exactly one place. No matter where you get it from, two references to the same package are always eq (having "is" equality in Python terms). Therefore, if you redefine something in a package, you redefine it for everything that uses the package. It also helps that imports in Lisp apply to the symbols, not the things they refer to. So importing something doesn't hide a copy of it anywhere. The most naive redefinition technique will always work, with no edge cases.
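A quick sketch of that "exactly one place" property at the REPL:

    ;; The package object is unique, nicknames and all:
    (eq (find-package :cl-user) (find-package "COMMON-LISP-USER"))   ; => T

    ;; And a given name maps to exactly one symbol in it:
    (eq (intern "FOO" :cl-user) (intern "FOO" :cl-user))             ; => T

    ;; So redefining that symbol's function is instantly visible to every
    ;; caller, no matter which file or "module" the call appears in.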

The inability for class redefinition to affect already existing objects in Python is another severe limitation. Even if you could redefine anything reliably in Python, you'd still have this problem.
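For contrast, here's roughly how the class-redefinition side works in CL (hypothetical class; existing instances are carried over to the new definition):

    (defclass user () ((name :initarg :name :accessor name)))
    (defparameter *u* (make-instance 'user :name "Alice"))

    ;; Add a slot later; the pre-existing instance picks it up.
    (defclass user () ((name  :initarg :name  :accessor name)
                       (email :initarg :email :accessor email :initform nil)))

    (email *u*)   ; => NIL -- *U* now has the new slot, no restart needed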


I think you can argue that the pervasive use of notebooks is close enough for learning, at least, but it's not as good for real development. The edit-and-continue features in Visual Studio for C# (and similar feature in Java) is the closest non-lisp thing we have these days. The languages aren't made for it like Lisp is, though; you have to do full restarts all the time.

I still wish there was an environment more like Smalltalk for Python.


>The edit-and-continue features in Visual Studio for C# (and similar feature in Java) is the closest non-lisp thing we have these days.

Some years ago I was playing around with Franz Lisp, and noticed an interesting debugging feature in it. A few days later, I was using a then-new version of Visual Studio for writing either C (not C++) or VB, and noticed the same cool feature in it, which MS was calling Edit and Continue.

IIRC, it was introduced in that VS version, but not sure, since I was not a regular Visual Studio user.


In the Microsoft ecosystem, it was initially introduced in VB.


Interesting.


Shameless plug: I have always been fascinated by edit-and-continue, and I built a JavaScript IDE centered around it.

https://leporello.tech/


Python doesn't work well for REPL development. You cannot set the current module in Python as you can in Lisp, so you're restricted to making changes to the current module. Also, debugging in the REPL is not as powerful in Python.


What? Yes you can. I do it all the time.

    import code; code.interact(local=vars(foo))
where `foo` is the module object you want to set as current. This launches a subREPL in that module's namespace. (EOF kills the subREPL and returns to main.)

Python has a debugger in the standard library that can also do post-mortem inspections.


Sure it does. Just paste the defs into the repl.

You can also use `from importlib import reload` and then have an expression like `reload(foo).bar()` to constantly reload your foo library.


But this means you have to litter your application with support for it, doesn't it? In Common Lisp, I can stop at a breakpoint, paste in a new definition of any function in the application, and then resume and it will use the new definition everywhere rather than the one that existed when the breakpoint was triggered: https://two-wrongs.com/debugging-common-lisp-in-slime.html


*And not only paste: you can write the new definition in the source file, re-compile it (C-c C-c), and resume execution from a stack frame before this call. (A clarification for folks who would think we must paste into the REPL.)

That's useful not only for breakpoints but also for errors. We can fix an error without re-running everything from zero. https://www.youtube.com/watch?v=jBBS4FeY7XM


Reload only reloads one copy of the module. If there are other modules that imported the same module, your new definition will not be seen by them unless you find all the copies of foo in memory and reload them also. Since there's no reliable way to do that, you're more likely to just restart Python all the time.


Not accurate as worded. Python caches module imports in `sys.modules`, so all imports get the same one. A `reload()` will reuse the same dict object that was being used as the module's namespace, so everything with a reference to the module object will see the same namespace and get the updates.


Check out lpy-mode


[flagged]


Whoa, please don't take HN threads into programming language flamewars, "slap fights", or attacks on other users. We're trying to avoid all that here.

https://news.ycombinator.com/newsguidelines.html


I didn't say this should be a slap fight - I said the last one was. I'm always at a loss as to how reading comprehension should lead people to conclude X here, but instead they conclude Y. It literally says "the previous one", i.e. I don't know how I could've made it any clearer.

Also, your perpetual quoting of the rule book under the guise of "here's our credo" doesn't accomplish anything, because it doesn't clarify what rule was broken (and just serves to obscure the censure under "we don't like your kind around here"). At minimum you should cite exactly what rule was broken.


You broke the rules against flamebait and calling names.

If you'd just make your comments less aggressive, it would be much better.


I do a lot of python halted in the debugger, which is a close enough approximation.


I do! I have been using Common Lisp since 1982, love it. But when I code in Python, Emacs with the old Python support gives you a really nice editing and REPL environment.


Test-driven development has been around since the late '90s and was popularised in eXtreme Programming circles; are you sure it's a fad? Personally, I find it the most efficient way to write code and keep it running in the long term.


These are good accompanying resources:

- https://lispcookbook.github.io/cl-cookbook/ (recent and to the point, hope you like it) check out the editor section, there's more than Emacs these days: https://lispcookbook.github.io/cl-cookbook/editor-support.ht...

- https://github.com/CodyReichert/awesome-cl for libraries

- https://www.classcentral.com/report/best-lisp-courses/#ancho...

- a recent overview of the ecosystem: https://lisp-journey.gitlab.io/blog/these-years-in-common-li... (shameless plug, on HN: https://news.ycombinator.com/item?id=34321090)


+1 to the cookbook. Beyond reading the canonical books and referring to the hyperspec, it has been truly the most practical resource for learning more boots-on-the-ground things you will need to do in development. I wish some of the chapters weren't so focused on specific 3rd party libs, because some of it feels out of date now, but it is still by far more helpful than not.


Oh, what chapters are you thinking about? We can fix them. Thanks for the feedback. (a cookbook contributor)


Oh hey! Sorry for the late reply.

I am mainly thinking of testing, and if FiveAM is still the way to go, because there is a note at the beginning about Rove, which seems to be pretty well maintained these days. But looking at FiveAM again I see some recent commits. Also, in the concurrency chapter there is the section on lparallel, which seems like a very old unmaintained library.

This all said, I think one thing that makes CL nice is that it doesn't feel out of the question to use a very old library for something you need. So none of this is actually a problem.


Thanks. Indeed, lparallel is maintained in the sharplispers' fork: https://github.com/sharplispers/lparallel I updated the awesome-cl and Cookbook links.

FiveAM is still a very good solution, difficult to replace. We didn't replace it with another test framework, and we looked at a few of them. Rove doesn't live up to its promise to run `rove <system>` on the command line, for example.

Your last sentence is very true!


> The existence of packages (multiple name spaces for symbols)

Sigh. It is depressing to see this kind of mistake in a book targeted towards beginners. There is already enough confusion around packages.

Packages are not "multiple name spaces for symbols". A namespace is a set of bindings, and so a namespace for symbols would be a set of bindings for symbols, and that is the definition of an environment [1], not a package.

A package maps strings onto symbols, so technically a package could be considered a set of bindings for strings, though no one actually thinks of them that way and they are never ever referred to that way. Packages are data structures that map strings onto symbols. That's all.
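Concretely, a quick REPL sketch of that string-to-symbol mapping:

    (find-symbol "CAR" :common-lisp)    ; => CAR, :EXTERNAL
    (intern "SOMETHING-NEW" :cl-user)   ; => SOMETHING-NEW, NIL (created on demand)
    (symbol-name 'car)                  ; => "CAR" -- the string the package maps from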

One can only hope that this is an isolated mistake and not indicative of what goes on in the rest of the book.

---

[1] http://www.lispworks.com/documentation/HyperSpec/Body/26_glo...


Could you explain the significance of this blunder? Or, like, where this distinction would be important? I see lots of confusion between "packages" and ASDF systems, but none between packages and environments. It seems to me that I write a function in a package and, whether I am in the package or using it, I refer to that function by the symbol it is given in the defun, even if really, in the latter case, it's some sugar around a string implied by the original defun. Where will it matter to think about it the right way? When I need to shadow things, maybe?

I definitely defer to your authority on this in general; just genuinely curious.


That's a great question, and actually not so easy to answer. To understand it you have to put yourself in the mindset of a beginner, and here there are two possibilities: either you are a beginner, in which case putting yourself in a beginner mindset is trivial because you're already there, or you are not, in which case putting yourself in a beginner mindset is very, very hard because you have to actively forget (or ignore) things that you know, and that is not easy to do. I don't know which you are, but either way, I'm going to ask you to put yourself into a beginner mindset.

So with your beginner mindset on, imagine reading this sentence for the first time:

"The existence of packages (multiple name spaces for symbols) in Common Lisp is very important for allowing several people to cooperate in producing a large system."

Now, as a beginner, you don't know what "package" or "name space" or "symbol" means, though you might have some preconceived notions about these words. But embedded in this sentence is the "fact" that packages are name spaces for symbols (whatever that might actually mean), and so you tuck this little factoid away in your mind so you can try to make sense of it later because the author has taken pains to point out that whatever it means, it's "very important".

And then you read the rest of the book, where you find that the term "name space" is never mentioned again.

At this point you might find yourself scratching your head a bit. So on the one hand the author takes pains to call out packages as "very important", and tells you what they are, but then never bothers to explain what the words that define them actually mean. So you, being intellectually curious, go out onto the Internet to try to find out what a "name space" is. A reasonable place to start might be the definition in the Common Lisp Hyperspec:

> namespace n. 1. bindings whose denotations are restricted to a particular kind. ``The bindings of names to tags is the tag namespace.'' 2. any mapping whose domain is a set of names. ``A package defines a namespace.''

This seems promising, though it seems a bit odd that here a package defines a namespace, rather than a package is a namespace. Maybe "defines" and "is" are actually synonyms? But the bigger problem is that this definition is chock-full of new words which you as a beginner don't know the meaning of, most notably "binding". So you go to find out what that means, and your first step is to go back to the book, which turns out to be no help because despite the fact that it uses the word "binding" it never actually defines it, and also when the word is first introduced it is used as a verb, not a noun. So back to the Hyperspec:

> binding n. an association between a name and that which the name denotes. ``A lexical binding is a lexical association between a name and its value.'' When the term binding is qualified by the name of a namespace, such as ``variable'' or ``function,'' it restricts the binding to the indicated namespace, as in: ``let establishes variable bindings.'' or ``let establishes bindings of variables.''

Whoa! That's a lot of new words, starting with "name", which is hyperlinked so it has a definition of its own:

> name n., v.t. 1. n. an identifier by which an object, a binding, or an exit point is referred to by association using a binding. 2. v.t. to give a name to. 3. n. (of an object having a name component) the object which is that component. ``The string which is a symbol's name is returned by symbol-name.'' 4. n. (of a pathname) a. the name component, returned by pathname-name. b. the entire namestring, returned by namestring. 5. n. (of a character) a string that names the character and that has length greater than one. (All non-graphic characters are required to have names unless they have some implementation-defined attribute which is not null. Whether or not other characters have names is implementation-dependent.)

Double whoa! What is an "identifier"?

> identifier n. 1. a symbol used to identify or to distinguish names. 2. a string used the same way.

OK, at least now this is a short definition, with only two new words, SYMBOL and STRING. So what is a symbol?

> symbol n. an object of type symbol.

Well, that's not very helpful. And at this point you might be forgiven if you decide that this whole Lisp thing is not really worth the bother and you really ought to go learn Rust instead because that seems to be what the cool kids are using anyway.

The Right Way to explain this to a beginner IMHO is to start with strings, because everyone has an intuition about what those are which is close enough: a STRING is a sequence of characters (which are a complicated topic in their own right, but a naive view of what characters are is good enough to start with). A SYMBOL is a thing with a NAME, which is a string, and which cannot be changed. A symbol is created with a name, and it retains that same name forever. A PACKAGE is just a map, a function, from strings onto symbols, such that the symbol mapped from string S has the name S. This is significant because it ensures that there is one and only one symbol with the name S in a given package at a given time. Packages are COLLECTIONS OF SYMBOLS WITH UNIQUE NAMES stored in a way that allows you to efficiently find the unique symbol with any given name in that collection. That's it (or at least close enough for a beginner).

The reason this matters is that it allows different people to write code without stomping on each other. Alice can put her symbols in one package, and Bob can put his symbols in a different package, and so when Alice types "foo" into her code she can count on that referring to the One Symbol Named "foo" in Alice's package, and likewise when Bob types "foo" into his code he can count on that referring to the One Symbol Named "foo" in Bob's package. But these are nonetheless two different symbols (with the same name) and so Alice's code doesn't stomp on Bob's code.

That's it. ~250 words with no unfamiliar terminology for a beginner to have to scratch their head over (assuming they already know what a "map" or a "function" is).
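Here's the Alice/Bob point as a small sketch (hypothetical package names):

    (defpackage :alice (:use :cl))
    (defpackage :bob   (:use :cl))

    (in-package :alice)
    (defun foo () "Alice's foo")

    (in-package :bob)
    (defun foo () "Bob's foo")        ; does not stomp on Alice's

    (eq 'alice::foo 'bob::foo)        ; => NIL -- two symbols...
    (string= (symbol-name 'alice::foo)
             (symbol-name 'bob::foo)) ; => T  -- ...with the same name "FOO"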


This makes sense, but I guess I still don't see how this might get the beginner into too much trouble. Pedagogical methods and statements will never quite stay true to the letter of the spec. When we teach, we say one thing "is like" another, or you "might understand this by comparing it to foo," knowing well that the concept isn't actually foo. This is never meant to confuse the student, but to help them get in the right position to actually understand something.

But still, I get your point. Certain diligent students might read that statement and, rationally, read a lot into the terms "namespace" and "binding," and then get confused when they look up those terms in the hyperspec. It does come across as careless in this way. But I just don't think, in itself, this would lead to confusion about what packages are good for, or why they are used. Whether a beginner wrongly considers a package a namespace, or something like a namespace, wouldn't affect the way they would decide to actually use a package in their work. (Not to disregard how an experienced, real-Lisp-understander might use their precise knowledge of the way packages work to make even better/cleaner/more organized code.)

But either way, thank you for the thoughts, and please take my thoughts with a grain of salt.


The problem is not so much with the use of the word "namespace" per se, but with saying that packages are name spaces for symbols. That's just wrong, and is in direct conflict with the definition of things that are actually talked about as name spaces for symbols, most notably, variable and function bindings. The difference is that the domains of these "name spaces" are disjoint. The domain of packages is the set of strings, but the domain of (say) lexical variable bindings is the set of symbols. So, for example, lexical environments and dynamic environments are name spaces for symbols because their domains are symbols. If you insist on thinking about packages as name spaces (which no one actually does, which is another reason the sentence is misleading) then they are name spaces for strings, not for symbols.


I get the impression that the original quote from the book is using "name space" as it's used in other languages outside of CL, except that the book is from 1992 so not sure if that usage was widespread yet - maybe in C++? But it's almost like they wanted to say "a package is analogous to a name space as it's used in [algol-derivative]." Definite potential for confusion there, since "namespace" has a very specific meaning in CL and in general helps explain the difference between Lisp-1 and Lisp-2 (Lisp-n) varieties.


The quote is in the Preface, which is outside of the book itself. The same paragraph acknowledges that "packages can be very confusing for Lispers who have not learned about them in an organized way" and that the author has "seen experienced, Ph.D.-level Lispers hack away, adding qualifications to symbol names in their code, with no understanding of the organized structure of the package system." Furthermore, he adds that "[t]he reasons that packages are even more confusing in Lisp than in other, compiler-oriented languages, such as Ada, is that in Lisp one may introduce symbols on-line, and one typically stays in one Lisp environment for hours, losing track of what symbols have been introduced. A symbol naming conflict may be introduced in the course of debugging that will not occur when the fully developed files are loaded into a fresh environment in the proper order."

After that, nobody in their right mind should be hanging on to any preconceived notion that they might have spun out of the earlier phrase "multiple name spaces of symbols", and should be reading the actual book.


> nobody in their right mind should be hanging on to any preconceived notion that they might have spun out of the earlier phrase "multiple name spaces of symbols"

Except that "namespace" is a term of art which is actually defined in the Common Lisp standard, and so it is not unreasonable to suppose that this is the intended meaning in a book on Common Lisp.


Indeed, and look:

namespace n. 1. bindings whose denotations are restricted to a particular kind. ``The bindings of names to tags is the tag namespace.'' 2. any mapping whose domain is a set of names. ``A package defines a namespace.''

So if "the bindings of names to tags is the tag namespace", that means that a set of bindings of names to symbols is a symbol namespace!

According to the Common Lisp Glossary, the values of the dictionary define what the namespace is of. Of course the keys are always names; that's what makes it a namespace.

Also, look: "a package defines a namespace" (and it must be one of symbols).

You're not catching crafty old Shapiro red-handed in anything here.


> So if "the bindings of names to tags is the tag namespace", that means that a set of bindings of names to symbols is a symbol namespace!

No, that does not follow. Even if one were to admit the parallel construct here, the result would be "the symbol namespace" which is clearly nonsense.

Neither "tag namespace" nor "symbol namespace" is a term of art, so here we are in the domain of natural language, and natural language is irregular and ambiguous. In natural language usage, a "namespace for symbols" is one where symbols are the domain, not the range. In particular, the most common usage is to distinguish between Lisp-1 and Lisp-2, where the former has a single namespace for symbols and the latter has at least two, one for values and one for functions.


> The quote is in the Preface, which is outside of the book itself.

Thank you for pointing that out; I'll read the whole thing in context.


A binding being an association between a name and a value is a defect in Common Lisp.

The correct view (in a language with mutable variables) is that it's an association between a name and an abstract location where a value is stored. When we assign to a variable, the binding doesn't change; any closure which has captured that binding sees the new value.

On entry into a lexical scope, fresh bindings are allocated only once, no matter how many times a new value is assigned to any of them in that same scope.

Schemers get this right. In R7RS:

> An identifier that names a location is called a variable and is said to be bound to that location.

The problem is not just in the Glossary. defvar is documented as leaving the variable unbound if it is previously unbound (in the case when no initial-value is specified).

Yet, defvar is described as establishing the name as a dynamic variable. But a variable is defined as a binding. Thus if X is unbound and (defvar X) leaves it unbound, then it is not establishing a variable because that would require a binding. Oops!
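The defvar behavior being referred to, at the REPL:

    (defvar *x*)      ; no initial value: proclaims *X* special...
    (boundp '*x*)     ; => NIL -- ...but leaves it unbound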


> A binding being an association between a name and a value is a defect in Common Lisp.

Well, it would be if that is how CL actually defined the word "binding" but it's not. The CL glossary defines "binding" simply as "an association between a name and that which the name denotes". It does not specify that "that which the name denotes" must be a value, and in fact there are name spaces (collections of bindings) in CL where the things denoted by names are not values. The TAG namespace, for example.

The CL authors did make a few mistakes, but the definition of "binding" is not one of them.


Sorry, I blundered. What I meant is that a variable is defined as a binding between a name and a value.


> The Right Way to explain this to a beginner IMHO is to start with strings

And by golly, Shapiro's Common Lisp: An Interactive Approach does that. It has a chapter on Strings, then Symbols, then Packages.


Well, yeah, but the chapter on strings is chapter 5. Shapiro doesn't start with strings, he starts with numbers, which I think is a catastrophic mistake. When I say start with strings I mean start with strings. Actually, start with characters, i.e. start by pointing out that the fundamental units of computation when you interact with a computer using a keyboard are things like these:

a b c d e f g h i j k l m n o p q r s t u v w x y z A B C D E F G H I J K L M N O P Q R S T U V W X Y Z 0 1 2 3 4 5 6 7 8 9 ~ ! @ # $ % ^ & * ( ) _ - + = { } [ ] : ; " ' < > , . / ?

Then talk about how stringing these things together in sequences can denote different things, but that these denotations are just conventions. For example, by convention we denote strings using double quotes:

"This is a string"

Note that there are two ways you can look at the above. You can see it as a sequence of 18 characters that starts and ends with quotes, or you can see it as a sequence of 16 characters that starts with T and ends with g. This ambiguity leads to a whole host of problems, not the least of which is that if you want to write a string that includes a double-quote mark you now have to somehow indicate that the embedded double-quote does not denote the end of the string but is intended to be a constituent of the string. The fact that the same character is used to denote both the start and the end of strings was actually a catastrophic design error, but we're stuck with it now because of the weight of history. (The Right Way to denote strings is with balanced quotes «like this» but that ship sailed a long time ago.)

And then, once the student understands strings, you can start talking about how some strings, like "123", stand for numbers, and how this is also just a convention, because strings like "123,000.00" look like numbers to any educated human but don't stand for numbers in any programming language except Microsoft Excel again because history, yada yada yada.

The point is, numbers are really really complicated, even more complicated than symbols, and they are definitely not the right place to start teaching any of this notwithstanding that this is where everyone starts.


Shapiro does a very good job of emphasizing the difference between an object and its multiple printed-representations (S-expression). He promises to do that in the Preface, and delivers.

Computer Science once used to be the same thing as Numerical Analysis; that has left a deep imprint on the education. Here is how we use Lisp: (+ 2 2) evaluates to 4.


> Shapiro does a very good job of emphasizing the difference between an object and its multiple printed-representations (S-expression).

I think that is debatable. When he introduces S-expressions in chapter 3 it is in the context of a chapter on lists, not S-expressions. (In fact, he doesn't have a chapter on S-expressions!) And he doesn't actually define S-expression, he only defines "list S-expression" and leaves it up to the reader to infer that numbers are S-expressions -- or is it the printed representation of a number that is an S-expression? Shapiro never actually says. So is 123 a number? Or is it an S-expression denoting a number? Are these the same thing? Again, Shapiro never actually says. AFAICT, at no point in the book does he ever make it explicit that an S-expression is a string, and in particular, a string which is a serialization of a data structure.

It makes sense to you because you already know how it works. You need to read it with your beginner-mindset hat on to see the problems.


But I had read that with an actual beginner's* mindset. Since then, not once have I had the thought that Shapiro misled about this or that.

In the Preface there is a paragraph Package Systems, S-expressions and Forms where Shapiro explains what those mean. It's clear that he's not using S-expression just to refer to compound syntax. The paragraph concludes:

In this book, I distinguish the S-expression from the form—the printed representation from the object—in Chapter 1 and continue making the distinction consistently and explicitly through the entire book.

There is only a small matter there, in that Common Lisp uses "form" for an expression in an evaluated context. What Shapiro wants there is "distinguish the S-expression from the (internal) expression".

Some of that paragraph also rather belongs in the book proper rather than the Preface.

Chapter 3 does not leave it to the reader to infer that number tokens are S-expressions. It says so explicitly: "According to this definition (1 2 3.3 4) is a list S-expression, since 1, 2, 3.3 and 4 are S-expressions (denoting numbers)". That's just re-iteration; prior material in the book hammered the point that the printed representation of any object is a S-expression.

---

* Well, a Lisp beginner's mindset. Not a programming beginner's mindset. If you already know things like that compilers scan textual numeric tokens, turning them into binary numbers, that colors the interpretation.


> prior material in the book hammered the point that the printed representation of any object is a S-expression

Yes, he does say that. The problem is that this is wrong. There are many printed forms of objects that are not S-expressions. In fact, these are so common that CL has some fairly extensive infrastructure for dealing with these cases.

S-expressions have nothing to do with printing (except insofar as Lisp makes an effort to maintain read-print consistency in some circumstances), they have to do with reading. They are operational at the beginning of the read-eval-print loop, not at the end.


Yes, obviously, most of those chapters are concerned with what we type into the text file or REPL, which is often something that was not ever printed.

There are many examples of #< notation in the book, but, as far as I can see, no remarks are made about what that means, or even that the contents of #<...> are implementation-specific and may appear differently. I don't see any discussions of the concept of print-read consistency, and so on: that objects can sometimes be printed in a way that either cannot be read at all, or worse, that produces a different object, like the #: notation.

It would help the book to talk early about print-read consistency. What it is, when do we have it, when do we not have it, in what situations can we provide it for ourselves when we don't have it, etc.

Compiling Lisp isn't covered in the book; there is only a cursory mention of compile-file. The omission is a lost opportunity to discuss Lisp's "interactive approach" to compilation. In compilation there is the issue of literals: what kinds of objects are externalizable. That relates to printing because externalization is a kind of printing. Compile-file has to print the literal objects into some kind of bits in the file, which then recover a similar object.

The word "image" doesn't appear in the book; it doesn't look as if image saving is mentioned anywhere.


Shapiro does a good job of covering packages, probably better than most, if not all, other books.

Because I started in Lisp with that book (back in late 1999 or 2000? I can't remember), I was never confused about how packages and symbols work.

What the book does right is explain, early, that symbols are objects that have a name which is a character string, and which is different from the token syntax so that Foo and FOO and |FOO| are the same symbol:

Quote from the Symbol chapter:

> Returning to our discussion of symbols, every symbol has a print name, or simply a name, which is a string of the characters used to print the symbol. The name of the symbol frank is the string "FRANK". You can see the name of a symbol by using the function symbol-name

After that it goes into how we can get any characters we want into a symbol name, and so on.
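Under the default readtable, that point boils down to something like:

    (symbol-name 'frank)    ; => "FRANK"
    (eq 'Foo 'FOO)          ; => T   (token case is folded by default)
    (eq 'foo '|FOO|)        ; => T   (same name, same symbol)
    (eq 'foo '|foo|)        ; => NIL (different name, different symbol)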

Then, early in the Packages chapter:

> In this chapter, you will see that several different symbols can have the same symbol name—as long as they are in different packages.

Thanks to starting with this book, I had an accurate mental model of packages from the get go.

The "name spaces for symbols" phrase appears only in the Preface. Firstly, nobody should be taking a passing remark in the Preface as a lecture on how packages work. Secondly, the phrase has a straightforward interpretation which is squarely correct. A symbol has a name which is a string. That name exists in a namespace, which is a package. If we understand "name" to mean "symbol name" (and not "symbol used as a name") then a package is a namespace, and it is of symbols.

In that sense, a lexical scope is a namespace for variable and function bindings, and a class object contains a namespace for slots. Those namespaces are not for symbols; they are based on symbols being the names, i.e. they are of symbols.

If you don't know anything about Lisp symbols and packages, and are reading only the Preface, you will probably not understand what exactly "name spaces for symbols" means; you will just have to read the book, and carefully go through the Symbols and Packages chapters.


> Foo and FOO and |FOO| are the same symbol

No, they aren't. The strings "Foo" and "FOO" and "|FOO|" might be read by the reader as the same symbol, but then again, they might not. It depends on a great many things.

By default, all else being equal, yes, reading these three strings will produce the same symbol. But there are any number of factors that can change this.

    Clozure Common Lisp Version 1.12.1 (v1.12.1-10-gca107b94) DarwinX8664
    ? (setf x 'foo)
    FOO
    
    [Stuff elided]
    
    ? (eq x 'foo)
    NIL
    
    [More stuff elided]
    
    ? (EQ 'foo 'FOO)
    NIL
Figuring out what I left out is left as an (elementary) exercise (though the fact that I had to type EQ instead of eq in the last line is a big clue).

And I know that you know this. My point is not that you don't understand symbols or name spaces; I know you do. My point is that explaining this stuff is hard, and very few people seem to be willing to put in the effort. And this is not unique to CL, and it's not even unique to software, or even to STEM. It seems to be endemic in the human condition. But that doesn't mean one should not lament it or try to improve it.


It's probably not necessary to mention that the treatment of case and whatnot is default behavior. That can be covered in an advanced chapter about read tables. There we can say, oh we lied when we said that foo, FOO and |FOO| read the same; this is actually highly programmable.

The student will not encounter that unless they explore someone else's code, or discover the features like readtable-case and experiment.

Just talking about the default behavior, if it is stable and portable, and doesn't mysteriously flip behind your back, is fine.


Even if you leave case out of it, it is not true that Foo and FOO are necessarily the same symbol, or even that FOO and FOO are necessarily the same symbol. In fact, the whole point of packages is that you can have two different symbols with the same name.


If someone is considering learning CL effectively, take this piece of advice: use Emacs.

You might think that it's an outdated piece of shit, maybe you hate RMS with a passion or whatever.

But do yourself a favour and use it at least for the month it will take you to go through a manual like this one, or Practical Common Lisp, or several others. Just install SBCL, Quicklisp, Emacs and SLIME (or Sly, which is a more featureful fork) and start hacking. I promise that you won't need more than ten hours to get hooked. Maybe you'll eventually like Emacs.

If you're on Windows, watch this video right now:

https://www.youtube.com/watch?v=VnWVu8VVDbI

There's a moment at 12:52 when he invokes SLIME from inside Emacs. If you're confused about how he does it, it's by using Alt-x (or M-x in Emacs slang) and then typing "slime".

Also, at 2:40, to make a persistent environment variable, there's now an easier way: in PowerShell, write "setx HOME c:\Emacs" or whatever directory you choose. I use "C:\Lisp" and I put everything related under it: SBCL, Quicklisp, Emacs and my own projects.

Other than that, even if the video is ten years old, everything works as advertised. That's true for most of the resources you will find around. Apparently nothing is showing much activity, but that impression is misleading. The core of SBCL is getting relentless upgrades and the rest simply works.

For web stuff, use Hunchentoot. Remember that Quicklisp makes it super easy to download libraries.
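For instance, a minimal Hunchentoot sketch (the port number and handler are arbitrary examples):

    (ql:quickload "hunchentoot")

    (hunchentoot:define-easy-handler (hello :uri "/hello") (name)
      (format nil "Hello, ~a!" (or name "world")))

    (hunchentoot:start (make-instance 'hunchentoot:easy-acceptor :port 4242))
    ;; then visit http://localhost:4242/hello?name=Lisp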

More interesting resources, apart what others have already mentioned:

https://gigamonkeys.com/book/

https://www.youtube.com/@CBaggers/videos

https://www.youtube.com/@GavinFreeborn/videos

https://lispcookbook.github.io/cl-cookbook/web.html

#clschool on Libera.Chat.

If some of the authors of those are reading this: a huge thank you!


Some basic Common Lisp videos showing Emacs/Slime in action:

https://www.youtube.com/playlist?list=PLTA6M4yZF0MzsMlNL0N67...


I've just taken a quick look, but it looks great, thank you!

It seems I'll need to use the Air to follow the series. As far as I can see it's ccl + aquamacs + slime... right? It comes at the right time because I'm starting with the CLOS part right now.


Yes. Though unfortunately CCL does not run on Apple's M chips.


No problem, my Air is old :)


This was the first Common Lisp book I studied back in the 90s. Two of my Professors had done early work in AI and started a successful company that still runs on CL: https://www.siscog.pt/en-gb/


It's better, perhaps, to link to the parent page:

https://cse.buffalo.edu/~shapiro/Commonlisp/

Shapiro mentions "An even more up-to-date, faster introduction to Common Lisp" therein, if anyone's interested.


It's even specifically called out that links directly to the pdf are not ok.

> web links must point to this page rather than to a separate copy of the dvi, ps, or pdf file


That is not really how the web works, of course. But I've changed the URL from https://cse.buffalo.edu/~shapiro/Commonlisp/commonLisp.pdf now.


This book is how I got started in Lisp.


How does Common Lisp compare with Elixir? I'm trying to find a good functional language to learn.


If your goal is specifically to learn functional programming, I recommend you consider Lisp-1 dialects instead of Common Lisp, like Scheme or Clojure. While CL supports functional programming (it supports literally everything) functions exist in a separate namespace, and treating functions as data can be a little extra confusing.
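For the curious, the "separate namespace" wrinkle looks like this in CL (where a Lisp-1 like Scheme or Clojure would just use the bare name):

    ;; Functions are passed with #' and called with FUNCALL:
    (defun twice (f x) (funcall f (funcall f x)))

    (twice #'1+ 3)                  ; => 5
    (twice (lambda (n) (* n n)) 3)  ; => 81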

As for Elixir: it heavily leans on the actor model. Understanding this was the biggest hurdle for me, but once I got over that learning Elixir was a breeze (though I was already comfortable with FP by the time I picked up Elixir). If you want to go the Elixir route, I recommend trying to play around with an actor model system in a language you are familiar with first. I was familiar with Scala so I learned the actor model by playing with Akka.

My #1 recommendation to you though is to go with Scheme and watch the SICP lectures. That will sneak functional programming into your brain without you even realizing it.


I had a similar question. I decided to learn Erlang/Elixir first. I started with Elixir and thought that I should probably learn Erlang first, since it's going to come up. I actually like Erlang a lot and use it as much as I can. There are use cases where Elixir is more suitable, e.g. web development using the amazing Phoenix Framework.

Anyway, I found myself doing REPL development using both Erlang and Elixir. Many of the benefits that I read about CL seem to exist within these languages as well.

I hope to explore CL eventually, but I'm just having too much fun with Erlang / Elixir at the moment.


CL is the most interactive (by far). Its image-based development and its tooling (Emacs & SLIME, or other IDEs) offer more features. Elixir's tooling is more classic, focused on good tools on the terminal. CL has pretty good type inference, which happens interactively too (you can compile a function with a shortcut and get instant compiler feedback). Elixir people say that in practice pattern matching alleviates type issues. It seems to me CL will have a wider range of applications (good for number crunching, several different implementations); it's possible to build self-contained binaries in CL.

CL isn't entirely functional, nor pure FP; you can mutate things (and can avoid doing so with some care, or with libraries) and you can mix styles (with OOP etc.).


To get a good answer you need to clarify what you mean by a functional language. Do you simply mean a language that has first class functions, or do you mean something that's inspired by category theory, or something somewhere in between?


You could check out Lisp Flavored Erlang if you want a little of both languages. Perhaps even Shen.



