The Lisp world is really lucky that Peter came along and wrote this book. It was amazing to watch him tease out, soak up, and internalize Lisp culture from all the available sources and then write such a canonical book to share it all with the rest of us.
Dear God man, please do! I'm into the start of chapter 3 and have suddenly discovered that in fact lisp is not obscure or hard or anything like I was told years ago in high school.
I mean, my introduction to lisp was something to do with concat, but it was so divorced from reality that I found an early book on C programming and semi-self-taught myself the language, then found a book called C Pointers and Dynamic Memory Management that literally changed my life.
I have a feeling if I'd read this book I'd have actually done something awesome by now. Frankly, this book might allow me to do something that I've been itching to do for months now.
I can't really express my appreciation enough... but I'll try right now: thank you! A million times over, thank you!
Edit: so Arc - is that like a Webserver based on lisp?
Lisp is a fantastic language and well worth learning.
But these days I pick a language with strong static types every time if I have the choice. I know you can use them with Lisp too to some extent but it's not the same as a language built from the ground up around types.
I've never been entirely certain what the relationship is between YCombinator and the Arc community, if any, but one experimental and partially black-boxed forum doesn't signal a mature language, just one adequate to a particular task.
Being a Lisp, I'm not exactly sure what would constitute a proprietary fork. Furthering my uncertainty is that it [Anarki] is built on Racket and that kinda makes it hard to say one implementation is forked from another.
Anyway, I didn't intend to claim that Arc was a mature language.
Yeah, I just did a quick bit of reading. Ironically, I think I might be grokking lisp because it appears LDAP filters are lisp expressions, which I've used quite a few times at work...
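(For anyone wondering about that last point: LDAP filters use a fully parenthesized prefix notation, which is why they read so much like s-expressions. A rough, purely illustrative comparison; the Lisp names below are made up:)

    ;; An LDAP search filter (RFC 4515): prefix operators, fully parenthesized
    ;;   (&(objectClass=person)(cn=Jane*))
    ;; A hypothetical Lisp rendering of the same predicate:
    (and (string= object-class "person")
         (wildcard-match cn "Jane*"))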
Well, these days I'm working at Twitter on our abuse problem. So that keeps me pretty busy. I did write another book, Coders at Work, a book of interviews with notable programmers.
I kind of suspect that if I write another book (as I hope to) that I'll continue along the trajectory from pure technical (PCL) to semi-technical (Coders) to something that might reach an even more general audience.
How useful would that really be? PCL has Peter's take on Lisp, and the Joy of Clojure occupies a similar niche if you just want to learn specifically-Clojure.
At one point he mentioned writing a book to the effect of Statistics for Programmers, which sounds intriguing to me. AFAIK it hasn't happened yet, though.
Why intriguing? The most general family of distributions can only be expressed with a programming language, so there are certainly many connections between both fields.
Incidentally, that's one of my major complaints about Lisp these days. Lisp-Stat started dying in the early 2000s and completely faded away long ago. I read PCL when it was published a decade ago and I fondly remember how enthusiastic I became about Lisp.
I have used Clojure extensively, but the ecosystem for doing math and statistics is quite limited. The same applies to CL and Scheme. I wish I could use one Lisp for most tasks.
> > At one point he mentioned writing a book to the effect of Statistics for Programmers, which sounds intriguing to me.
> Why intriguing? The most general family of distributions can only be expressed with a programming language, so there are certainly many connections between both fields.
That seems to be an argument for why it is intriguing. Were you perhaps arguing that it wasn't surprising?
I was thinking about something completely new. I know one can easily (more or less) follow along with PCL and do the solutions in Clojure. I just think that the Clojure space needs a book like this, with an emphasis on the practical. I like the practical section at the end of the PCL book, where the author builds useful little apps and guides you through the thought process. I know about Joy of Clojure, a great book, but it doesn't contain enough toy projects. I think the goal of the book is to help you start thinking functionally.
I'm working on a book covering parallel and concurrent programming in Clojure, with the style of building abstractions before actually using them. On the topic of Joy of Clojure, as far as I could tell, the book is not meant to be an introduction to the language, but rather to be read once you're already familiar with it. It's helped me internalize concepts I'd picked up through usage.
at the library. I also find it very good in how it presents programming: it's hands-on, concise, and abstract at the same time. A refreshing view on the use of electronic machines.
If anyone is using Windows and wants to go through PCL, I believe the Lispbox linked to is way out of date. I've generated portable zips that are more up to date:
It certainly shows some impressive Lisp techniques, and it will surely impress anyone who reads it with Lisp's capabilities. PCL made me feel awe at what Lisp can do. But not at what I could do with Lisp myself.
PCL doesn't really help the reader understand how all that code works, to the point where the reader could write it without the author's help.
I found this book: Lisp, An Interactive Approach https://www.cse.buffalo.edu/~shapiro/Commonlisp/
a better pedagogical tool for learning Lisp: it guides the reader all the way, tests him, and makes him write all the needed code before going on to the next chapter. It's not a showcase like PCL, but it gives the reader the ability to write a similar showcase himself.
The 'An Interactive Approach' book made me grok the power of Lisp in a way no other book could. It made me feel powerful and confident about what I could do with Lisp.
I think Paul Graham's ANSI Common Lisp is the best if you are learning Common Lisp. The language reference at the end is an extremely handy, compact reference for CL. http://www.paulgraham.com/acl.html
While I understand the need to use other data structures (and even in Lisp, I would not use cons pairs for everything), I think "stupid cons pairs" is pretty extreme.
If you don't understand why cons is important, and the appropriate places to use it, then you really don't understand Lisp.
> If you don't understand why cons is important, and the appropriate places to use it, then you really don't understand Lisp.
Lisp-the-mathematical-model, perhaps, but not Lisp-the-programming-language.
Conses are definitely a fundamental idea, and they're obviously how source code is written, and it's nice that they're around — but honestly for real work it's unlikely that most data will live in cons structures (unless of course the higher-level data structures actually are built from conses, which of course they could be).
At what point did I say "most data will live in cons structures"? And I explicitly said I would not use them for everything. And no, it isn't just "Lisp-the-mathematical-model", cons cells have real-world implications to the language.
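For readers who are new to this part of the thread, here's a minimal sketch of what cons cells are, in plain standard Common Lisp:

    ;; A cons cell is a pair with two slots, CAR and CDR.
    (cons 1 2)                        ; => (1 . 2), a "dotted pair"
    ;; Lists are cons cells chained through their CDRs, terminated by NIL:
    (cons 1 (cons 2 (cons 3 nil)))    ; => (1 2 3), same as (list 1 2 3)
    (car '(1 2 3))                    ; => 1
    (cdr '(1 2 3))                    ; => (2 3)
    ;; Higher-level structures can be built on top, e.g. an association list:
    (cdr (assoc :b '((:a . 1) (:b . 2))))  ; => 2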
I was going to suggest watching the video presentation "The Swine Before Perl", by Shriram Krishnamurthi, but alas I can't find a link to the video (which is sad, because it is a very entertaining and informative video). The slides are here: ( http://ll1.ai.mit.edu/shriram-talk.pdf ), but without the talk they may do you no good.
Hopefully someone will see this and add a link to the video ( it should be made available again, even after all of this time, it is still great -- I believe I saw it as recently as 2 years ago).
This is my pet peeve you are arguing into. Beware. :)
Here's my point: People can build amazing programs with just Vectors or Hashes. People can build amazing programs without understanding recursion, evaluation, macros, or metalinguistic abstraction. And, in fact, before memory became cheap (roughly 1995) recursion was a negative, not a positive.
So, guess which things Lisp taught and which it didn't? Yeah, they got it precisely BACKWARDS and then wondered why Lisp wasn't more popular.
Go back, and look at Lisp books prior to the web--call it 1995-1996.
In 1987, I had a version of Touretzky's "Common LISP: A Gentle Introduction to Symbolic Computation". I do not remember any discussion of Hashes, Vectors, etc., but I see that even the 1990 version confines them to the next-to-last chapter, and the discussion qualifies as cursory, at best.
SICP doesn't even mention it. Nor does Little Schemer (nee Little Lisper).
"On Lisp" (1993) just drops hashes and vectors in your lap as it expects you to already know about them. "ANSI Common Lisp" by Graham doesn't appear until 1995.
CLtL and R4RS are references. You sure aren't going to learn Lisp/Scheme from those.
Sure, once Perl taught the universe how useful hash tables were, everybody started going "Hey, we have those too...", but prior to that is a big hole.
>People can build amazing programs with just Vectors or Hashes. People can build amazing programs without understanding recursion, evaluation, macros, or metalinguistic abstraction.
That's what Common Lisp provided back in 1984: basic data structures like strings, characters, vectors, various numbers, I/O streams, records and hashes. It offered a lot of Lisp-specific stuff in addition, but it had everything needed to write plain programs.
> recursion was a negative, not a positive
That's why real-world Common Lisp usually avoided recursion early on and Common Lisp has lots of non-recursive iteration facilities. Scheme made some forms of recursion 'cheap' by introducing tail call optimisation (TCO) - which often is provided by Common Lisp implementations, too. But TCO was not standardised because it was difficult to integrate it with the rest of the language back then.
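To make that concrete, here is a small sketch of the standard, non-recursive iteration facilities; the recursive version is included only to show that nothing forces you to write it that way:

    ;; Summing 1..10 without recursion
    (loop for i from 1 to 10 sum i)            ; => 55
    (let ((total 0))
      (dotimes (i 10 total)
        (incf total (1+ i))))                  ; => 55
    ;; A tail-recursive version also works, but a conforming CL
    ;; implementation is not required to optimise the tail call:
    (defun sum-to (n acc)
      (if (zerop n) acc (sum-to (1- n) (+ acc n))))
    (sum-to 10 0)                              ; => 55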
> I do not remember any discussion of Hashes, Vectors, etc.,
There were some books with more interesting examples, like the then popular LISP from Winston/Horn, where the 3rd edition from 1989 was fully moved to Common Lisp.
For key-value data structures Lisp traditionally used property lists, association lists and forms of search trees; later, some forms of object systems, which also map keys to values - like frame systems. Larger Lisp systems used hash arrays (Interlisp) or hash tables (Common Lisp). Hash tables were indeed not that often a topic in the literature.
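For anyone who hasn't seen these idioms, a quick sketch of the three standard key-value structures mentioned above:

    ;; Property list: a flat list alternating keys and values
    (getf '(:name "Ada" :born 1815) :born)                    ; => 1815
    ;; Association list: a list of (key . value) cons pairs
    (cdr (assoc :born '((:name . "Ada") (:born . 1815))))     ; => 1815
    ;; Hash table, standardised in Common Lisp
    (let ((h (make-hash-table)))
      (setf (gethash :born h) 1815)
      (gethash :born h))                                      ; => 1815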
> CLtL and R4RS are references
CLtL1 was quite good for learning Common Lisp. It's a nicely readable book, not dense like R4RS. CLtL2 made things difficult, because it presented the then-in-progress ANSI CL (before it was actually finished) and all the deltas from the older CLtL1.
To be fair, that's a 2016 book. Pretty awesome though, I have to say - I'd highly recommend it for anyone who knows a bit of Common Lisp (e.g. after PCL). It's basically the "Effective C++" of the Lisp world.
Slightly disagree with this... pg's work is the manual, though Peter's examples and explanations of the concepts in Lisp are practical to the core, to the point that you reuse your own code from the earlier chapters in the later chapters. That sealed the deal, at least for me. Learning a new programming language is often bland, and engaging the reader practically isn't as easy as it may seem.
For me, during university, ANSI Common Lisp was the Kernighan & Ritchie of Lisp - a standalone physical source of everything that I needed to start doing interesting things. Since then I haven't found another "canonical book" for any other language that I jumped on, instead relying basically on the internet. Along those lines, I loved "The Haskell School of Expression", though!
Over 10 years old, but still the best modern Common Lisp book. It helps that it was written by someone with a lot of experience in other languages (Java), so it's less captive to the Lisp ivory tower and shows a truly practical approach.
Quicklisp is not a panacea, even though it simplifies matters to a substantial degree for people new to Common Lisp.
On the other hand Quicklisp has serious issues:
+ Minimal if any documentation of internals.
+ A substantial chunk of the codebase can only be described as spaghetti code. To make matters even worse, most functions lack documentation strings. A sad state of affairs given the interactive and self-documenting nature of CL.
+ Is vulnerable to man-in-the-middle attacks since it verifies neither certificates nor checksums. This means that using Quicklisp can get you owned. Unacceptable these days.
+ Operates over a 'curated repository' model that Xach is managing. The repository has been found to be vulnerable to man-in-the-middle attacks in the past since packages were fetched over plain HTTP or git://. Even if that wasn't the case, it's yet another element in the chain of things to trust.
+ Few if any people besides Xach are working on quicklisp-client to mitigate these issues. Lack of documentation and spaghetti code is not helping to attract new talent.
+ Xach is employed by Clozure Associates. I understand that they've allotted him some time to work on Quicklisp while on the job, but it's still a side project. Look at the number of open issues on GitHub. Many have been there for years.
For these reasons, I don't use Quicklisp myself. I also advocate against it to friends and colleagues who share my views on software distribution mechanisms.
I hope to improve the documentation and security of Quicklisp in the next few months. It will depend on funding.
The core of Quicklisp is in dist.lisp. Understanding the protocol of the generic functions at the start of that file will help clarify Quicklisp as a whole. Almost everything else Quicklisp does is in support of that protocol.
That's great to hear. On your donations page you have indicated that there will be a special fundraiser. If that's no longer the case, I suggest you remove it, as people could be waiting for it and not donating.
> It's not that there are no documentation strings. Example:
At the end of the day it's the quality of documentation that matters. He did document some of the generic functions, but in a manner that is not very productive, as the doc strings are both minimal and don't give you an idea of how the thing fits into the overall picture.
I would bet money even Xach would be confused going back to that codebase after a year of non-exposure.
I spent an afternoon trying to figure out how to add cryptographic checksum verification support. One could say the simplest of tasks. After a few hours of diving into that codebase, a picture of the whole was not emerging. Eventually I gave up in frustration. This is not ideal.
Well for some of us the lesser evil is unacceptable. The rest will happily use Quicklisp.
If I had to guess, however, I would say that the vast majority of Quicklisp users are not aware of the tradeoffs they are making. It's troubling that none of the security implications I mentioned are listed, or even hinted at, on the Quicklisp site https://www.quicklisp.org/beta/release-notes.html or anywhere in the Quicklisp documentation.
One alternative is to improve Quicklisp. I don't want to be seen as blaming Xach here, but he's solely responsible for the lack of documentation. For CL docstrings especially, their absence is unfathomable for a core project that is positioned for wide usage. Every function, method and dynamic variable should have a docstring.
You don't add these after the fact, you add them when you're writing the code.
If Xach improves the documentation situation, people will step in and start fixing issues.
The HTTP issue, I guess, can be easily solved. In fact, if I try to manually download the same package over HTTPS (I mean packages with a url from beta.quicklisp.org), it works, so maybe the solution is very simple.
The solution is far from simple if you want to do it in 100% Common Lisp as there's no CL HTTP client that can verify certificates. Maybe possible with drakma and cl+ssl and a custom configuration of the latter and a native OpenSSL library, and I still have doubts.
That is shocking! I do not know the Common Lisp community well. Am I right in thinking that Racket has an HTTP client that can verify certificates, or am I underestimating the complexity of SSL?
A wonderful book, well worth studying. If only in order to gain a wider perspective on programming languages, and perhaps forget, at least for a while, about the language wars and the ongoing "C bashing."
It may not be obvious to those who have only had limited experience with it, but Common Lisp, in itself, is an extremely expressive and powerful programming language. Also, there are compilers, such as SBCL, that generate very efficient machine code; others, such as ECL ("Embeddable Common Lisp"), make it very easy to combine Lisp code with code written in C, thereby providing the ability to seamlessly integrate high-level, easy-to-use constructs with low-level, performance-critical pieces of code.
Indeed. As much as I dislike CL's quirky inelegance (I prefer the scheme side of the Lisp family tree), it's a tremendously powerful and effective language, and one of the best choices for Getting Stuff Done Right Now using Lisp. In fact, probably the best choice.
On the other hand, quoting the Einstein passage found in the famous book by Sussman and Wisdom, who use Scheme to teach classical mechanics: "I adhered scrupulously to the precept of that brilliant theoretical physicist L. Boltzmann, according to whom matters of elegance ought to be left to the tailor and to the cobbler."
In this case, though, I think it still works. It says nothing of the work they do, but speaks more to the goal of their work. Tailors/cobblers (fashion, in general) are tasked with making elegant attire. It is true that some are tasked with making practical attire.
Physicists, however, are never tasked with making elegant physics. It is rewarded. But it is not directly their task.
Quirkiness isn't good in a PL: you want your language to act the way you'd expect, with consistent arg orderings, function names, etc. That way, you don't get it wrong.
[ANSI] Common Lisp may be unique among programming languages in that it is based on a standard and that standard was written to reflect how people were actually using Lisp after roughly thirty years of general use and about a decade of actual Common Lisp use.
The other thing about Lisps is that if a programmer doesn't like the order of arguments or the function names or even the parenthesis, the language lets them change whatever it is they don't like and the changed features still have first class status. That's also one of the criticisms of Lisps.
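A trivial sketch of what that looks like in Common Lisp (the new names here are made up, not anything standard):

    ;; Give REMOVE-IF-NOT a name you like better; the alias is a
    ;; first-class function just like the original:
    (setf (symbol-function 'keep-if) #'remove-if-not)
    (keep-if #'evenp '(1 2 3 4))          ; => (2 4)
    ;; Or flip an argument order you find backwards:
    (defun subtract-from (subtrahend minuend)
      (- minuend subtrahend))
    (subtract-from 3 10)                  ; => 7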
I think the specification committee for ANSI Common Lisp would have agreed that defaults matter. It's just that their criteria for defaults was "What professional programmers are doing right now".
Historically, Scheme somewhat dodged issues like argument order by excluding functionality from the language spec. My experience is that once Scheme gets expanded out and becomes Racket, the many hands show up as inconsistencies among similar functions, much as in Common Lisp.
Clojure, to me, seems to have the right idea: procedures that abstract operations from data types...for example there is not one mapping procedure for lists and another mapping procedure for arrays. Racket seems to be headed in that direction.
In Clojure map also works on hashes and sets. In keeping with the previous discussion, there's also only one variety of map in Clojure. It's this orthogonality of operations and datatypes that I think is Clojure going in the right direction.
None of which is to say that Clojure isn't standing on the shoulders of giants.
I like the core Scheme language, but I never could get used to its macro syntax. I prefer Common Lisp macro syntax for that, and Clojure's macro system being very similar to that of Common Lisp is an added advantage!
Syntax-rules isn't great. But pretty much every scheme has recognized this, and added an imperative hygienic macro system (think Clojure's macro system, where you can't leak environment unless you want to, but better). There are three main choices at the moment: syntax-case, sc macros, and er/ir macros.
er (or explicit rename) macros are the simplest system: they're like CL macros, with a few minor differences. Also, because Scheme is a lisp-1 with a mutable global environment, you have to rename absolutely everything. Even lambda. Yes, really.
Because of this, CHICKEN (the only scheme that uses er macros) also has ir macros, which are like er macros, except that everything is implicitly renamed unless you say otherwise. However, expanding an ir macro is O(n) under the hood, which sucks. This is probably the worst part of CHICKEN, but it works well enough.
sc (syntactic closure) macros are similar to ir macros in nature, although the syntax and abstraction are different. It's a pretty nice system, currently used in Chibi and MIT Scheme, and (while it's really too early to tell) seems to be the macro system most likely to make its way into R7RS-large.
Finally, there's syntax-case. It's used by racket (I think: racket's syntax-case has apparently been heavily extended) and guile, and is the R6RS macro system.
Personally, I don't like syntax-case. IMHO, it's overly complex, it throws away the standard macro abstraction for little benefit, and is generally a pain to use. But it's not objectively badly designed, and some people seem to like it (conveniently, this description, with some minimal modification, applies quite well to R6RS itself). You might be a person that likes it. I don't know. All I know is that I am not one of those people.
The second program will work in every Scheme I can think of (depending on what you want returned when the condition fails, but it shouldn't crash). I don't know what T did differently, but it's no longer the case.
The biggest problem in spreading lisp nowadays is the fragmentation of the documentation.
"Lisp" means so little when you're writing real code.
Each implementation has its own quirks and even venerable books fall when confronted with real world code at learning.
For example: I was reading SICP and using Racket, which looks like an interesting runtime nowadays. Turns out that a while ago the dev team made cons immutable, and that's okay I guess, but it broke (at least for me) the experience of reading SICP, because now I have to pause and learn this quirk of this specific lisp implementation.
And don't even get me started on common lisp. There are at least 5 or 6 major common lisp runtimes, each of them incompatible somehow (try and read some of the StumpWM "makefiles" and you'll have a taste of what I'm talking about).
Also, common things (threads, for example) are still not provided in a uniform way across all of the implementations, and are often more or less quick hacks.
Scheme? Cool! SRFIs! What part of Scheme does your scheme actually implement?
Would you use a C compiler without structures? And write code so that it can be compiled with another C compiler that has structures but doesn't have for loops (only while loops are provided)?
Lisps are awesome languages, but for me, no wonder they remain niche languages.
And before you downvote, please keep in mind that I am not booing Lisp; I would just love it to be more "sane" minded, less schizophrenic as a language/runtime, and more real-world-usable.
There are at least 5 or 6 major Lisp implementations that follow the standard very closely. They might have the odd quirk, and they all have extensions, but ANSI CL also provides a standardised way to handle implementation-specific parts. The situation there isn't any different from C.
Edit: Not to mention, C only got threads in the most recent revision of the standard, and I can't say anything about their implementation because I've yet to encounter a project that uses C11.
Because it is too expensive (it's an ANSI standard, which makes things difficult), there are no sponsors for it, and not enough people are interested in doing the work.
The language has been evolving in libraries, portable substrates and implementations for some time now.
Note that the language has a lot of flexibility for change built in: reader macros for the s-expression syntax, macros for the Lisp syntax, meta-object protocol for the object-system, packages enable new Lisp dialects, ...
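As a tiny illustration of the first of those hooks, here's a hypothetical reader macro that adds new surface syntax in a couple of lines (DEBUG-PRINT is an imaginary function, purely for the example):

    ;; Make ?x read as (DEBUG-PRINT x)
    (set-macro-character #\?
      (lambda (stream char)
        (declare (ignore char))
        (list 'debug-print (read stream t nil t))))
    ;; After this, the reader turns  ?foo  into  (DEBUG-PRINT FOO)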
>For example: I was reading SICP and using Racket, which looks like an interesting runtime nowadays. Turns out that a while ago the dev team made cons immutable, and that's okay I guess, but it broke (at least for me) the experience of reading SICP, because now I have to pause and learn this quirk of this specific lisp implementation.
I'm pretty sure that you're supposed to choose the correct language in Dr. Racket if you want mutable cons semantics (such that you're using R5RS and not Racket).
Scheme is just one standardised dialect, and it's academically oriented. SRFIs are really a pain, and many fundamental things in Scheme, like namespaces and modules, were standardised very late. Thus portability among implementations is hard, but achievable if you stick to the Revised Report and ignore uncommon SRFIs, or all of them.
Common Lisp is a different story though, the standard is comprehensive and it's easy to go from one implementation to another, often requiring no effort.
In most cases where there's an incompatibility among two Lisp environments, you can just write a macro or two to sort it out, whereas with C, if you don't have a for loop, you either patch the compiler or use the while loop.
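For instance, papering over an implementation difference is often just a few reader conditionals; a sketch (in practice you'd probably reach for an existing portability library such as bordeaux-threads instead):

    ;; Portable-ish thread creation via reader conditionals
    (defun spawn (function)
      #+sbcl (sb-thread:make-thread function)
      #+ccl  (ccl:process-run-function "worker" function)
      #-(or sbcl ccl) (error "No thread support wired up for this implementation"))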
> "The biggest problem in spreading lisp nowadays is the fragmentation of the documentation."
Is it though? I think the biggest problem is employment prospects -- it's just not used enough in industry. I have used a lisp professionally, and I am of the opinion that that opportunity was a one-time thing career-wise for me. These days, I can't justify spending time on anything but Java or C++ in terms of my career trajectory. Would like to hear others' thoughts, though.
In chapter 1 you mention that Petzold's book is underrated. If that's any consolation, I bought the book when it came out and have talked about it to many of my friends since. I wouldn't say it's underrated, just not well known. Great book!
I forgot to say that I'm reading your text now :) I had a compiler class with Marc Feeley, author of Gambit-C, and I've been sold on Lisps ever since. I really love the simplicity of Scheme, but I found that it's much easier to get going (i.e. not having to re-write even basic things such as for-loops) in CL than Scheme. I can write/read some basic macros but my end goal is being able to write a read-macro for a DSL. I can't really envision doing this as easily without a Lisp.
Practical Common Lisp, along with pg's essays on Lisp, can be credited with inspiring the Lisp renaissance of the mid-2000s. It's an incredibly good book, showing Lisp (unlike Scheme) is a practical programming language, with the ability to go from low-level bit-twiddling all the way up to very high levels of abstraction.
Reading it was eye-opening. I had no idea up until that point that Lisp is a real, industrial-grade language (as opposed to a very interesting didactic tool).
Even if you don't have an intention of programming in Lisp, you should read PCL for an idea of how much better the language you're programming in could be. Hopefully, it'll inspire you to write some Lisp, but even if it doesn't you'll be a better programmer for it. You'll be able to understand why static languages like Java have such arcane ecosystems; you'll be able to see how very little new there is under the sun.
I've been working in JS full time for a couple of months now, except for a short break to build a JSX parser for Emacs. Switching to lisp was such a breath of fresh air. The code was denser, and every line took longer to write, so not sure about overall productivity, but man was it more enjoyable to write!
I wish there were an update to this book. I seriously looked into giving up on JS and using Parenscript and Wookie or Hunchentoot, but it seemed like I'd be taking on a lot of yak hair. Does anyone here use any lisp for Web Development? What's the modern lisp (CL or otherwise) stack?
> Does anyone here use any lisp for Web Development?
I do.
> What's the modern lisp (CL or otherwise) stack?
There is no fixed stack, though people want to make you believe there is.
I use cl-who for HTML generation, my own lib for CSS generation (basically Sass with lisp syntax), a bastardized cl-json for JSON generation, and a heavily patched Hunchentoot as a web server, and in very rare cases parenscript - but only to generate JS class definitions from CL classes, since transpilation of business logic usually places too many constraints on what you can implement, and how, on the lisp side. For database access I mostly use cl-postgres but fall back to cl-sql if the client insists on a DB other than pg.
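For anyone who hasn't seen cl-who, the idea is that HTML is written as s-expressions; a minimal sketch, assuming the library is loaded:

    ;; S-expression in, HTML string out
    (cl-who:with-html-output-to-string (s)
      (:html
       (:body
        (:h1 "Hello")
        (:p "Generated from Lisp"))))
    ;; => roughly "<html><body><h1>Hello</h1><p>Generated from Lisp</p></body></html>"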
I also use lisp for web development, and want to second this:
> There is no fixed stack, though people want to make you believe there is.
My stack:
I use parenscript[5] fairly heavily (to the point where I'm now a contributor). Note that parenscript is mostly javascript semantics with lisp syntax, but macros for it are written in common-lisp, which makes a lot of the javascript annoyances go away.
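A small sketch of what that looks like: the PS macro compiles a Lisp form into a JavaScript string, and ordinary Common Lisp macros can run during that compilation (output shown approximately; exact formatting varies):

    (ps:ps
      (defun greet (name)
        (alert (+ "Hello, " name))))
    ;; => roughly "function greet(name) { return alert('Hello, ' + name); };"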
Like jlg23, I use cl-who for html generation.
I spent a while experimenting with css generation, but now just use something prepackaged (currently Pure[1]). If I had need for a custom look, I would pay someone who knows graphic design to generate a layout, and I'd code to that.
I've rotated between several JSON libraries to the point that I couldn't say which one I used for my last project; for JSON I'm very opinionated on the proper mapping from JS types to lisp, and none of the libraries do the Right Thing out of the box, so I wrap them with something that will.
As users expect something less like "fill out a form and hit submit" and more like "instantly responsive web application that saves my work as I go", I started experimenting with using parenscript with various JS application libraries. I found React to be okay[2], but much prefer the simplicity of Mithril[3].
For webserver, I use clack[4] which is in roughly the same space as WSGI is for python or Ring is for clojure. It is sadly severely lacking in documentation (at least in English). A clack tutorial is on my "todo" list.
I happen to run clack behind mongrel2, but that's because it's the server I'm most familiar with; it has backends for FastCGI and several native lisp web servers, and adding new backends is very easy (the mongrel2 backend is under 200 lines of code).
For a database I use postmodern[6] (a library for pgsql) and I use cl-redis[7] for quick & dirty projects, as a key/value store tends to make for more rapid prototyping.
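In case it helps anyone evaluating the stack, the postmodern side looks roughly like this (connection parameters are placeholders, not a real setup):

    ;; Connect and query with postmodern
    (postmodern:connect-toplevel "mydb" "myuser" "secret" "localhost")
    (postmodern:query "select 42" :single)                    ; => 42
    ;; Or use the S-SQL syntax instead of a raw string:
    (postmodern:query (:select 'name :from 'users) :column)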
Thanks! Lots of great reading here. The React piece looks interesting, but would definitely like to hear more about Clack. My biggest hangup is around infrastructure. I wouldn't mind terribly re-inventing the wheel a bit in terms of client-side and business logic (if I can use lisp), but dealing with a FastCGI deploy makes me cringe a bit. Maybe it's just bad memories from a site I took over that was on shared hosting.
I second the request for more information about Clack. I'm using Hunchentoot at the moment and am about to start some stress tests. I hear that Clack has higher performance.
Clojurescript works, but Clojure is an odd beast, especially for those used to more traditional lisps.
There are many excellent server-side lisp frameworks (I don't know CL super well, but I know they've got them, and here in schemeland, we've got Awful, Artanis, and whatever Racket uses (even though Racket isn't really a scheme), depending on your implementation; as a Chicken fan, I like Awful). However, the client-side situation isn't so great. CL doesn't have a client-side version, AFAICT (although with WASM, this may come). Chibi's got an emscripten-compiled version, but then you're running one VM on another. Chicken has Spock, which is an almost-R5RS that compiles to JS, but the cost is high: it reportedly stresses your browser in unusual ways, has a runtime you have to add to your page, and it doesn't have eval (hence "mostly" R5RS).
Maybe now that WASM's here, things will get better.
It's a Scheme-to-C compiler (with an interpreter, of course) that implements Cheney-on-the-MTA compilation (so continuations are almost free), has a very nice macro system (although with some unfortunate performance implications), and has a relatively large library repository (for a scheme), as well as a stdlib that emphasizes practicality over purity (it even includes a regex library in the default install) while still keeping the simplicity that makes me love Scheme: I can keep the whole language in my head. Plus, if it doesn't have the library you need, writing C bindings is a piece of cake (because it compiles to C anyway).
You have a lot of good libraries in Clojure, as well as ClojureScript for the front end with nice JS interop. For Common Lisp there's some nice stuff too that builds on Hunchentoot, but I can't remember the name. The book Common Lisp Recipes has a lot to say about those topics too; the perfect complement to Practical Common Lisp, in fact :)
> You have a lot of good libraries in Clojure, as well as ClojureScript for the front end with nice JS interop.
I've been writing Clojure for about a month now, and I love it.
My background is mostly in Python, and I picked up Ruby about a month before Clojure. My rampup in Ruby was really about a week, given that much of it is so similar to Python - it was mostly about learning conventions and what the community considers to be "idiomatic Ruby".
Clojure was much more difficult to learn, and much more fulfilling/enjoyable. It's definitely changed the way I think about my code in all languages and that's a good thing. One thing I will say though is that the majority of the time I've spent struggling with Clojure has actually been struggling with Java.
For example, I wrote a collection of forms that decrypt and validate an auth token sent by a third-party legacy application. This required using javax.crypto, which was a nightmare. I was porting code from Ruby, which was using OpenSSL. At one point I had references for Clojure, Java, Ruby, OpenSSL, and C all open at once, trying to figure out what was going on. In the end that particular problem was because Ruby's OpenSSL wrapper magically uses an IV of eight null bytes if you don't supply one, while javax.crypto didn't use one at all - but it was one of the more difficult issues I've had to debug in recent memory.
I guess what I'm trying to say is that I would absolutely recommend Clojure as a first Lisp to learn, but I would also suggest that the person doing so either already have some experience in Java or have a way to reach out to someone who does.
I'm planning on picking up CL at some point soon as well, but from what I've experienced and gleaned from the experience of others, Clojure is probably the better choice if your area of interest is mostly the web, because of the dev community and access to Java libraries.
Clojure is pretty drastically different from Lisp in most ways. If you wrap your head around macros in Clojure that will mostly carry over to CL, but just about everything else will be a learning experience. They're both very cool languages, but much more different from each other than for example Python and Ruby are. If you're interested in Lisp, learning Clojure first is sort of like learning Spanish because you're interested in French.
> Clojure is pretty drastically different from Lisp in most ways.
I remember tinkering with Scheme when I was much less experienced, and getting hung up on the constant recursion and singly-linked nature of the iterables. I'll get back to that, but I strongly suspect that Clojure will remain more practical for the type of work I do (back-end web, mostly).
> If you wrap your head around macros in Clojure that will mostly carry over to CL
I've not tackled writing them yet, but I've read a ton of the language internals and other people's code, and I'm getting to the point where I can anticipate where they've used macros and why. I've also begun reading - slowly - Let Over Lambda, which has kinda blown my mind. My colleagues with more Clojure experience strongly advise me to treat macros as black magic that should rarely be used, but my opinion is slowly but surely going the other way. Macros seem like most of the point of using a homoiconic language, and to reserve them for edge cases seems unduly conservative. We'll see if that opinion changes as I learn more and begin to use them in my own work.
> If you're interested in Lisp, learning Clojure first is sort of like learning Spanish because you're interested in French.
I'm honestly not sure what I'm interested in learning right now. I feel like I fully grok Python. Though of course there are things I can't sit down right now and write without looking at a reference, I know what tools to use when and why they're appropriate. Having not acquired a compsci degree, I'm basically looking into the more esoteric languages and design patterns at this point, trying to find concepts that expand my abilities and knowledge. FP was one of those things, the concept of homoiconicity was one, and macros seem to be another.
Haskell looks interesting to me as well, but I'm cautious of the passion of their community. My impression is that they believe they are the "one true functional language", and with things like Lisp having been around for much longer in that role that seems a little too "flavor of the month" for me.
> I remember tinkering with Scheme when I was much less experienced, and getting hung up on the constant recursion and singly-linked nature of the iterables.
No one writes code like that in Common Lisp. Both Scheme and CL have more than singly-linked lists, as well.
> I've also begun reading - slowly - Let Over Lambda, which has kinda blown my mind.
I haven't read that one but I've heard good things. I can personally vouch that Paul Graham's On Lisp is excellent, as a book on macros, Lisp, or just in general.
> My colleagues with more Clojure experience strongly advise me to treat macros as black magic that should rarely be used, but my opinion is slowly but surely going the other way. Macros seem like most of the point of using a homoiconic language, and to reserve them for edge cases seems unduly conservative. We'll see if that opinion changes as I learn more and begin to use them in my own work.
If a function will do, use a function, but macros serve a different purpose. Over time you build up an intuition for which one you need in which situations.
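The classic illustration of "a function won't do" is anything that must control whether or when its arguments get evaluated; a minimal Common Lisp sketch:

    ;; A function receives arguments already evaluated, so it can't delay them:
    (defun broken-unless (test body) (if (not test) body))  ; BODY ran before the call
    ;; A macro receives the unevaluated forms and decides what to do with them:
    (defmacro my-unless (test &body body)
      `(if (not ,test) (progn ,@body)))
    (my-unless (> 1 2) (print "only evaluated when the test is false"))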
> I'm basically looking into the more esoteric languages and design patterns at this point, trying to find concepts that expand my abilities and knowledge.
It looks like I'm a bit further on this particular road, so let me give some suggestions:
1. Take a look at array-based programming languages. I used J (google "J lang" or go to jsoftware.com). It's incredible how a single two-character symbol can encode concepts such as recursion and still compose easily with any other function in the language. Learning J or APL or K will change the way you think about tables, probably influencing both your NumPy code and your SQL code.
2. Try some concatenative languages. Forth and Factor are fine examples in this category. Forth is a very low-level, completely untyped language with a built-in interactive mode; it's mostly written in itself and bootstrapped with a bit of assembler. Factor is a modern language with rich data types and a batteries-included standard library. Both use a stack for passing arguments and both are interactive, but that's all they have in common, so learning both is not a waste of time.
3. Learn a pure object-oriented language, like Smalltalk (I think Pharo is the most popular implementation right now). It's not at all a Java-like experience; having only objects at your disposal is not that bad if you can make the objects behave just the way you want them to: even things like the stack are objects and can be inspected and manipulated like all other objects. There's also Io to look out for: it's not as impressive as a full Smalltalk environment, but it's even simpler semantically and allows easy syntactic extensions.
4. Learn a logic programming language. It's great for quickly solving silly puzzles. It's not bad for dealing with textual data. It honestly sucks for everything else, and everything else breaks the pure-logic semantics, so it's not that practical. It's a bit better when used as an embedded DSL from some other language.
5. Learn (delimited) continuations as implemented in Racket. There are other languages which support some form of continuations, and there are even some frameworks which use continuations in these languages (see Nagare web framework for Python or Seaside for Smalltalk). Continuations will give you tools for understanding generators, coroutines, restarts and many other language constructs.
I use ClojureScript. It's much more than just a lisp syntax layer over JavaScript. It has lots of features, and of course the generated JS is larger than handwritten JS would be.
It might be useful, even though it's from 2008, if it focuses on fundamentals.
Saved it for later reading, since I'm not learning Lisp currently, though had read a part of the Practical Common Lisp book earlier. (Had liked what I read of it.)
Welcome :) It did look good to me too, though I haven't read it fully yet. Nice to know you thought it worth buying, though I'm not connected to the author in any way.
The short answer is yes, and it's a very nice way to do web development. If you're interested in details, I did a talk on this topic not very long ago.
I've written a web server in Lisp, which I use at home, mainly for debugging - everything in a running Lisp image has a URL and I can use a web browser to examine the contents of data structures while my code is running.
I presume you can do the same with Common Lisp tools. Does anyone else here do that?
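Not the parent's actual setup, but the general shape of such a thing in Hunchentoot might look like this (a debugging toy only; never expose something like this publicly, since it reads arbitrary symbols from the query string):

    ;; Start a server and expose a trivial "inspect a variable" URL
    (hunchentoot:start (make-instance 'hunchentoot:easy-acceptor :port 4242))
    (hunchentoot:define-easy-handler (show-var :uri "/inspect") (name)
      (setf (hunchentoot:content-type*) "text/plain")
      (format nil "~A = ~S" name (symbol-value (read-from-string name))))
    ;; Visiting /inspect?name=*features* then shows the value of *FEATURES*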
In your example, the LAMBDA form is preceded by a sharpquote. So #'FORM is syntactic sugar for (FUNCTION FORM). This is a way to reference a value in the function namespace instead of the variable namespace (Common Lisp is a "Lisp-2"... actually more than that, but that's the common terminology). Usually this sharpquote would precede the name of a function when it is passed as a value. (Interestingly, it is optional on the LAMBDA form.)
Generally, a preceding single quote in Common Lisp, e.g. 'FORM, is syntactic sugar for (QUOTE FORM), which means the form is read in without being evaluated. You'll see lots of single quotes used for that purpose as well.
You can see that this does not match your expectation. Are you actually trying to type that code and evaluate it at a Lisp listener? If not, you should.
Now, FUNCTION is a special operator that in this case returns the function named EVENP.
QUOTE is a special operator that just returns the object it is passed.
So REMOVE-IF-NOT will get called with two arguments: the function named EVENP, and a list of integers from 1 to 10.
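Putting that together at the listener (exact printing may differ slightly between implementations, but the result is the same list):

    CL-USER> (remove-if-not #'evenp '(1 2 3 4 5 6 7 8 9 10))
    (2 4 6 8 10)
    CL-USER> (remove-if-not #'(lambda (x) (zerop (mod x 3))) '(1 2 3 4 5 6 7 8 9 10))
    (3 6 9)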
What is the current state of affairs with regard to Common Lisp on Windows?
I remember that when I was last trying to explore it, that was the single biggest hurdle - all the recommended freely available "industrial strength" implementations (SBCL, CMUCL) were either unavailable on Windows outright, or considered experimental and highly unstable there. The stuff that was available was either proprietary (and costly), or its performance was not deemed sufficient for production use (e.g. CLisp).
But that was several years ago. Did anything change?
I use CCL on Windows, and at times SBCL. SBCL does have a large warning about lack of maintenance and potential instability or something like that, but I haven't used it heavily enough to expose such behavior.
CCL has no interpreter, so any code you write gets compiled to native code. People worry too much about performance too prematurely in my opinion.
I hate windows in general and try to avoid using it and targeting it whenever I can, but for the past year I did use windows+ccl+emacs to successfully get work done, which I was able to easily transition back over to macos+ccl+emacs just recently.
I would say that at the very least, the hurdles you ran into in the past are gone. Binaries for sbcl on windows are supplied, and I'm pretty sure that threading is enabled in recent builds so no need to recompile yourself (I remember wasting so much time on this a few years ago and moved to ccl).
I recently ran into a workflow that CCL easily accommodated but SBCL didn't: when you run out of heap, CCL automatically allocates more, whereas with SBCL the heap size seems to be fixed at startup and requires more than a command line flag to change (I couldn't figure it out in the hour or so I spent working on it).
Clozure looks very interesting, thank you! Got it up and running in a few minutes, and the docs look promising. I think it wasn't around last time I tried to seriously explore CL. Good to see more options!
It is a lot better, I don't think SBCL gets the "kitty of doom" start up message anymore. I'd love it if you'd try and let us know :-) (or you could just ask on #lisp). I know for a fact that Clozure CL works well on Windows.
This is an awesome book. I read it online back in 2006 (one of the few books I followed from start to end). Even though I mostly programmed in other languages, the approach and techniques in this book enormously improved me as a programmer.
Thank you for writing this book and sharing it with the world!
If I may suggest, please add 'next' and 'previous' links within the chapters of the online book. It's a little annoying to have to navigate back and then click on the next chapter to read on.
This book looks good from a cursory glance. Can anyone please tell me if I should read this one or Paul Graham's ANSI common lisp. I have some background in scheme due to SICP.
Graham is very clear, concise and readable. Though I liked it, I think it expresses a rather personal view of CL, deliberately excluding parts of the language. Overall PCL is more idiomatic. It's probably better to read them both.
The cool thing about the challenges is that they are language agnostic so you can try whatever you're interested in (CL, Scheme, Racket, Clojure, …). A bit like Project Euler (https://projecteuler.net) but more fun.
Hey tosh! Funny seeing you here (especially mere hours after I replied to you on Reddit)
AoC is great. I've been doing it in Scheme. Except today's challenge, which I did in AWK, because it was perfectly suited for the task (although I could have also used the AWK scheme macro). Plus, in Scheme, you don't wind up with code like this
{if(($1+$2>$3)&&($1+$3>$2)&&($2+$3>$1)) count++;} END {print count}
Feedback is very helpful in learning. It's why so many of us get such satisfaction out of building physical things or making art. It's because the feedback is immediate, among other things.
It's more abstract with programming, but programming a website gives pretty good and immediate feedback. Plus you can show it off.
So even though I'm not a web dev, I still think learning to build a dynamic website or web tool or service is a really great way to get introduced to a language. It will give you a rooting that you start branching out from.
Clojure in particular is pretty good at the web dev thing. I'd just stick to server stuff to start with personally.
SBCL is very good and what most of the open source community uses. Clozure CL is also very good, reasonably popular, and has good Cocoa support. LispWorks is excellent, but proprietary; the paid versions are very expensive but there is a free 'Personal' edition as well.
I would recommend SBCL unless you want to do GUI stuff.
I wish that he would write some more books :).