ABCL – Common Lisp on the JVM (common-lisp.net)
145 points by simonpure on Dec 10, 2019 | 65 comments



I poked around a bit to figure out why this was posted: there was a new release, 1.6.0, on November 22nd, which adds support for JDK 11, among other minor things. ABCL provides a Java class which can load/eval Lisp code, the ability to call Java classes from your Lisp program, stuff like that.

https://abcl.org/svn/tags/1.6.0/CHANGES
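To make the interop concrete, here's a hedged sketch of the Lisp-to-Java direction using ABCL's JAVA package, as I recall it (ABCL-specific; it won't run on other CL implementations):

```lisp
;; JNEW instantiates a Java class, JCALL invokes a method on an
;; instance, and JSTATIC calls a static method by name.
(let ((map (java:jnew "java.util.HashMap")))
  (java:jcall "put" map "lang" "Common Lisp")
  (java:jcall "get" map "lang"))                    ; "Common Lisp"

(java:jstatic "parseInt" "java.lang.Integer" "42")  ; 42
```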

As an "old" I learned LISP in college but have never had the opportunity to use it in my career. Still on my todo list!


Don't do it, you'll hate every other language after that (I do).


I like Lisps conceptually. However, I have a strong preference for static type systems, so I have probably written 10x more lines of code implementing Lisps than Lisp code itself.


A Common Lisp implementation may have static types:

    (defun foo (x)
      (declare (type fixnum x))
      (1- x))
If you don't specify them, types all default to T.


Although I like OCaml way better than Lisp, there is some static typing in Common Lisp. Sure, there would be a lot of T (the universal supertype) in your signatures, but you can maintain some typing discipline within it (though not as strict and beautiful as in ML languages).
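As a small illustration of that discipline (the function name here is hypothetical; compilers such as SBCL use declarations like these to warn at compile time):

```lisp
;; DECLAIM a function type and a good compiler will flag
;; mismatched calls during compilation.
(declaim (ftype (function (fixnum) fixnum) add1))

(defun add1 (x)
  (the fixnum (1+ x)))

;; (add1 "oops") ; SBCL reports a compile-time type warning here
```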


Racket has Typed Racket, you might want to try that out!


Have you tried it yourself though? That thing is so slow it's barely usable.


Is it the type checking or the runtime that's slow?


Yeah, this happened to me when I migrated to ClojureScript. But it really only lasted until I (finally) got my arms around Elm. Lisp, of course, is a lot more versatile than Elm, since Elm is specifically a browser language, so Lisp has taken the job of being my scripting language away from Python and JS.


It makes everything feel like putting together IKEA furniture


Is that good or bad? IKEA is thrown around a lot as a metaphor, but most of the time I have no idea what it's supposed to mean.


I've never heard the IKEA metaphor before but it makes sense to me. There's really only one way to put any two pieces together. If they forgot to ship the right bolts, you're SOL.

Lisp is like a wood and metal shop. It may take a little more time to become proficient with the basic tools, but once you do, anything is possible. You don't hear Lisp programmers complain they can't do X with Y, and you don't hear builders complain they don't have the right piece to attach X to Y. If you don't have one sitting around, you take a few minutes and make one.


No good. Furniture is nice and cheap, but you assemble it yourself at home. Honestly, I find it easy, but I'm good with that kind of handiwork. Most people find the instructions unclear and difficult to follow. There are many references; off the top of my head I remember The Wire... look for "McNulty IKEA" on YouTube, I bet it will work.

For a little more money they deliver the furniture to your house and assemble it.


I've done it, now I hate Lisp. I've read On Lisp and Let Over Lambda, I've used Clojure in production, and I still hate Lisp, any and all of them. What am I doing wrong? Perhaps it's that reading code is more important than being clever when writing it. Or maybe that Lisp syntax doesn't pass the squint test[0] because it makes the wrong tradeoff of catering to the machine instead of us good ole' humans?

When I make my own language, I'll make sure to make macro calls as clumsy and hideous as possible to discourage their unexpanded use. I'll also make the whole language available at compile time but without syntax manipulation (a la Zig) to make most macros unnecessary.

So no, Lisp is far from "the language to end all languages" or whatever it is you smug Lithp weenies imagine it to be.

[0] https://www.teamten.com/lawrence/writings/the_language_squin...


I'm not sure what the Language Squint Test is supposed to tell me here. I can clearly see the structure of the Lisp code, too. It's only half as long as the Java one, which is great for reading. A single character can completely change the meaning of an entire block of Java code, too -- that has nothing to do with macros.

> I claim that this squint test is important, and that languages should deliberately make different constructs look different.

For your example, you just happened to pick a programming language you already know well. If this is a beneficial goal, why pick that particular point on the spectrum? Java and Lisp aren't the only games in town. It would be a remarkable coincidence if the language you already know were the optimal point.

COBOL's constructs look even more distinct from each other. A COBOL programmer probably thinks your Java methods all look the same despite having completely different calling conventions. After all, COBOL was definitely designed for "us good ole' humans".


> When I make my own language, I'll make sure to make macro calls as clumsy and hideous as possible to discourage their unexpanded use.

Sounds like Rust beat you to it.


Maybe instead of reading On Lisp and Let Over Lambda you should have started with something that explains and encourages the use of CLOS, because that's where Common Lisp really shines.
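For readers who haven't seen CLOS, a minimal sketch (hypothetical classes) of the multiple dispatch that makes it stand out:

```lisp
;; The applicable method is chosen by the classes of *both*
;; arguments, not just the receiver as in single-dispatch OOP.
(defclass rock  () ())
(defclass paper () ())

(defgeneric beats-p (a b))
(defmethod beats-p ((a paper) (b rock))  t)
(defmethod beats-p ((a rock)  (b paper)) nil)

(beats-p (make-instance 'paper) (make-instance 'rock)) ; => T
```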


I know about CLOS, multi-methods, the meta-object protocol. It's a nice, dynamic OOP system (though I'm not a fan of dynamic languages). The thing is, CLOS isn't Lisp-specific: it can easily be recreated in a language with a normal syntax, e.g. Smalltalk. So there's no reason, ultimately, to sacrifice readability and maintainability to the demons of Parenthethes and Pervathive Macroth like the Lithp acolytes would have us do. Lisp is just a bad tradeoff from the get-go: reading code is more important than writing, language simplicity is more important than extensibility, and macro-writing is secondary to regular code writing.


Nice, but I think you are exaggerating a bit. Not every Lisper is a smug Lisp weenie. Actually I don't see many of those these days so I don't get the hate. Most of the annoying tribalist folks jumped ship to the Rust and Haskell camp.

Syntax is highly subjective, and funny that you mention Smalltalk, as I had a hard time selling that to some developer friends of mine because the method based syntax really put them off.

I also prefer statically typed languages, at least in the context of professional work with developers with various degrees of expertise contributing to the codebase. Then again, if I have to work in a dynamic language, I definitely take something image based with great tooling and feedback loop such as Pharo or LispWorks over Python or Ruby any day of the week.


Since people have different needs and tastes, there is no single type of syntax mechanism agreed upon.

Lisp generally favors flexibility, easy code generation/transformation, code as simple data structures, extensibility... that's attractive and useful for some.


A well-established implementation, but there haven't been many HN threads.

Just this from 2009: https://news.ycombinator.com/item?id=930514

and a bit from 2017: https://news.ycombinator.com/item?id=14605334


It's also a mostly complete implementation of the Common Lisp standard, with extensions, which is quite an achievement.


I'm amazed it literally has a class

    public final class Cons extends LispObject
    {
      public LispObject car;
      public LispObject cdr;
    }
I would have thought it would provide that abstraction to the programmer but use another representation internally. Chasing pointers like this on modern hardware is surely crazy?


We're using object-oriented languages like Java on modern hardware, so pointer chasing is quite natural. Typical Java programs will do that all the time. In many ways the Java runtime is relatively similar to a Lisp runtime - it's just not tailored at the low level to Lisp - for example, cons cells for linked lists are not provided at the virtual-machine level.

Since lists in typical Lisp implementations are made of cons cells, some form of two-element data structure is needed. Using Java objects has the drawback that this is not the most efficient representation.

Good Lisp implementations typically make sure that the cons cell has very little overhead:

* cons cells are just objects made of two memory words without a type tag. The tag is encoded in pointers to these cons cells.

* cons cells can directly encode some simple data like fixnum integers, characters, ...

* cons cells might be allocated in special memory regions and GCs might take special care

OTOH, more efficient low-level representations like CDR-coding, which allows some lists to be represented in vector-like ways, are mostly not used in current implementations, since they were no longer seen as worth the effort - they were thought worthwhile on older systems with much smaller main memories than we have today. Smaller main memories than the caches on a modern CPU.


>Typical Java programs will do that all the time.

True, especially considering that parametric polymorphism is done in an "everything is a pointer" way (although I've heard something about introducing "value types" in Java in the future).

Also Common Lisp's pairs are heterogeneous, polymorphic by nature, so I doubt there could be any other solution but pointers.

For performant computations you would need unboxed arrays in Java, just as you would use SIMPLE-ARRAY in Common Lisp (which should be implemented efficiently in ABCL as well).
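As a sketch of that last point (hypothetical function; with declarations like these, compilers such as SBCL generate unboxed code close to a Java double[] loop):

```lisp
;; Sum over a declared, unboxed vector of double-floats.
(defun sum-doubles (v)
  (declare (type (simple-array double-float (*)) v)
           (optimize (speed 3) (safety 0)))
  (let ((sum 0d0))
    (declare (type double-float sum))
    (dotimes (i (length v) sum)
      (incf sum (aref v i)))))
```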


In addition to cons cells being forced in some scenarios, this is Common Lisp, not Lisp 1.5: Programmers who want performance for linear collections are expected to use data types like vectors, which are primitive and not implemented in terms of cons cells at all.

http://www.gigamonkeys.com/book/collections.html

> Vectors are Common Lisp's basic integer-indexed collection, and they come in two flavors. Fixed-size vectors are a lot like arrays in a language such as Java: a thin veneer over a chunk of contiguous memory that holds the vector's elements. Resizable vectors, on the other hand, are more like arrays in Perl or Ruby, lists in Python, or the ArrayList class in Java: they abstract the actual storage, allowing the vector to grow and shrink as elements are added and removed.
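The two flavors from that quote, sketched with standard MAKE-ARRAY options:

```lisp
;; Fixed-size vector: contiguous storage, like a Java array.
(make-array 5 :initial-element 0)

;; Resizable vector: grows on demand, like an ArrayList.
(defparameter *v*
  (make-array 0 :adjustable t :fill-pointer 0))
(vector-push-extend 42 *v*)   ; appends, extending storage if needed
```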


Because CONS cells aren't used only for lists; they are a fundamental data type used as a building block for a few other things.

As such, they are required to hold any value or reference possible, and aren't exactly amenable to a different format.

There is one technique which had somewhat wide use, CDR-coding, based on the idea that you can tag a cell in a way that tells the system that the next field isn't a reference to a CONS cell, but the CAR of one, and thus linearize lists in memory. However, doing it fast requires a custom CPU so you can resolve this quickly in a transparent way, and it turned out not to give much benefit anyway. However, the widening gap between CPU and memory might make it attractive again...


A lot of things in Lisp force the implementation of lists as cons cells. It’s probably possible to do something different, but no one has, and the result won’t be a Common Lisp.

(Consider that the detail of improper lists is enshrined in the language.)


> A lot of things in Lisp force the implementation of lists as cons cells

This is one of the reasons that Clojure wasn’t written to be backwards compatible with CL. Rich chose to implement all of his core functions on the first/rest abstraction instead, as explained in his Clojure for Lisp Programmers talk.

https://youtu.be/nDm-QDEXGEA?t=32m47s


Common Lisp does have proper arrays as well, which can be used if the situation calls for it.

As it turns out, arrays aren't actually used that much (except for strings of course) in CL code. Most lists are just a few elements long, and regular lists are perfect for that.


The structure of a list is still exposed, and that decision is baked into the essence of Lisp. If you have a list, you cannot substitute a skip list or an array. If you have a skip list, it is (I think) not a language-level sequence and so things like rest and nth won’t work on it.

Arrays, structs, all other things are possible, but the things that make Lisp Lisp mean that you may get a list where repeatedly calling cdr may result in something that is not listp (cons or nil). And that's weird. It enables some things that are wonderful, but I think the Lisps are practically unique in conflating pairs and list elements.
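That conflation is easy to demonstrate at any CL REPL:

```lisp
;; A cons whose CDR is not a list: the "dotted pair".
(cons 1 2)              ; => (1 . 2)
(cdr '(1 . 2))          ; => 2
(listp (cdr '(1 . 2)))  ; => NIL -- one CDR and we've left the world of lists
```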


I bet CL allows generalizing nth and cdr. My point is still that the list structure in Lisp is very baked into the language.


Common Lisp has a standard abstract data structure called 'sequence' and generic operations for it. Sequences are either vectors or lists. Vectors are bit vectors, strings, general vectors, ... Sequence operations are then supported for all subtypes of sequences.

http://www.lispworks.com/documentation/HyperSpec/Body/c_sequ...
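A quick illustration of those generic operations working across both sequence subtypes:

```lisp
;; The same standard functions accept lists, vectors, and strings.
(remove 2 '(1 2 3 2))      ; => (1 3)
(remove 2 #(1 2 3 2))      ; => #(1 3)
(position #\s "lisp")      ; => 2
(map 'vector #'1+ '(1 2))  ; => #(2 3)
```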

There are some problems with this design. The main motivations for it were:

* basic operations in Lisp are type specific, one wanted to have some generic operations and a generic sequence abstraction over lists and vectors

* the low-level data structures like conses, arrays, tables are not object-oriented and generic

* generic operations in a dynamically typed language are slower, since there is always a runtime dispatch necessary

* to remove runtime dispatch one can use type declarations (which are optional in Common Lisp) or a JIT compiler (which is not used in most implementations, they do AOT compilation)

* the language at the time of this design had no agreed-upon OOP extension - CLOS came some years later -> sequences are non-extensible and don't support adding new sequence types

The design of this was later improved in languages like Dylan: see Dylan's collections

http://lispm.de/docs/prefix-dylan/book.annotated/ch12.html

Recently some CL implementations have been experimenting with extensible sequences.

http://www.doc.gold.ac.uk/~mas01cr/papers/ilc2007/sequences-...


I use arrays much more than lists, for the same reasons arrays are useful in any language. Plus they're dynamically resizable and you can map across them in Lisp just like lists. For many purposes arrays are a much better choice than lists in CL.


Definitely nothing wrong with that. When manipulating large sets of data, they are definitely necessary.


Last I heard (years ago), some Lisp implementations had tried using arrays for non-branching cons chains, but found that performance wasn't actually better than actual conses in general.


https://en.wikipedia.org/wiki/CDR_coding

> In computer science CDR coding is a compressed data representation for Lisp linked lists. It was developed and patented by the MIT Artificial Intelligence Laboratory, and implemented in computer hardware in a number of Lisp machines derived from the MIT CADR.

> CDR coding is in fact a fairly general idea; whenever a data object A ends in a reference to another data structure B, we can instead place the structure B itself there, overlapping and running off the end of A. By doing this we free the space required by the reference, which can add up if done many times, and also improve locality of reference, enhancing performance on modern machines. The transformation is especially effective for the cons-based lists it was created for; we free about half of the space for each node we perform this transformation on.


CDR coding isn't really used because Lisp now has standardized vectors, and one big reason to use lists over vectors is the ability to share structure.


Besides Clojure, it might be worth noting:

kawak - Scheme for the JVM https://www.gnu.org/software/kawa/

Bigloo is a Scheme that can compile to C. I think it can also make Java class files with bindings https://www-sop.inria.fr/mimosa/fp/Bigloo/


Just "Kawa" - not "kawak".

(There is actually a very-incomplete Common Lisp implementation as well in Kawa, but the Kawa Scheme dialect is the main Kawa language.)

(Hoping to make an official Kawa 3.1 release by the end of the year - but no promises. Until then, use git HEAD.)


The title of this post should be changed to "Armed Bear Common Lisp - Common Lisp on the JVM." It's an iconic name, and then people would actually know what the post title was referring to.


Is it so iconic? I've always referred to it as ABCL.

Seems like we've had about 60 posts as Armed Bear: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

and 200 as ABCL: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...


I believed so, but I guess that the numbers don't lie.


I guessed it was Aluminium Bank Common Lisp, some kind of lightweight version of Steel Bank Common Lisp.


That's actually a nice pun with a good explanation, while Armed Bear only raises questions for me. What kind of weapon would you arm a bear with? Why? Aren't bears unassailable enough as it is?


Bears are most commonly armed with 87.6 mm artillery:

https://en.wikipedia.org/wiki/Wojtek_(bear)

It seems it's the author's brand name for software, and is a pun on the US constitution's second amendment:

http://armedbear-j.sourceforge.net/

We may thus infer that you would arm bears because of the threat of armed bears. After all, to stop a bad bear with a gun, it takes a good bear with a gun.


Note that the first edition of Common Lisp the Language is the 'Aluminum Book'.


It’s pedantic, but I do agree.

Ahh, HN - never change. Sorry I mean, Hacker News - never change. :)


A bit off topic, but is Common Lisp still a good choice?

I spent a lot of time learning and coding CL during the last decade. I love lots of things about the language. Overall, it's great that it's truly multiparadigm. Few languages achieve that.

However, the implementations are too fragmented, which makes the standard quite stagnant. And libraries are falling a bit behind.

I love that Clasp is putting CL on top of LLVM. I wish we could have something with the best ideas of Racket, Clojure and CL on the LLVM.


Good choices depend heavily on circumstances, so, what are you trying to do, for how long, and with whom? Even if what you're trying to do isn't programming per se, but something like betting on how you'll make programming related money in the future, it matters to specify what exactly it is you're trying to do before you evaluate the choices.

I think at least it's hard to argue CL would be a bad choice in general, and it's certainly proved itself over the decades in many domains and circumstances... But there are always tradeoffs, and other languages can be better choices when circumstances push different ones.

A particular blog post that gets passed around (you may very well have seen it) is https://www.darkchestnut.com/2017/pragmatic-reasons-for-choo... It's just an example of the process, but it's still valid. His list of requirements might not be your list, or your weighting might be different. (It's been fun reading tweets from https://twitter.com/rainerjoswig this month just to appreciate how portable CL code is across CPU architectures and implementations and if you value that a lot why wouldn't you consider CL very highly; meanwhile others don't value portability and are fine with an effectively iOS-only language or whatever.)

I don't really follow your statement about implementations being fragmented --> stagnant standard. The standard is the standard, it's as immutable as C89. Or do you mean you're sad that there's unlikely to be a new revision to the standard that all the implementations will adopt, unlike C99 and beyond (which still isn't fully supported by all the big players)? That's understandable, but you probably used the portable libraries that offer the big things one might want from a new standard, so...


Clojure with a good C (stretch goal: C++) FFI on LLVM would be amazing. I somehow suspect the reason we don't have it in a fully baked form is that it's hard.

That said, there is Ferret, which is Clojure-like and compiles to C++: https://ferret-lang.org/

Racket is also lovely, and I hope that their port to Chez Scheme will enable alternative implementations, since more of Racket is now written in Racket rather than C.


SBCL is a de facto standard, and relying on it is fine.

Clasp is nice, but SBCL is fast enough that it's mostly extraneous unless you really need to talk to C++ code.


I’ve often wondered why SBCL hasn’t been used as the basis for some of the newer Lisps and Lisp-alikes. Its performance is pretty great, and it is worlds closer to Clojure than the JVM is.


We've been implementing Arc over SBCL. It's called Clarc. I can't seem to find time to work on it.

There's a bit more description here: https://news.ycombinator.com/item?id=21550123


Qi and Shen are pretty interesting. Qi compiles to CL, and Shen compiles to a tiny Lisp subset that is quite easy to port. But Qi's license is quite bad. I think Shen adopted BSD, but it is still quite niche or close to abandonware.


I think it is. A lot of the 'stagnation' is due to the fact that the standard is a standard, so Lisp implementations aren't changing the definition of characters willy-nilly — and so code written twenty years ago is still correct and performant enough, so there's no need to update it.

I would love to see a slightly more modern CL, but it's good enough to get work done, and it's still light-years ahead of everything else.


I would say it is not a good choice today, unless you have a legacy code base in CL.

There are more modern Lisps which have a much better concurrent programming story — which is why after years of working in Common Lisp I switched to Clojure.

The other huge advantage of Clojure is ClojureScript: most of my business logic code these days runs on both Clojure and ClojureScript, which is important for modern applications and lets me keep my line count low.

See my Quora answer (from several years ago, so some things have changed) for more on why I switched: https://www.quora.com/What-do-veteran-Lisp-programmers-think...


>A bit off topic, but is Common Lisp still a good choice?

As always, it depends on your application. For instance I decided recently that I would work my way through the cryptopals challenges, and opted to avoid Lisp since bit-twiddling is such a pain in that language.


Since bits are addressable directly in Common Lisp, I found the language to be very, very nice for bit twiddling. Maybe you're mixing it up with Clojure? (Bit twiddling in Clojure without extra libs is a PITA.)
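For example, standard byte specifiers make bit fields directly addressable:

```lisp
;; LDB extracts a bit field, DPB deposits one, LOGAND masks.
(ldb (byte 8 8) #xABCD)       ; => #xAB   (bits 8-15)
(dpb #xFF (byte 8 0) #x1200)  ; => #x12FF (deposit into bits 0-7)
(logand #b1100 #b1010)        ; => #b1000
```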


as I recall bit-twiddling isn't actually too painful in common lisp, but certainly is in some other lisps.


What would be the best way to get (free) IDE support?


Emacs with SLIME is probably your best bet, as with most other Common Lisp implementations.


If you don't want to learn Emacs, Sublime + the SublimeREPL package is quite nice. I've been using it to write SBCL but it can also use ABCL. There are also REPLs for lots of other languages.


There was a recent thread[1] that shows how to set up a Lisp development environment in Vim.[2] If you know Vim already, check it out.

[1] https://news.ycombinator.com/item?id=21735148

[2] https://susam.in/blog/lisp-in-vim-with-slimv-or-vlime/


I'm using Vim with the Slimv plugin (derived from SLIME for Emacs) and a bit of script glue that opens a new Terminal tab running SBCL with SWANK. Like an IDE, connected to a live Lisp.

https://github.com/kovisoft/slimv



