With respect, old is not synonymous with obsolete. I find this especially ironic since we are talking about a post written less than half a decade ago about a language invented more than half a century ago.
Of course, if you feel that the post is obsolete due to progress made in Lisp since the post was written, I'm extremely interested in hearing how things have changed.
True, although I wouldn't go so far as to say it is "obsolete." That being said, English is a rather ambiguous language. "Obsolete" might mean the post has nothing of value for the reader, or it might mean there have been some developments since its publication, so be sure to do further reading.
Given that Clojure is not the only Lisp, I feel the article is still relevant, although comments like yours and another poster's pointing out the value of Clojure add value for readers.
I think the common complaint is that it isn't really Lisp all the way down - it's Lisp syntax that drives the JVM. Therefore, it is a False Lisp; it dresses like Lisp, and convinces the unwashed that it's Lisp, but it's actually leading us astray.
See, this wouldn't bother me so much if it assumed the user knew the JVM instruction set and built up abstractions around it. But no, that's not what it does. It creates another "language", meaning "obscuring abstraction" that isn't friendly to inspection.
I should have expounded in my original post, thanks for asking. There are many points; the one I'm thinking about in particular has to do with what Lisp is. I concede that Clojure is in the family of Lisp languages because it has the s-exp syntax and the macros.
However, it's a step back in The Philosophy of Lisp, because it doesn't allow user-defined reader macros. First let me explain what I see as the philosophy of Lisp - it is the late-binding of all things. Late-binding, in this sense, means that the system by which you make the computer do the things you want it to (we usually call them "languages") makes as few decisions as possible, and lets the user override and extend them. For example, the Python mailing list every once in a while has an active debate about whether an anaphoric `aif' should be allowed in the language [0]. This would mean adding a special symbol `it', such that:
    aif expensive_function_call():
        foo(it)

is equivalent to:

    it = expensive_function_call()
    if it:
        foo(it)
GvR decided against it, after weighing the needs of the Python community. The late-binding that I see as inherent in The Lisp Philosophy says that decisions like these are personal decisions to be made as late as possible (certainly not when you're designing a system of computational expression), and certainly not for all users. Each user should be able to define aif to mean whatever they feel it should. The current popular counter-argument is that this would shatter languages into personal dialects that nobody but their authors and their best friends would understand. The Lisp family of languages disagrees, and hence, supports macros. Clojure, for this reason, supports macros [1].
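(For concreteness, here is a minimal sketch of what that user-level decision looks like in Common Lisp - an anaphoric aif in the spirit of Graham's On Lisp. The names expensive-function-call and foo are placeholders of mine, not anything from the discussion above.)

    ;; A user-defined anaphoric if: the result of TEST is bound to IT,
    ;; which is then visible in the THEN and ELSE branches.
    (defmacro aif (test then &optional else)
      `(let ((it ,test))
         (if it ,then ,else)))

    ;; Usage, mirroring the Python proposal above:
    (aif (expensive-function-call)
         (foo it))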
However, there are two kinds of macros Lisp supports for the same reason: "syntax macros", which let you late-define how s-expressions are interpreted, and "reader macros", which let you define how the parser turns your text into s-expressions (the Lisp parser is called the reader). For example, the reader of Clojure translates '(foo) into (quote (foo)), #{:x} into a set, [x y z] into a vector, etc.
However, the rules by which the reader translates those "special" deviations from normal s-expression syntax into s-expressions are closed off from modification. This goes against The Lisp Philosophy, and is not so in other Lisps [2]. The only case for this I've heard (in Stuart Halloway's "Programming Clojure") is that this would allow people to dilute the language into something which is no longer Clojure. This explicit early binding is a step back in what I see as the Philosophy of Lisp because it brings us back to the religious tribalism of "this is Clojure, and the language decisions that were made are good decisions, and if you don't like them go fuck yourself".
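(Again for concreteness, a rough sketch of the kind of thing an open reader allows in Common Lisp, via set-macro-character. The bracket syntax below is a toy example of mine, not something claimed above.)

    ;; Teach the reader that [a b c] reads as (list a b c).
    (set-macro-character #\[
      (lambda (stream char)
        (declare (ignore char))
        (cons 'list (read-delimited-list #\] stream t))))

    ;; Make ] behave like ) so it terminates the delimited list.
    (set-macro-character #\] (get-macro-character #\)))

    ;; From now on, [1 2 3] reads as (list 1 2 3).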
1. The analogy I like best here is to mathematics. Any mathematician can create his own definitions to explain his ideas (as they usually do), and yet, mathematics doesn't break up into dialects that nobody can understand. Instead, mathematical language evolves as people participate - if you have a pet definition you like (maybe one you thought up all on your own!) you don't need to convince Guido or some council of language makers to be able to use it. You use it, and if your paper is worth reading, maybe you can convince other people to accept it. That way, mathematics as a language evolves, not by design, but by social interaction. The formal languages of computation can (and should) do the same.
Not everything in Lisp, traditionally, is extremely late bound. For this, look to COLA by Viewpoints Research Institute.
The no-reader-macros convention basically boils down to making it easier to design a language.
If you want an acceptable Lisp, you effectively have to combine the best features of all Lisps. This is surprisingly difficult to do while ensuring portability and stability. So your comments about the "direction of Java" are puzzling, as it guarantees portability and stability to the extent the JVM does. Really the only downside is compatibility with type erasure, which is unfortunate but hey.
The usual default is that Lisp is late bound. You would tell Lisp when or where you don't want that. If you use CLOS, for example, any method can be added or removed at runtime.
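(A rough illustration in Common Lisp - the generic function render here is invented for the example:)

    (defgeneric render (thing))

    ;; Add behaviour at runtime; callers of RENDER need no recompilation.
    (defmethod render ((thing integer))
      (format nil "the integer ~a" thing))

    ;; ...and remove it again, also at runtime.
    (remove-method #'render
                   (find-method #'render '() (list (find-class 'integer))))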
I'm not sure how not adding a reader macro mechanism makes the design easier. Actually, the point of reader macros is to enable incremental addition/change of syntactic elements of s-expressions.
Stability of a problem space makes reasoning about that problem space easier. The statement is a tautology. 'How' is a rabbit hole.
I wish, though, that Clojure had a versioning system similar to Curl or even .NET's module system, or even OSGi/Jigsaw. Syntactically, I'm thinking of something like Curl.
Thanks! I was thinking of dropping that into the post as an example of the late-binding philosophy going into the future, but I've been linking it quite a bit lately, and decided to go without it for once.
In my view of it, Piumarta's COLA is very strongly "of The Lisp Philosophy". As a matter of fact, I would call it the future of Lisp (the philosophy) :)
When things are truly late-bound, it makes as little sense to split computational instructions into these "language" silos. Allowing users (and creating tools) to modify syntax is fundamental to breaking this tribal idiocy around "languages" as atomic sets of programming system design decisions.
@When things are truly late-bound, it makes as little sense to split computational instructions into these "language" silos.
I can't understand you. Can you rephrase?
@Allowing users (and creating tools) to modify syntax is fundamental to breaking this tribal idiocy around "languages" as atomic sets of programming system design decisions.
It's far from tribal idiocy, and has more to do with modularity and substitutability. Most programmers do not realize what heights of engineering .NET/Mono, the JVM, Smalltalk, and Symbolics Genera all represent. .NET's MSIL, in particular, is the most modular stack-based assembly language I know of... especially version 2. It's pretty impressive once you realize you can specify modules at the assembly level. Granted, even in the '90s people were designing typed assembly languages, like TAL/T, which supported run-time code generation and could be certified for safety at compile time, link time and run time.
Really, I think talking about "not an acceptable Lisp" in broad strokes undermines attention to good engineering and striving to be great engineers.
>> @When things are truly late-bound, it makes as little sense to split computational instructions into these "language" silos.
> I can't understand you. Can you rephrase?
Oops, think-o, that should have said "it makes as little sense to split computational abstractions into these language silos".
To clarify: there are "languages" like Haskell/Ruby/Python/Java/C that make an enormous number of decisions about the kinds of abstractions we use to describe what it is a computer is to do. A "language" is a set of decisions about syntax, type systems, functional or object-oriented abstractions, memory allocation and garbage collection, and call stack traversal. If one decides to compile to a VM instead of machine code, that introduces another set of variables - from JIT compilation techniques to the kind of VM specification you publish (do you publish a reference VM like Squeak, or a bunch of papers and tests like the JVM?). "The Java language" is a set of hundreds of those kinds of decisions. So is "the Python language". Comparing the two is really difficult, because for almost any problem, some subset of Java's decisions will be better, while another subset of Python's decisions will be better. But these sets of decisions aren't atomic (they can be split!).
Every time you late-bind a decision to the user, it removes a dimension from the decision space that the "language" lives in. It expands the power of the language to span that entire dimension. Hence, the most powerful language is the one that makes the fewest irreversible decisions.
talking [...] in broad strokes undermines attention to good engineering and striving to be great engineers
That sounds right, and I should avoid it if it's needless. However, it's also important to always try to unify concepts to simplify our models of things. The important thing is to make sure the simpler model is just as accurate.
The hallmark of COLA is recursive design, which sort of makes it more like Smalltalk than Lisp (but this is a fruitless point, and the key is to encourage you to dig deeper into what COLA is rather than what "it's like...").
The hallmark of COLA is recursive design, which sort of makes it more like Smalltalk than Lisp
Late binding is certainly a big part of the Smalltalk philosophy as well, and recursive design is a great way of implementing late binding. If a system uses the same "user interface" internally, it certainly makes it easier to let people modify the internals.
The big difference I see between Smalltalk and Lisp (correct me if I'm wrong) is that Lisp systems tend to use compile-time abstractions (Lisp -> machine code -> execution), while Smalltalkers from early on started thinking about interpretation and VMs - using abstractions which require run-time data to make decisions.
It depends on who you ask what the big difference between Smalltalk and Lisp is.
Model-driven architecture weenies like me will tell you the biggest difference is methodological, and that real-time object-oriented systems engineering has its roots in most of Kay's ideas.
I'm not sure what you mean by the following:
@Lisp systems tend to use compile-time abstractions Lisp->machine code->execution
I don't think so. The whole reason I enjoy Lisp is how it has inspired me to do streaming models of compilation. When you're updating things dynamically, the most important thing is to have a logical object model - starting with getting your "(UML) package diagram" correct. Otherwise you end up with a large system that has to be locked up for minutes while you do an upgrade, because different parts of the system depend too much on physical model details. That's why you separate message from method. You can do this in any language, some more easily than others.
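(As a hedged sketch of "separate message from method" in Common Lisp terms: callers depend only on the generic function, so its methods can be swapped while the system runs. handle-request is a made-up name for illustration.)

    ;; Callers only ever see the "message" (the generic function)...
    (defgeneric handle-request (request))

    (defmethod handle-request ((request string))
      (format nil "v1: ~a" request))

    ;; ...so an upgrade is just installing a replacement method at runtime;
    ;; a defmethod with the same specializers supersedes the old one.
    (defmethod handle-request ((request string))
      (format nil "v2: ~a" request))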
I agree. It was only written 3 years ago. Almost nothing has changed in the Lisp world in the last 3 years. Maybe Clojure -- but there is some controversy.