> McCarthy built Lisp out of parts so fundamental that it is hard to say whether he invented it or discovered it.
I loved that sentence! I'm guessing epistemology or a similar field has pondered the "invented or discovered" question already, and if so, I want to read about it.
> [on SICP:] Those concepts were general enough that any language could have been used
What?? In chapter 4, you write your own Lisp interpreter. If they had chosen C++, would you be writing a C++ compiler? Or a Lisp interpreter in C++? Either way, it would be ugly. And most languages would encounter problems even before they got to chapter 4. What made SICP great was building abstractions out of primitives. Most languages give you some abstractions, and others simply can't be built (at least not elegantly). I can't imagine SICP using a language that doesn't feel a lot like Lisp.
That said, count me among the people whose first Lisp exposure was SICP. And yeah, it was really fun and really enlightening. I am loving Racket now, but Racket is big and practical. SICP is small and beautiful - as I recall, the authors deliberately avoid using most of the language. (They used Scheme, but I think Lisp would have worked fine too, right?)
> In chapter 4, you write your own Lisp interpreter. If they had chosen C++, would you be writing a C++ compiler? Or a Lisp interpreter in C++? Either way, it would be ugly.
It's an interpreter for a small subset of Scheme, but Scheme is of course itself small and simple†, which means that it's not hard to mentally extrapolate from the small subset to the rest of the language. C++ is a large language, and so it would be less compelling to implement a small subset of it. And you could of course use C++ to implement Lisp, as you say; chapter 4 actually covers several programming paradigms quite alien to Scheme itself, so implementing a functional programming language in C++ would be quite reasonable. As I read it, the main point of chapter 4 is that it's not true that "Most languages give you some abstractions, and others simply can't be built." Metacircularity makes chapter 4 very effective at convincing you that what you've written is actually a real programming language, but it has confused you into missing (what I read as) the main point of the chapter. Perhaps it was counterproductive.
If not, though, it would be quite reasonable to achieve the same pedagogical metacircularity in Lua, Prolog, Forth, ML, Smalltalk, GHC Core, my own Bicicleta, or any number of other small, simple languages. I don't know about you, but those languages feel very different (from Lisp and from one another) to me.
† "Simple" here refers, for better or for worse, to the observed behavior of the language, not the difficulty of its implementation. Much of this is easy to gloss over in a metacircular interpreter, where you can inherit garbage collection, procedure call semantics, dynamic typing, threading, call/cc, and even parsing, lazy evaluation, and deterministic evaluation order, from the host language. (The book points out many of these points.) If you were going to design a language at Scheme's programming level with an eye to making the implementation as simple as possible, Scheme would not be that language.
Even though it doesn't look like it, I could've sworn Prolog was homoiconic.
Smalltalk can also do a lot of the cool lisp wizardry. I know Alan Kay (a user on here who was on the team that designed Smalltalk at Xerox PARC) likes both langs. I've never heard him describe what he thinks are the pros and cons of both though.
You can write a lisp in a few lines of Forth. I've never done it, but people talk about it a lot, so I'm going to assume fact.
In addition to not technically needing a GC, a simple GC can be small if you don't care (much) about performance. One variation on a basic tricolor mark and sweep, using linked lists for each "color", is basically (Ruby-ish pseudocode):
# At this point all objects are in the "white" set: not yet known to be reachable. Anything still white at the end is garbage.
# Mark roots
push_registers_to_stack
stack.each {|ob| white.delete(ob); grey.append(ob) } # You need to do this to any other roots as well, e.g. any pointers from bss.
# Iteratively scan every reachable object, moving unscanned objects into the "grey" set (reachable but unscanned)
until grey.empty?
  ob = grey.shift
  # Anything "ob" references is reachable; move it from white to grey unless it has been seen already
  ob.referenced_objects.each {|ref| grey.append(ref) if white.delete(ref) }
  # "ob" has now been scanned, so move it into the black set (reachable *and* scanned)
  black.append(ob)
end
# What remains in the "white" set is now the garbage, so free it. You can also, if you want, move this aside and free it lazily.
white.each {|ob| ob.free }
# The surviving (black) objects become the white set for the next collection cycle.
white = black
A "real" implementation of the above can be done in <100 lines in many languages. Doing efficient gc gets more complex very quickly, though.
The mruby gc is a good place to see a more realistic tricolor mark and sweep that is also incremental and also includes a generational collection mode. It's about 1830 lines of verbose and well commented C including about 300 lines of test cases and a Ruby extension to interact with it. A simpler one like the above could certainly be much smaller even in C.
It's remote in that it looks very different, but not in terms of complexity. The operations used in the pseudocode are basically just:
- Pushing registers onto the stack
- Iterating over the stack
- Appending to a linked list
- Popping the first entry off a linked list
- Iterating over a linked list
- Calling a function/method ("free") to release the memory. In its most naive form this would mean appending it to a linked list of free memory.
The first two require dipping into assembly in a lot of languages. The rest will be tiny in most languages, including Forth, but my Forth knowledge is very rudimentary, which is why I didn't try.
I always have problems with the definition of homoiconicity...
Prolog code is represented the same way as data, but (AFAIK) there is no functionality for manipulating code at run-time or interpretation-time. I believe allowing macros is a necessary condition for homoiconicity; if I'm right, then Prolog isn't homoiconic.
You can do all that with term rewriting in Prolog: you can use a grammar to parse a file of arbitrary text and transform it into Prolog terms to be executed, etc.
>>> It's an interpreter for a small subset of Scheme, but Scheme is of course itself small and simple†, which means that it's not hard to mentally extrapolate from the small subset to the rest of the language. C++ is a large language, and so it would be less compelling to implement a small subset of it
Exactly. This is the point of Lisp. I think many languages are over complicated for no good reason.
① Yes, in a sense, this is the point of "Recursive Functions of Symbolic Expressions and Their Computation by Machine, Part I." But Lisp is a much larger and more diverse phenomenon than McCarthy's 1960 paper. For example, Common Lisp is not small and simple, although it is not as complicated as C++, and it's still part of "Lisp".
② Regardless of whether it is or is not the point of _Lisp_, it is not really the point of _SICP_, which is about different approaches to structuring programs. One of them is the Lisp approach.
McCarthy himself said, "Pure Lisp was a discovery, everything that has been done with it since has been an invention." He said this to a class at Stanford in 2008, as reported on HN! [1]
Chuck Moore always says he did not invent FORTH, but discovered it.
The relationship between Lisp and Forth is fascinating. I am quite sure there is something about these two old simple languages and the concept of duality in mathematics that we are missing.
When I read SICP, I worked through the exercises in Forth. I had to implement my own garbage collector, create dialect extensions for nice list notation, build a thunk-based mechanism for closure... with the right groundwork laid, Forth can keep up pretty well. I was always somewhat dissatisfied that cute metacircular lisp interpreters leave as tautologies so many important details of their own mechanics.
It functions as a good way to whet the appetite of the reader or student. SICP's meta-circular implementation occupies a kind of sweet spot: it lets students who have only just grokked recursion see that the programming language itself is just a program they can understand and tinker with. The state machine implementation of the Scheme interpreter sort of breaks the meta-circularity to dismantle the tautologies.
You're not the only one to observe this. It was observed and explored by Manfred von Thun with Joy, a purely functional programming language based on Forth. Modern "concatenative" languages continue this tradition.
Sure! I developed an interpreter for Joy in Continuation-Passing Style in Python: http://joypy.osdn.io/
I had just implemented type inference[1] when I got smashed over the brain by Prolog. So now the Python code is in limbo because the Prolog implementation of Joy is blindingly elegant[2].
In Prolog, the Joy interpreter is also the Joy type inferencer.[2]
The compiler (from Joy expressions to new Prolog rules) is five lines long.[2]
Yes, there's a real connection between concatenative languages and logic languages. In some sense you can think of a concatenative logic language as Joy with the ability to push "unknown" values onto the stack, as well as a few constraint manipulatives. That's somewhat orthogonal, but it makes sense an interpreter for Joy would be quite compact in Prolog.
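For anyone who hasn't played with a concatenative language, the core really is tiny. A toy sketch in Python (nothing to do with the actual joypy code linked above): a program is a sequence of words, the machine state is a stack, and quoted programs are just lists sitting on that stack.
def joy(program, stack=()):
    for word in program:
        if isinstance(word, (int, list)):    # literals and quotations push themselves
            stack = (word, *stack)
        elif word == 'dup':
            stack = (stack[0], *stack)
        elif word == '+':
            a, b, *rest = stack
            stack = (a + b, *rest)
        elif word == '*':
            a, b, *rest = stack
            stack = (a * b, *rest)
        elif word == 'i':                    # run the quotation on top of the stack
            quoted, *rest = stack
            stack = joy(quoted, tuple(rest))
        else:
            raise ValueError('unknown word: ' + str(word))
    return stack
# "3 dup * [1 +] i"  ->  square 3, then run the quotation that adds 1
print(joy([3, 'dup', '*', [1, '+'], 'i']))   # (10,)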
Thanks for the links, I'll review your work later!
Yep. And back in the 1970's Chuck Moore wrote a book describing the implementation and design of FORTH, and it is pretty close to LISP. And I asked Chuck Moore directly, in person, if there was a connection to LISP, and he said "yes".
They are both very different, technically, but culturally they are very similar. Both of them are extremely simple and build complex programs from a few fundamental principles. Both of them are disengaged from the more mainstream languages. Both of them broaden your perspectives as a programmer. Both of them have excellent books that were written a long time ago but are still relevant today. Both of them gave rise to specialized hardware (stack computers and Lisp machines). Both of them encourage you to write your own interpreter when you are learning them... Although both languages are very different, learning one or the other actually is a quite similar experience.
Forth is actually disengaged from mainstream languages; Lisp being disengaged from mainstream languages is just a meme believed by some non-Lisp programmers (something to do with those parentheses). Basic Lisp is very similar to mainstream languages in the Algol family. Lisp dialects usually have nested block structure, with local variables that can be assigned, functions with formal parameters with ordinary by-value argument passing and so on. Javascript is a mainstream language and has strong ties to Lisp, much more so than to Forth. Python is more closely related to Lisp than to Forth. C is more closely related to Lisp than to Forth. Java is more closely related to Lisp than to Forth ....
Really, no. They are fundamentally identical executable lambda calculus, despite Lisp's attempts to look abstract by shrouding itself in parentheses and Forth's attempts to look like "portable assembler" by exposing most implementation details.
Forth isn't written as a tree; the tree structure in the evaluation/compilation comes from the semantics of the words (how many elements of the value stack they consume).
In Lisp we know that (eql 1 2 3) is a "too many arguments" error. It doesn't just take 1 2 from the stack, push back the answer, and leave 3 beneath it.
Lisps are actually fairly conventional block-structured languages, having more in common with Pascal than with Forth. Lexical scopes, local variables, function calls with arguments going to formal parameters and all that.
> In Lisp we know that (eql 1 2 3) is a "too many arguments" error. It doesn't just take 1 2 from the stack, push back the answer, and leave 3 beneath it.
That error also comes from the semantics. You can't tell that (eql 1 2 3) is a too-many-arguments error just by looking at the tokens. That could be a 3-ary function, or an "eql is not a function" error.
The parentheses in Lisp code certainly suggest a particular tree structure, but they don't require it by virtue of their graphical form. And once you're willing to look up the arity of functions, Forth is written directly in its own tree structure to exactly the same degree that Lisp is, just with structural markers elided.
Think about it another way: a classmate in compilers class asked me once how Lisp handled operator precedence. Obviously, with the tree structure of the code fully specified, Lisp doesn't handle operator precedence because the question can't arise.
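To make the arity point concrete, here is a toy sketch in Python that rebuilds the tree from a flat postfix stream of Forth-like words. The arity table is made up for the example, and real Forth complicates the picture in the ways discussed below.
ARITY = {'+': 2, '*': 2, 'negate': 1}
def to_tree(words):
    stack = []
    for w in words:
        if w in ARITY:
            n = ARITY[w]
            args = stack[-n:]        # the last n results become this word's children
            del stack[-n:]
            stack.append((w, *args))
        else:
            stack.append(w)          # a literal is its own subtree
    return stack
# "1 2 + 3 * negate"  ~  (negate (* (+ 1 2) 3))
print(to_tree(['1', '2', '+', '3', '*', 'negate']))
# [('negate', ('*', ('+', '1', '2'), '3'))]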
We can tell that 3 arguments are being applied to eql from that expression; when this is compiled, the generated call will pass 3 arguments, regardless of whether eql likes that, or whether such a function even exists. In the absence of any static checks, it will have to be caught at run time; but it will be caught.
Also, Lisp supports variadic functions with the same safety. Delimitation of arguments is necessary for variadic functions, like list.
The structure of every top-level form is an actual tree object in memory that is constructed before anything else happens. It can be walked by macros, or quoted so that the program accesses it as data directly.
Ah, but the argument was made that Forth code has an implicit tree structure based on looking up the arity of functions, in place of explicit delimitation with parentheses. How can that be if some function takes N arguments, where N is, say, a run-time value, itself on the stack?
Starting Forth by Leo Brodie is a classic book. Forth is a very elegant language. I came across a javascript implementation of it recently, which you can look up yourself :)
As a kid I went from BASIC to assembly. When I was shown Forth by my dad's friend I was totally caught up in Forth on my C64. I later tried Pascal, which is also great, but it wasn't close to Forth in my mind. It was awesome, except you had to pay for the language, and I am guessing that is why it died.
Currently my preferred language is Racket because it is so fun and practical; it reminds me of the earlier days of Python but much more thought out. You write your code (which is fun) and you have an executable all within one setting.
Armed with that and a copy of Threaded Interpretive Languages by Loeliger (How I found out about the book is beyond me) from the library I managed to put together a complete(ish) Forth for the Apple ][ running ProDOS.
I think the magic bit with Forth is that it's easy to see how to write one by the time you've finished Starting Forth.
What does Factor have to do with my question? Yes it's a concatenative language that is still maintained although Slava left. I'm guessing it wouldn't make a great example if you're suggesting going through source.
This resonates. I once sought to create a DSL on top of F# for workflows only to end up inventing lisp again. It was a humbling experience.
The thing about lisp and why I don't use it in the workplace regularly is that it's dense and inconsistent. Dense because it's easy to write a profound amount of logic in one line, which hurts readability if you're not careful. Inconsistent because you and I could be master programmers and write perfectly effective solutions that do the same thing and the code can look entirely different. This hurts interchangeability and it doesn't scale well, which disqualifies it from most enterprise projects. Similar to perl.
That being said all my personal projects are done using clojure because it's such an easy/fun platform to work with
We had a follow-up course to SICP that started with the meta-circular interpreter, broke that one down even further and eventually moved completely to a C-implementation. Most insightful course I ever had. We got assignments such as: "add co-routines to the continuation-passing style meta-circular interpreter, and to the stack-based C interpreter."
> What?? In chapter 4, you write your own Lisp interpreter. If they had chosen C++, would you be writing a C++ compiler? Or a Lisp interpreter in C++?
That's in chapter 5 :)
Exercise 5.51. Develop a rudimentary implementation of Scheme in C (or some other low-level language of your choice) by translating the explicit-control evaluator of section 5.4 into C. In order to run this code you will need to also provide appropriate storage-allocation routines and other run-time support.
Ha, well I guess my secret's out - I skipped some of the exercises! And I'm not surprised to see I skipped that one. Even now, 5.51 and 5.52 seem like they would be more work than fun. But given that SICP is my favorite CS book, maybe I owe it to myself to go for 100% completion.
> I can't imagine SICP using a language that doesn't feel a lot like Lisp.
SICP's main value proposition is not a language but developing thinking in terms of creating abstractions through composition. A Lisp-like language certainly helps, but most modern languages do have this ability, although more verbose and possibly subjectively ugly in many cases.
"epistemology or a similar field has pondered the "invented or discovered" question already, and if so, I want to read about it."
Yep. Me too. In fact, request for essay! It sounds like a topic I'd love to read pg meander into, for example.
I think anytime invention starts to seem like a discovery, it's probably a good invention.
I'm not sure there is a fundamental difference between the two. If you strain hard enough and get precise enough with your definitions, you could probably describe any invention as a discovery and the reverse. Recursively, that probably means any such definitions are inventions rather than discoveries.
> I'm guessing epistemology or a similar field has pondered the "invented or discovered" question already, and if so, I want to read about it.
Any course or book on the Philosophy of Science would cover that. Are numbers discovered or invented? Did the ancient Hindu mathematician invent the number zero or did he discover it? You could argue that the symbolic representation of zero was invented (or was it?) but the idea of the number zero was discovered. The more you delve into it, the more you realize things aren't as straightforward as you might have imagined.
Or quicksort. Is it an invention or a discovery? You could go back and forth on it forever.
Lisp is a powerful language that allows one to mix multiple logical levels within a program (especially through its homoiconicity).
Shrdlu is perhaps the most classic illustration of Lisp's power[1]: a classic program from 1968-70 that allowed natural language communication about a micro world. It was written in a version of Lisp even more free-form than today's. When I looked at the source code a while back, its parsing of natural language involved a wide variety of on-the-fly fix-ups and such to take into account the multiple irregularities of human language.
The thing about Lisp's power is it allows the production of complex programs quickly but doesn't particularly have a standard way of gluing these programs together. The other classic Lisp work, The Bipolar Lisp Programmer[2], describes how Lisp has multiple partial implementations of important libraries and programs simply because it allows a certain type of personality to produce, by themselves, something that's remarkably good, but it doesn't encourage any particular group effort.
Lisp is certainly evidence that "language matters" but not evidence that Lisp is where one should stop.
> The thing about Lisp's power is it allows the production of complex programs quickly but doesn't particularly have a standard way of gluing these programs together.
I'd argue Clojure is setting some new standard here with its powerful literal notation for basic data structures (lists, vectors, sets, and maps) and its practice of integrating/composing first on data. Most of the time that's enough to get some libraries to work together. If that's not enough, integrating/composing on functions is the next stage. Only after having covered data and functions should you consider macros, which should be easier to write if you have done a good job at the data and function levels.
If such powerful and ubiquitous data structures work so well for programs written in clojure(script), why would that not be true in the larger picture for integrating systems? JSON worked much better than XML because it actually represents simple data structures, not objects talking to each other. Redis, Avro, and similar tech are continuing that story while Kafka adds a fantastic transport and storage mechanism. Sounds like we're closer to having better foundations for integration in the small and in the large.
I have used Clojure on several customer projects and my own Cookingspace.com hobby project. I like Clojure except for the way recursion is handled. I have been using various Lisps since 1982 and Clojure’s support for recursion bugs me.
Yeah, because the JVM can't do TCO you can't have nice recursion. The options the language provides are good enough but definitely not the most ergonomic.
I can't remember the last time I wrote a recursive call in Clojure, other than for multi-arity. In general, if I were in a code review and saw the "recur" keyword, I think I would regard it as a code smell and see if I can re-implement it using higher order functions.
> Lisp is a powerful language that allows one to mix multiple logical levels within a program (especially through its homoiconicity).
Statements like these are what make me suspect Lisp code would be a maintenance nightmare. I am regularly refactoring code to separate multiple logical levels so we can more easily maintain our codebases. If this is a misunderstanding, can someone clarify?
>>Statements like these are what make me suspect Lisp code would be a maintenance nightmare.
After having maintained Java code bases that do 300 classes just to post a JSON to a REST end point, I'm fairly confident Lisp code can't be that hard to maintain.
There are perhaps languages out there which are better than Lisp for maintenance in some respect, but it's none of the mainstream ones. Lisp code is not particularly harder or easier to maintain; most of that depends on the original author(s) as with any codebase.
It's the reason Aaron Swartz gave for rewriting reddit in python:
>The others knew Lisp (they wrote their whole site in it) and they knew Python (they rewrote their whole site in it) and yet they decided they liked Python better for this project. The Python version had less code that ran faster and was far easier to read and maintain.
I suspect the problem is that LISP is too powerful. Language power is inversely related to maintainability/readability, as per the greatly underrated rule of least power: https://en.wikipedia.org/wiki/Rule_of_least_power
> Language power is inversely related to maintainability/readability,
Is that really True?
Having worked on VB6 codebases in the past, the less powerful nature of the language just meant a LOT more code which is definitely harder to maintain.
No, it’s not, and even if it were true it wouldn’t disqualify lisp, since you can quite happily wield lisp in a context which lets you express logic in simpler/constrained forms. Consider hiccup, datalog DSLs, etc.
It's inversely proportional to length too. Maintainability is about maintaining a delicate balancing act between several competing fundamental concerns - two of which are expressiveness and verbosity.
I think python's popularity is partly because it found a sweet spot that was neither too expressive nor too verbose. It's certainly possible to dial up the expressiveness (LISP, Perl) or dial it down (Golang, VB) and get something that is harder to maintain both ways.
Python derives a fair amount of expressiveness from the power it gives to library authors. A lot of this in turn comes from its dynamic implementation.
As a library author you can write effective python at different levels. Some involve quite heavy meta-programming (jinja2, collections.namedtuple, django's ORM) and most users would never find this out.
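As a tiny illustration, here is roughly the kind of thing collections.namedtuple does under the hood. This is a simplified sketch, not the real implementation: a factory that builds a new class at runtime, which callers then use as if it had been written by hand.
def record(typename, fieldnames):
    fields = tuple(fieldnames.split())
    def __init__(self, *args):
        if len(args) != len(fields):
            raise TypeError('%s expects %d arguments' % (typename, len(fields)))
        for name, value in zip(fields, args):
            setattr(self, name, value)
    def __repr__(self):
        pairs = ', '.join('%s=%r' % (n, getattr(self, n)) for n in fields)
        return '%s(%s)' % (typename, pairs)
    # Build and return a brand-new class object at runtime.
    return type(typename, (), {'__init__': __init__, '__repr__': __repr__,
                               '_fields': fields})
Point = record('Point', 'x y')
print(Point(3, 4))    # Point(x=3, y=4)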
The rule of least power linked above says to use the least powerful language appropriate for the task. So it's possible VB6 fell into the category not-powerful-enough/not-appropriate.
This. I find Lisp optimal for the small band of coder heroes scenario. They must align on cs philosophy, and taste. Grow the team, or have divergent approaches: the efficacy of Lisp quickly disappears and problems appear.
I have experience of great success and productivity in small teams with Common Lisp and Clojure. I don't have direct experience of dysfunction in a larger Lisp team. That was a guess based on how being a senior dev in a 50 man C# shop felt :)
Well, at some point we have to face reality. There have been hundreds if not thousands of successful projects with teams of hundreds if not thousands of developers working on them, at the same time or over time. These projects have been generally in Cobol (ok, we can chalk that up to brute-forcing with a tech from when they didn't know any better), C, C++, Java, C#.
There seems to be a connection between scaling a project regarding developer numbers and programming language "power". Having a smaller, shared vocabulary seems to greatly outweigh programming language "power" as the number of programmers goes up greatly.
Lisp had 60 years to prove its human scaling capabilities. So far it hasn't convinced.
> There seems to be a connection between scaling a project regarding developer numbers and programming language "power". Having a smaller, shared vocabulary seems to greatly outweigh programming language "power" as the number of programmers goes up greatly.
That goes against evidence from current enterprise software experience. The Java eco-system has the absolute largest programming vocabulary ever (J2EE, JEE, ...) and is widely used.
AT&T/Lucent once wrote the software for a telephony switch in Lisp - I think the project ran over a decade and created more than one generation of working/shipping hardware and software. The team was easily 100+ people. I heard a talk by the responsible manager years ago. They wrote basically the same functionality as a ten times larger C++ project and the Lisp team lead was extremely satisfied with what they wrote.
AT&T Management favored the C++ project for mostly non-technical reasons - C++ being an 'industry language' with a larger supply of developers. Not surprising for AT&T - Ericsson made a similar decision at some point in time with their (later lifted) 'Erlang ban'.
> Lisp had 60 years to prove its human scaling capabilities. So far it hasn't convinced.
It can take more than you might expect to "convince".
> In 1601, an English sea captain did a controlled experiment to test whether lemon juice could prevent scurvy. He had four ships, three control and one experimental. The experimental group got three teaspoons of lemon juice a day while the control group received none. No one in the experimental group developed scurvy while 110 out of 278 in the control group died of scurvy. Nevertheless, citrus juice was not fully adopted to prevent scurvy until 1865.
I have heard that the British knew what caused scurvy, and how to treat it with citrus, but they kept it a closely guarded secret to keep an advantage vis-a-vis other countries' navies.
That is more or less correct, but it's a very weird situation. Scurvy had been known -- along with effective cures -- for thousands of years. The method of curing scurvy hasn't been secret since -- at the latest -- the Ebers papyrus of ~1500 BC, which correctly prescribed feeding the patient an onion. And in fact cures were widely known throughout the world since that time, including in Europe. Note again that James Lancaster had heard that lemons were effective against scurvy, 200 years before the Navy got around to requiring them.
> from about 1500 to 1800 two million sailors are estimated to have died from scurvy on expeditions to Asia, Africa and the New World.
> Why no cure? In Tudor England an effective treatment, scurvy grass (a corruption of cress), was commonly recommended; at the same time the Portuguese knew a cure, and so did the Spanish and the Dutch. For unknown reasons such wisdom was applied inconsistently (even by Britain’s Royal Navy after the Napoleonic Wars) until the identification of vitamin C in the 20th century.
It looks more like a case of the people making the decisions being unfortunately disconnected from the people who knew what scurvy was and how to deal with it. (And wilfully ignoring those who tried to point it out to them.)
I know about the whole lime - lemon thing. Two things:
1. That was before the scientific age. They didn't know about vitamin C at the time; they couldn't even see it, even if they'd somehow believed in it.
2. There is no scientific proof, 0, none, that Lisp is superior. And it's almost impossible to prove it, since it requires controlled studies at a large scale - good luck with taking away hundreds of productive programmers for that study :) All we have is hearsay and personal opinions.
Be careful; my quote has nothing to do with the confusion between lemons and limes that occurred hundreds of years later. The Navy instituted a lime juice ration in 1799. They switched out lemons ("limes") for what we would call limes in 1865, setting themselves up for the reintroduction of scurvy. But James Lancaster performed his experiment (and reported his result of 100% scurvy prevention to Naval authorities) in 1601.
I love how I'm being downvoted for saying that there's no scientific proof that Lisp is superior. Come on, show it, I want to see it. I want to see thorough, statistically representative studies that show Lisp's superiority :)
That's nowhere near something like medical clinical trials, though.
Think of studies where you:
a) have statistically representative sample size (100+ developers)
b) actual numbers and compare them (the average development time needed for the Java/C++/etc. applications was Y and the average development time needed for Lisp applications was Y - Z, where Z > 0, etc.; the average execution time for Java/C++/etc. applications, etc., you get the idea)
Computer Science studies are still in their infancy.
We have to decide what's enough to convince (and convince whom) I guess.
Just to throw out a couple of examples:
One of the larger businesses I'm aware of that's using a Lisp is Nubank in São Paulo. Valued at $1bn+ and 4 million paying customers[1]. Finance is a fairly complex domain.
King in Stockholm has also fairly recently rewritten their main game creation tooling in Clojure[2].
It strikes me as a matter of structure. Lisp provides almost none, whereas python provides more. Anything not provided by the language and environment must be provided by the team. If they can, that's great, but it's going to be easier to get a small team to agree on a structure than a large one.
Wouldn't say so, there are plenty of conventions and common practices shared by overwhelming majority of Lisp developers. Project structure, system definition, development environment, naming conventions, idiomatic use of data structures and object system, all that is established across the board.
Well I don't find mature Python codebases (e.g. Homeassistant) particularly easy to inspect or change. Guess that's where me and 2005 Aaron Swartz would disagree.
When I talked with Alexis Ohanian he told me that the Lisp version was difficult to keep running. I think the term he used was that it often fell down.
I agree, strong barriers between layers are important. Right now the best way to build them is by using different languages or runtimes - it’s hard to mix layers if they require explicit network or shell calls.
If everything was in Lisp, the layers would naturally be blurred.
As if they've tried them and then abandoned them? Java, for one, didn't even have generics and closures when it came out.
It didn't have macros because it was intended for the ho-hum enterprise programmer of the time, who, the thought was, would not know what to do with them.
Instead, they re-invented all those things badly (e.g. through gobs of XML and ugly reflection based metaprogramming).
Nobody uses them even now, even when they are available. In fact anything that doesn't look enterprisey enough never goes past code reviews.
The adoption cycle for even the simplest of syntactical features when it comes to Java enterprise is 10+ years. In some companies it's never.
The saddest part isn't even that. The sad part is a whole generation of programmers have been raised and turned into architect astronauts without ever using something like a lambda or a closure.
The only hope for programming as a craft now is hoping Oracle kills Java (even if by mistake), and then something like Perl 6 comes along to replace it.
I'm not sure that will help. I see a kind of "family resemblance" between COBOL and enterprise Java. I wonder if any language that is going to play in this space is destined to become a monstrosity - destined by the nature of the problem space rather than the nature of the language.
I am aware that when I say this, I am basing my opinion on a sample size of two...
> Nonsense, the enterprise programmer of the time was programming in C or C++ and was making heavy use of macros.
C macros are entirely different in use from Lisp macros. C macros are written in a text substitution language that knows virtually nothing about C, whereas Lisp macros are written in Lisp and operate on regular Lisp data structures. Even trivial things are very difficult to get right with C macros, whereas very complicated things are often elegant and simple with Lisp macros. It's an entirely different experience. They're not at all comparable.
C programmers learn to fear macros because C macros have a lot of problems. Few of those carry over to Lisp macros.
Then again, enterprise software and languages are also full of DSLs, plugins, frameworks, libraries and other pieces and parts that enable the same kind of power that lisp has.
People haven't really given up on the idea of building out functionality for specific portions of their codebase (and we really shouldn't) we've just seemingly exchanged one very powerful solution for a sea of different tools.
Makes me think of Cyc which is still around ( https://www.cyc.com ). Wonder if any HNers have experiences with it? I tried opencyc and was fascinated. It's a Prolog-y Lisp married to a huge database of rules and knowledge. The open source version doesn't ship with the knowledge part so it's hard to evaluate if it's of any practical use. Was meaning to apply for a research license but didn't find time yet.
Lispers praise Lisp's homoiconicity but today's networked world calls for a strict separation of data and executable code as in NX bits (and yes I'm aware of the classic "Can programming be liberated from the von Neumann style?", and don't agree with it).
I don't know. I've always seen Lambda calculus as a model for computation to reason about program execution, but not as an actual implementation technique.
You are mixing up concepts. The Lisp concept of "the program is data" has little to do with modifying the program at runtime. It might be used for that in uncommon cases, but it is not recommended. A Lisp program usually is compiled into a static executable with a strict separation of data and executable code.
The mixing between program and data happens at build time in the compiler through the macro facilities. The first important thing is that a Lisp compiler is not something which runs as an abstract process; it runs in a Lisp system and can execute arbitrary Lisp code. So Lisp macros can execute any Lisp functions - written by the user - to transform the input code, represented as data which can be easily processed, into the output code, which gets compiled.
The whole point is that with macro expansion you are not tied to some pattern language, but can run any user-written code to process the code for output. This gives Lisp great power and extensibility. But this happens at compile time, not at run time.
A good example of the power of this is that the Common Lisp Object System (CLOS) can be entirely implemented in Common Lisp. You take any Common Lisp implementation which doesn't have an object system, load the CLOS code, and you have the object system available.
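A rough Python analogue, just to make the "transform code as data before it is compiled" idea concrete: the ast module lets ordinary user code walk and rewrite a program's tree before compiling it. It is far clumsier than Lisp macros, and it happens on explicit request rather than as a normal part of compilation, but the shape is the same: parse to a tree, rewrite the tree with ordinary code, compile the result.
import ast
class SwapAddToMul(ast.NodeTransformer):
    # Rewrite every 'a + b' in the tree into 'a * b'.
    def visit_BinOp(self, node):
        self.generic_visit(node)
        if isinstance(node.op, ast.Add):
            node.op = ast.Mult()
        return node
source = "result = 2 + 3 + 4"
tree = ast.parse(source)              # the code, as a data structure
tree = SwapAddToMul().visit(tree)     # user-written code transforms that data
ast.fix_missing_locations(tree)
namespace = {}
exec(compile(tree, '<transformed>', 'exec'), namespace)
print(namespace['result'])            # 24, not 9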
Well, it's not like that's a thoroughly unknown problem. eBPF (https://lwn.net/Articles/740157/), for example, which effectively relies on running user-supplied bytecode in kernelspace, attempts to solve this (and does it reasonably well) by imposing a couple of constraints on the code you supply. These constraints, in turn, allow the program to be statically-analyzed before running it.
In my experience, 90% of the cases where homoiconicity is useful are cases where the code you execute is trivial enough that you can statically analyze it. E.g. you use the data to generate trivial processing code that would be tedious to write by hand.
I don't think that's the case. Template Haskell lets you generate Haskell code at compile time, including by doing arbitrary IO. This is technically unsafe but is usually okay because you're only reading local files and so forth. It's occasionally really useful for generating things like database mappings, lenses, etc.
>> Two decades after its creation, Lisp had become, according to the famous Hacker's Dictionary, the "mother tongue" of artificial intelligence research.
More precisely, Lisp became the "mother tongue" of AI research in the United States. Europe and Japan, which at the time also had a significant output into AI research, instead used Prolog as a lingua franca.
This is interesting to note, because a common use of Lisp in AI was (is?) to write an interpreter for a logic programming language and then use that interpreter to perform inference tasks as required (this approach is evident, for example, in Structure and Interpretation of Computer Programs, which devotes chapter 4.3 to the development of a logic programming language, the query language, which is basically Prolog with Lisp parentheses).
Hence the common response, by Prolog programmers, to Greenspun's tenth rule, that:
Any sufficiently complicated Lisp program contains an ad-hoc, informally-specified, bug-ridden, slow implementation of Edinburgh Prolog.
> This is interesting to note, because a common use of Lisp in AI was (is?) to write an interpreter for a logic programming language and then use that interpreter to perform inference tasks as required
Not sure how you come to that conclusion. Literally none of the notable Lisp AI programs (ELIZA, STUDENT, SHRDLU, AM, EURISKO, MYCIN) had anything to do with reimplementing Prolog.
Implementing Prolog however is trivial in Lisp, so many textbooks used it as an intermediate level exercise.
It's the response to Greenspun's tenth rule that mentions Prolog explicitly.
My previous comment states that AI programs written in Lisp implemented an interpreter for a logic programming language, unspecified. Although to be fair, that usually means "an informally-specified, ad-hoc, bug-ridden, slow implementation of Prolog" (1).
Now, my knowledge of Eliza, Shrdlu, etc is a little limited (I've never read the source, say; btw, have you?) but let's see. Wikipedia is my friend, below.
According to wikipedia MYCIN was a backwards chaining expert system. That means a logic programming loop, very likely a resolution-based theorem prover, or, really, (1).
ELIZA was first implemented in MAD-SLIP, which, despite the name, was not a Lisp variant (although it was a list-processing language). STUDENT is actually part of ELIZA- it's one of the two scripts that came with the original implementation, by Weizenbaum. The other script is DOCTOR, which is the one more often associated with ELIZA (it's the psychoanalyst). If I understand correctly, STUDENT solves logic problems, so I'm guessing it incorporates an automated theorem prover; so basically, (1).
Eurisko was written in a frame-based (think OOP) representation language called RLL-1, the interpreter for which was written in Lisp.
SHRDLU was written in Lisp and MicroPlanner, an early logic programming language (with forward chaining, if memory serves). In the event, (1) was not necessary, as MicroPlanner was already all that- and more!
The Automated Mathematician (AM) was indeed written in Lisp, but I don't know anything about its implementation. However, from the description on its wikipedia page it appears to have included a very basic inference procedure and a rule-base of heuristics. Sounds like a case of (1) to me.
MYCIN is a fuzzy logic inference engine, among the first ones. Would love to see that implemented in period-true Prolog, but tbh won't hold my breath.
RLL-1 and Microplanner are domain-specific languages, traditional Lisp approach to building complex applications. Fact is both were written in Lisp, that's just sophistry.
Neither AM nor EURISKO would map naturally to unification with their very special heuristic-driven task prioritizing and scheduling algorithms.
I don't know what "period-true Prolog" is. You can still run programs found in
Prolog textbooks from the 1970's in modern Prolog interpreters- just like with
Lisp.
>> Fact is both were written in Lisp, that's just sophistry.
I don't see the sophistry. If I implement Lisp in Prolog, the interpreter may be Prolog, but the programs it then evaluates are Lisp.
I don't see why unification would be an impediment in implementing any
heuristics or scheduling algorithms.
Period-true as in from 1970s. There were attempts to build fuzzy Prolog dialects in late 1980s, although it seems not particularly successful.
> If I implement Lisp in Prolog, the interpreter may be Prolog, but the programs it then evaluates are Lisp.
But it does not implement Lisp in Prolog. It implements a domain specific language in Lisp in a time proven manner. Googling "DSL Lisp" will give you probably hundreds of references. I did it before for my own projects and my "languages" even had no name. I could call it VARLANG-22 or whatever, but unless I told you all you'd see is some Lisp code.
> I don't see why unification would be an impediment in implementing any heuristics or scheduling algorithms.
It does not bring anything to the table there, because that's not how those systems work. Not every project is a first order predicate logic expression, as incredible as it sounds.
We're in a weird situation here. In this thread, you're arguing that a programming language implemented in Lisp, which retains the syntax of Lisp, is a DSL, and so still just Lisp; but in the other thread, you're arguing that Prolog implemented in Lisp, with the syntax of Lisp, is not a DSL, but Prolog.
I recognise that the above comes across as an attempt at sophistry, again, but I sincerely think you are not being objective, with this line of thinking.
>> Implementing Prolog however is trivial in Lisp, so many textbooks used it as an intermediate level exercise.
I've heard this kind of thing before- but it's not true. What is usually implemented in Lisp textbooks, is not Prolog. It's a logic programming loop with pattern matching, sure, though I'm not even sure it's unification (i.e. Turing-complete pattern matching) rather than regular expression matching.
Even the SICP book makes sure to call the language it implements the "query language", rather than Prolog. The whole point of Prolog is to use the syntax and semantics of first order logic as a programming language. You can't have Prolog without the representation of programs as Horn clauses, any more than you can have Lisp without S-expressions.
And those "trivial" implementations you note don't do that- they implement a logic programming loop, but retain the full syntax of Lisp. That's why they are "trivial" to implement- and that's why they are not Prolog.
> What is usually implemented in Lisp textbooks, is not Prolog
When a Lisp textbook says it implements Prolog, it implements it with unification. See for example Paradigms of AI Programming, by Peter Norvig - the chapters on implementing a Prolog compiler...
No, they do implement Horn clauses, it's not really super hard. Had it as a homework long long ago. Syntax is irrelevant, and it's not like Prolog has complicated grammar if you want to go for canonical syntax anyway.
Oh, syntax is important. Otherwise, I can implement Lisp in Prolog trivially, by rewriting each function of arity n, by hand, as a predicate of arity n+1 (the extra argument binding to the output of the function). After that, I wouldn't even need to write an explicit Lisp interpreter- my "Lisp" program would simply be executed as ordinary Prolog.
I believe that if I were to suggest this as an honest-to-God way to implement Lisp in Prolog, any decent Lisper would be up in arms and accusing me of cheating.
And yet, this is pretty much the approach taken by the Lisp textbooks we're discussing. Their "Prolog interpreters" cannot parse Prolog syntax, therefore programs must be given to them as Lisp. Then, because Lisp is not an automated theorem prover, one must be implemented, accepting the Lisp pretending to be Prolog. That's not a trivial implementation of Prolog- it's an incomplete implementation.
Yes, Prolog has very simple syntax: everything is a Horn clause, plus some punctuation. That's simpler even than Lisp. You keep saying how simple Prolog is to implement, as if it was a bad thing. According to the article above, the simplicity of Lisp is where its power comes from.
> Otherwise, I can implement Lisp in Prolog trivially, by rewriting each function of arity n, by hand, as a predicate of arity n+1 (the extra argument binding to the output of the function)
Programming languages don't consist only of functions. Lisp is no exception.
> Their "Prolog interpreters" cannot parse Prolog syntax, therefore programs must be given to them as Lisp.
There are Lisp-based Prolog implementations which can parse Edinburgh syntax and have a Lisp syntax variant. I have a version of LispWorks, which does that.
But the syntax is just a surface - the Lisp-based Prolog does unification, etc. just as a typical Prolog implementation. It supports the same ideas of adding and retracting facts, etc. The MAIN difference between REAL Prolog implementations and most Lisp-based ones is that the real Prolog implementations provide a much more sophisticated version of the Prolog language (and more of the typical Prolog library one would expect) and some have extensive optimizations and native code compilers - thus they are usually quite a bit faster.
Interested Lisp users are usually only looking for the reasoning features of Prolog (to include those directly in programs) and not for the specific Prolog syntax. Sometimes, also to have Prolog-based parsing techniques in a Lisp program.
>> There are Lisp-based Prolog implementations which can parse Edinburgh syntax and have a Lisp syntax variant.
I was talking specifically about the Prolog implementations given as exercises in Lisp textbooks. Those, usually, are incomplete, in the ways that I describe above. LispWorks for instance, seems to be a commercial product, so I'd expect it to be more complete.
I don't expect a Prolog-as-a-Lisp-exercise to go full-on and implement the Warren Abstract Machine. But, equally, I think the insistence of the other commenter, varjag, to the simplicity and even triviality of implementing Prolog as an exercise in Lisp textbooks is due to the fact that those exercises only implement the trivial parts.
>> Interested Lisp users are usually only looking for the reasoning features of Prolog (to include those in directly in programs) and not in the specific Prolog syntax. Sometimes, also to have Prolog-based parsing techniques in a Lisp program.
That's my intuition also- as encoded in the addendum to Greenspun's tenth rule. Tongue in cheek and all :)
Lisp books, especially in AI programming, have implemented various logics - predicate logic based programming like standard Prolog is just one. Peter Norvig has a relatively extensive example of how to implement Prolog and how to compile it. Others concentrate on some other logic calculus. It's not surprising that various books have different depths in showing how to implement various embedded languages. In the case of Norvig's implementation, others have used it in their applications.
It has nothing to do with Greenspun at all. Lisp and then Common Lisp were used in many AI programming frameworks and these integrated various paradigms: rule systems, various logics, objects, frames, constraints, semantic networks, fuzzy logic, relational, ... Lisp was explicitly designed for this - there is nothing accidental about it. It is one of the main purposes of Lisp in AI programming to enable experiments and implementations of various paradigms. That means that these frameworks EXPLICITLY implement something like predicate logics or other logics. These features are advertised and documented.
Greenspun claimed that complex C++ programs more or less ACCIDENTALLY implement half of a Lisp implementation, because large enough implementations need these features: automatic memory management, runtime loading of code, dynamic data structures like lists and trees, runtime scripting, saving and loading of state, configuration of software, a virtual machine, ... These programs implement some random subsets of what a Common Lisp system may provide, but they don't implement Common Lisp or its features directly. Most C++ developers don't know that these features actually exist in Lisp and they don't care. Similarly, the Java VM implements some stuff one could find in a Smalltalk or Lisp system: virtual machine, code loading, managed memory, calling semantics, ...
Well, the addendum to Greenspun's tenth rule works even if the ad-hoc etc implementation of half of Prolog is not accidental.
Like I say, it's tongue in cheek. I think (hope) programmers are somewhat over having any serious arguments about which language is "best" at this point in time. Although perhaps my own comments in this thread disagree. I am a little embarrassed about that.
> Their "Prolog interpreters" cannot parse Prolog syntax, therefore programs must be given to them as Lisp
Unless the project has a clear requirement to re-use unmodified/untranslated Prolog code, or to produce code that will be shared with Prologs, this would only be a disadvantage. The Prolog syntax as such has no value; it is all in the semantics.
> Then, because Lisp is not an automated theorem prover, one must be implemented, accepting the Lisp pretending to be Prolog.
Also, Lisp isn't a virtual machine, so one must be implemented. There is a compiler routine which pretends to be Lisp, accepting the special forms and translating them into executable code. It's just a big cheat.
> I believe that if I were to suggest this as a honest-to-God way to implement Lisp in Prolog, any decent Lisper would be up in arms and accusing me of cheating.
If you had quote and macros and such working in this manner, just with the f(x, y) style syntax, that would be a valid implementation. There is a convenience to the (f x y) syntax, but the semantics is more important.
We can have infix syntax in Lisp as a separate module that works independently of a given DSL.
Separation of concerns/responsibilities and all that.
I'm guessing they were trying to entice the Japanese market. If I have that right, it must have all been around the time of the Fifth Generation Computer Project, which aimed to create an architecture that would run a massively parallelisable logic programming language natively.
That eventually imploded and took logic programming and, I think, the whole of symbolic AI with it. I think Symbolics, selling a very expensive computer running Lisp natively, failed soon after?
That must have been a brutal time to be involved in AI research.
Even if that is true, it's probably an issue of low severity, since that Greenspunned Prolog will probably be used for a very specific task in the overall application, not as the master glue to hold it all together. (Though we can't rule that out, of course). Also, there have been some decent inference systems written in Lisp; they can just be used off the shelf.
I would say that Prolog as a standalone language is silly: an entire programming language developed from the ground up, with numerous details such as I/O and memory management, just for the sake of supporting a logic programming DSL.
Writing all this stuff from scratch for every domain-specific language is very unproductive, and has the effect of segregating all those languages into their own sandboxes that are hard to integrate into one application.
I've assisted with the teaching of Prolog at my university (as a TA) and the first task is usually something like the append/3 predicate (used to append two lists, or split them, etc).
Last year, the end-of-term exercise involved the farmer-wolf-goat-cabbage problem, which can be (and was) solved just fine with Prolog's built-in depth-first search.
I think though you may be talking of forward chaining, rather than depth-first search? That is, indeed a classic exercise in Prolog.
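For anyone without a Prolog to hand, here is the same puzzle as a plain depth-first search in Python; Prolog's engine does the backtracking for you, while here it is spelled out.
ITEMS = ('farmer', 'wolf', 'goat', 'cabbage')
START = ('west',) * 4
GOAL = ('east',) * 4
def safe(state):
    farmer, wolf, goat, cabbage = state
    if wolf == goat and farmer != goat:
        return False        # wolf eats goat
    if goat == cabbage and farmer != goat:
        return False        # goat eats cabbage
    return True
def moves(state):
    farmer = state[0]
    other = 'east' if farmer == 'west' else 'west'
    for i, side in enumerate(state):
        if side == farmer:  # farmer crosses alone (i == 0) or with item i
            nxt = list(state)
            nxt[0] = other
            nxt[i] = other
            yield tuple(nxt)
def solve(state=START, path=None):
    path = [START] if path is None else path
    if state == GOAL:
        return path
    for nxt in moves(state):
        if safe(nxt) and nxt not in path:   # prune unsafe and repeated states
            result = solve(nxt, path + [nxt])
            if result:
                return result
    return None
for step in solve():
    print(dict(zip(ITEMS, step)))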
http://kingjamesprogramming.tumblr.com/ contains a selection of "verses" generated with a markov chain trained on SICP, the King James Bible and some other works. The results are oftentimes hilarious and some of the Lisp related quotes are very thematic:
13:32 And we declare unto you the power and elegance of Lisp and Algol.
Lisp, whose name is Holy
(and one that doesn't necessarily mix in something from King James)
A powerful programming language should be an effective program that can execute any Lisp program.
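The mechanics behind that kind of generator are pleasingly small. Here is a tiny word-level Markov chain sketch in Python; the real site is presumably more elaborate, and the corpus here is just a stand-in for SICP plus the King James text.
import random
from collections import defaultdict
corpus = ("the evaluator which determines the meaning of expressions "
          "in a programming language is just another program "
          "and the meaning of the word of the LORD endureth for ever")
def build_chain(text, order=2):
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])      # each run of `order` words...
        chain[key].append(words[i + order])  # ...maps to the words seen after it
    return chain
def generate(chain, length=15):
    key = random.choice(list(chain))
    out = list(key)
    for _ in range(length):
        followers = chain.get(tuple(out[-len(key):]))
        if not followers:
            break
        out.append(random.choice(followers))
    return ' '.join(out)
random.seed(13)
print(generate(build_chain(corpus)))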
For whatever reason I really enjoyed the Symbolics graphics and animations demo[0] linked in the article.
I was born in the 90s and have constantly heard the narrative that technological progress is getting faster and faster, and that we're currently at the forefront. And then I see something like this. They had basically photoshop before I was even alive!?
Paint programs go back to Xerox Parc in the 70's[1], at the very least. In the 80s and 90s there were a bunch of high-end paint programs for Symbolics, SGI workstations, Quantel Paintbox [2] etc. They were more video-graphics oriented than Photoshop, which historically at least, had a strong print emphasis.
I love computer graphics history, and loved the mystique of "high-end" computing when I was making crap pixel art on my Amiga:)
Another cool thing is, in that video, the artist is basically using the same box modelling techniques I did when I took a Maya course a couple of years ago. In 1991 I was 3d modelling on the Amiga, and although it was awesome to have access to that software at home, modelling was much more cumbersome and limited.
Now, to blow your minds even more here [3] is a recent demonstration of that same 3d modelling program. At 19 minutes in, he switches to the Lisp Listener console, and uses Lisp to inspect and modify the data belonging to the 3d model, before switching back to the 3d app to view the changes. Lisp machines were incredibly well-integrated platforms for the expert user.
Imagine 2.0 from an Amiga Format coverdisk (like most of my application software when I was a teenager). I remember reading about Caligari, but I never used it.
"The Apple Macintosh combined brilliant design in hardware and in software. The drawing program MacPaint, which was released with the computer in January of 1984, was an example of that brilliance both in what it did, and in how it was implemented."
"The high-level logic is written in Apple Pascal, packaged in a single file with 5,822 lines. There are an additional 3,583 lines of code in assembler language for the underlying Motorola 68000 microprocessor"
But, almost two decades before that, in 1968(!), came not a paint program as such, but all kinds of interactivity and simple drawing using the first mice ever.
> heard the narrative that technological progress is getting faster and faster
What direction though? Some would claim upward, others would claim circular. Given the prevalence of "x implemented in y" I tend toward circular myself.
What are you talking about? Paint is still incredibly barebones and Photoshop 1.0 already had lots of major features that Paint lacks. Magic wand select, stamp, filters, perspective transform, etc.
>> They do this even though Lisp is now the second-oldest programming language in widespread use, younger only than Fortran, and even then by just one year.
And the third-oldest is COBOL (it's at least as widespread as FORTRAN; arguably, it's even more widespread than both FORTRAN and LISP together, considering that it's used by pretty much every financial org on the planet).
It seems that, already from such an early time, the kind of languages we would end up creating was already pretty much set in stone: FORTRAN, as the granddaddy of languages aimed at scientists and mathematicians, that modern-day R, Python, Julia etc draw their heritage from; LISP as the grandmother of languages aimed at computer scientists and AI researchers, still spawning an unending multitude of LISP variants, including Scheme, ML and Haskell; and COBOL, the amorphous blob sitting gibbering and spitting at the center of the universe of enterprise programmers, that begat the Javas, VBs and Adas of modern years.
(Note that I'm referring to language philosophy and intended uses- not syntax or semantics).
(I'm also leaving out large swaths of the programming community: the Perl users and the C hackers etc. It's a limited simile, OK?).
On the history of Lisp, Richard Gabriel's paper “Lisp: Good News, Bad News, How to Win Big” (https://www.dreamsongs.com/WIB.html) is insightful and beautifully written.
I am surprised it is not mentioned in the article with regard to the “winter period” in which Lisp's popularity waned.
I think this article reverses cause and effect. It seems to start from the assumption that there's nothing special about LISP and then points to big cultural moments where programmers revered it and says "this accounts for 20% of the meme" etc.
I'd say those cultural moments exist because LISP /is/ something special. You could write SICP in Java (someone probably has) but the code would be way longer and less beautiful.
As a dev from a non-CS background, I personally haven't learnt anything about Lisp, but I would like to. Is Clojure (specifically ClojureScript) a good place to start?
Clojure is a modernised version of Lisp. It compiles to Java bytecode and runs on the JVM. ClojureScript compiles to JavaScript for use in the browser. Some amazing programming tools have been written for ClojureScript. The community is very robust and opinionated, in a good way I think.
Clojure is very modern with its vectors and maps. Lisp is more of an antique, though a very valuable antique. It's very interesting to learn about both at the same time, as I did :)
I just want to say that the tooling setup isn't great and you kind of have to buy into a small ecosystem to do ClojureScript dev; the easiest is probably to use lein + lein-figwheel. If you have any questions feel free to ask!
The principle of orthogonal design is something I learned in CS, but hardly anyone mentions it any more. The idea boils down to building software parts in a consistent way such that they can be combined and re-used to form new things. The way you accomplish this is by having very few rules. The more "syntaxy" a language is, the less orthogonal it is.
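To make that concrete with a toy sketch (Common Lisp; TOTAL-EVEN-SQUARES is a name I just made up), three generic parts that know nothing about each other combine into new behavior with no special syntax:

  ;; a minimal sketch: generic pieces composed into something new
  (defun total-even-squares (numbers)
    "Sum the squares of the even numbers in NUMBERS."
    (reduce #'+
            (mapcar (lambda (n) (* n n))
                    (remove-if-not #'evenp numbers))))

  ;; (total-even-squares '(1 2 3 4 5))  => 20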
I loved this book as a kid, but I'm really not sure I would recommend it today. The Lisp of that era had "dynamic scope," and a great deal of Allen's book is concerned with fancy data structures and techniques to implement it in a reasonable fashion. But today I think we understand that stuff as basically wrong, and that "lexical scope" works better (and is much closer to the original lambda calculus that served as an inspiration). There are probably some proponents of dynamic scoping, but I think it's a lost battle.
So definitely yes, if you want to implement a historic Lisp. Otherwise, not so much.
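If it helps, here's a small sketch of the difference in Common Lisp, which kept both kinds of scope (DEFVAR variables are dynamically scoped "specials"; ordinary LET bindings are lexical); the names are made up:

  (defvar *level* 0)                    ; special variable: dynamic scope

  (defun show-level ()
    (format t "level = ~d~%" *level*))

  (defun dynamic-demo ()
    (let ((*level* 1))                  ; rebinding visible to everything it calls
      (show-level)))                    ; prints: level = 1

  (defun lexical-demo ()
    (let ((level 1))                    ; lexical binding, invisible to SHOW-LEVEL
      (declare (ignorable level))
      (show-level)))                    ; prints: level = 0

The old dynamically scoped Lisps behaved like the first case for every variable, which is what all that implementation machinery in Allen's book was for.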
I know what you mean and I think that feeling comes from a difference between lisp the language (learn in a matter of hours) and lisp the ecosystem (takes much longer to get confident with). And, you really need to learn both, unless you have time to make everything from scratch.
Well, you need to learn both if you're going to make a career of working in Lisp.
But even if that isn't in the cards (and it isn't for most of us, for all sorts of reasons), there's still a whole lot of value in learning the language enough to go through and make a few things from scratch.
I tend to agree with ESR on the subject: "LISP is worth learning for a different reason — the profound enlightenment experience you will have when you finally get it. That experience will make you a better programmer for the rest of your days, even if you never actually use LISP itself a lot." (http://www.catb.org/esr/faqs/hacker-howto.html)
Look, here is the deal with Lisp. It shows you the data structure of your computer program and lets you operate on it and that is neat. But it's not useful for actual work because the way you manipulate it makes it hard for you to understand what is happening without knowing all the ways your code is being manipulated. I have to write lisp for my editor (emacs) and I don't hate it, but I don't love it either. The syntax is hard to read quickly because it's cluttered and (usually) nest-y.
If you're willing to trade complete purity away, try Ruby. It's basically everything you want from Lisp without the mess. Give up purity, get comprehensibility.
Blocks are a really great way of doing things. You can even investigate the block source code as a string if you really want to.
Dynamic method definition is well supported and predictable. Data structures are easy to compose and operate on. It's basically all the power but in a more comprehensible way. There's a reason why Rails came out of Ruby. It's naturally powerful.
I don't get the whole "macros make your code incomprehensible" thing that is always brought up. A macro is just another way to add abstraction to your program.
You might as well say functions are bad because without reading the source code for the function you don't know what the function does.
A macro that looks like one thing but does something else is a bad macro. We don't throw away functions just because someone can write a function named "sort" that actually randomizes its argument rather than sorting it.
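A sketch of the well-behaved kind I mean (Common Lisp; WITH-TIMING is made up): it abstracts a pattern you'd otherwise repeat by hand, and it does exactly what its name says:

  (defmacro with-timing (&body body)
    "Run BODY, print the elapsed wall-clock time, and return BODY's value."
    (let ((start (gensym "START")))
      `(let ((,start (get-internal-real-time)))
         (prog1 (progn ,@body)
           (format t "~&elapsed: ~,3f s~%"
                   (/ (- (get-internal-real-time) ,start)
                      internal-time-units-per-second))))))

  ;; (with-timing (sleep 0.2) :done)
  ;; prints something like "elapsed: 0.200 s", returns :DONE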
Macros are way more powerful than functions. They can hide way more side effects, can fill a namespace, and can surprise you in many other ways.
This wouldn't be a problem if those surprises were rare and clearly marked, but the entire reason for macros to exist is to carry the surprises. As a consequence, having macros as the default (ok, second choice, not much better) tool of your language is bad. It's not that macros are bad by themselves, but they shouldn't be used often.
Besides, powerful tools do not go well together with dynamically typed languages.
> Macros are way more powerful than functions. They can hide way more side effects, can fill a namespace, and can surprise you in many other ways.
Abstractions that surprise you are bad. That doesn't mean the tool used was necessarily bad.
> This wouldn't be a problem if those surprises were rare and clearly marked, but the entire reason for macros to exist is to carry the surprises.
See above re: surprises. Also, I can usually identify a macro from indentation, as most macros tend to have lambda lists similar to:
((FOO BAR &key BAZ) &body b)
which SLIME will pick up on and indent appropriately.
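For instance, a hypothetical macro with roughly that lambda list (WITH-WIDGET and the pretend widget are made up, just to show the shape):

  (defmacro with-widget ((var name &key (size 10)) &body body)
    "Bind VAR to a fresh (pretend) widget named NAME around BODY."
    `(let ((,var (list :name ,name :size ,size)))   ; stand-in for real construction
       ,@body))

  ;; SLIME indents a call like a binding form, so it reads at a glance as "not a function":
  ;; (with-widget (w "gauge" :size 3)
  ;;   (getf w :size))   => 3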
> It's not that macros are bad by themselves, but they shouldn't be used often.
If by "use" you mean "write" I agree. I do write macros far less often than I write functions, and this is common advice for lisp programmers anyways.
> Besides, powerful tools do not get well together with dynamically typed languages.
Not all Lisps are dynamically typed; even Common Lisp has optional type declarations, and Typed Racket takes this further. Also, the GP post suggested Ruby, so that's not a great alternative by this argument.
I upvoted you, as I share your sentiments with regards to Ruby and Lisp, but language evangelism belongs in discussion centered on that language. It's just offtopic otherwise. This is the thread glorifying Lisp. People want to hear what makes Lisp great.
Having spent months learning Haskell, I'm interested in picking up another mind-expanding language. If I read SICP (and also watch the MIT lectures), what dialect should I follow along in? Ideally I would learn something people are using today so there would be usable libraries. I mostly write website backends and APIs.
DrRacket IDE [0] + the SICP compatibility language [1], and you can start writing it instantly in a well-built and maintained environment that’s Racket-based and pretty fleshed out library-wise. Certainly nothing compared to Clojure, but among the rest it’s the best (imo). I recall Carmack writing a server in Racket for fun and praising the experience a few years back.
Additionally, if SICP proves too slow-going or difficult math-wise [3], you can always use DrRacket for HtDP [4] and its corresponding misnamed edX course(s) [5], and later on, PLAI [6].
It's the one that was used for the course at MIT. You're not really going to dig into real-world useful libraries when doing SICP anyway. It's a bit barebones; personally I enjoyed writing a bit of tooling while working through the book, though admittedly it doesn't have the best dev experience out of the box.
Then when you do an actual project you can pick what works for that project. The fundamentals should carry over easily enough.
DrRacket has an SICP language mode, and I can attest that most of the code in the book works without modification. After that you can easily switch to the Racket language, which seems to be a modern, popular dialect of Scheme with plenty of libraries.
SICP is not so much about a particular language. I think there's a Clojure port of SICP somewhere, and Clojure has a good impedance match with web backends. If you want an "authentic" experience, Scheme would be a good choice.
Yep, and he made mucho dinero when they sold it to Yahoo. As far as I understood it, Lisp enabled them to implement features so fast that their would-be competitors couldn't keep pace.
and yet. pg has been cheerleading for lisp since 2002 or so, and almost nobody has followed in his footsteps. the reddit guys believed him, tried it, and wound up doing a complete rewrite in python.
Seems to me that the uptake has been picking up, in no small part thanks to "Hacker News". Bear in mind that since Lisp programs get compiled to machine code, you might be using software written in it and not know it. And that's how it should be: high-quality software should be small, fast, and easy to install, and the user shouldn't have to care what it's written in.
I have seen his essay on this and I am curious as to how relevant it is today, not so much because of languages, but because most languages have frameworks that will speed up a lot of the development time when used properly.
You’d get 90% of the way there with just font selection, paragraph spacing, and margins. Maybe colors as well.
Butterick’s typography stuff and Tufte will carry you through, and then just steal some more specific detailing from sites like this, gwern, xxyxyz etc.
I compiled a list of such sites I was planning to steal details from for my own blog, but never actually followed through on, if you care: https://github.com/setr/pollen-site/
Would it be worth going through SICP without prior knowledge of Lisp (i.e. could I pick it up from the book), or is it better to have some knowledge beforehand?
At the time the book was written it was not uncommon for a new MIT student to arrive without having used a computer at all. 6.001 was intended to be the first introduction to a computer so not only is no lisp assumed but no programming at all!
It also depends on how motivated you are. Even in 2005, most of us in India couldn't afford computers. I remember we wrote 8085 programs entirely on paper. Limited time was available, with limited kits in the lab, but we were motivated enough to do it on paper alone.
These days, when everything is supposed to be easy, newbie-friendly, accommodating and all that, people have a tendency to quit early and expect the ecosystem to make it easy for them.
These days you can get a decent computer for under $100 if you use a Raspberry Pi. I would have done anything for something like that a decade back.
I wouldn't quite say motivated - we had no choice. You either wrote and debugged the 8085 assembly on paper before you ran it on the board, or you didn't do it at all :)
While I understand (and respect) the sentiment, having gone through much the same, I wouldn't disparage something being newbie-friendly. That isn't necessarily a bad thing.
Yes. Lisp is fundamentally simple and is introduced in the beginning. I can recommend Racket with the sicp language extension to get the dialect used in the book.
I’m just finishing chapter 1 (1/5th of the book) and the math so far hasn’t been too challenging, but you should certainly go in with the expectation that you’ll do some Googling to figure out some context to complete the exercises.
SICP only uses a small subset of Scheme, and it's only using it as a minimal vehicle for exploring computer science. You can absolutely learn the amount of Scheme used in the book as you go (that's why they chose that subset: it's not supposed to distract you from the topics they're discussing; they're trying to get as close as they can to simple axioms), but you won't really "know Lisp" (or Scheme) in practical terms when you're done, either (you'll be well-equipped to pick it up quickly, though).
Assembly is too honest about what the program is doing to be the Devil's work. That would be C with undefined behavior kicking in on malicious input from the Devil's children.
Indeed. Assembly is the language of the nameless ancient horror that was there before the devil was born. The language of the devil is C and the languages of his demons are JavaScript and PHP.
i think there are way too many widely divergent CPUs for that statement to make any sense. there is 6809 assembler, 8086 assembler, 68000 assembler, etc. some of them are simple, quite a few are not.
And before Assembly there was VHDL and Verilog, and the eldritch instruments that turn them into baroque patterns of silicon that cannot be contemplated by man (well, not since the 1970s...)
I mean, if we stick with the spirit of the article I think that it's even more likely to be Fortran. Seemingly interesting, accomplishes certain very specific tasks well, turns into an absolute monolithic nightmare when encountered in the wild.
Not if you plan and architect it well. BT back in the day wrote a billing system for its dialcom / telecom gold online service in mostly Fortran 77.
Apart from one module, which our US colleagues wrote in a completely different language (PL1/G). We only found that out when it was delivered.
And at my first job at BHRA / BHR Group, our fluid dynamics simulator was written in Fortran and was well structured. I certainly don't recall any major problems.
No, the devil's programming language is lisp in mid-size or larger team settings. People feel so empowered individually with the language, but I doubt all those same people have spent time maintaining someone else's code. Death knell of perl, too.
I think the test to find out if a programmer can eventually learn to work on a Lisp codebase lies in his/her opinion about the conditional (ternary) operator.
If the programmer hates the C(T)O because it is too confusing, that programmer is hopeless about using Lisp.
If the programmer sees the C(T)O as a trivial syntax that helps to make the code short and neat, then that programmer will love Lisp.
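For what it's worth, part of why that test works is that in Lisp IF is already an expression, so the ternary's whole job is just the ordinary conditional. A sketch (function names made up; the C-ish line is only there for comparison):

  ;; C/JS:  label = (n % 2 == 0) ? "even" : "odd";
  (defun parity-label (n)
    (if (evenp n) "even" "odd"))

  ;; and COND scales past two branches without any extra syntax:
  (defun sign-label (n)
    (cond ((plusp n)  "positive")
          ((minusp n) "negative")
          (t          "zero")))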
I work in JavaScript on a regular basis, and I have grown to despise them because of their overuse when a simple if statement would have made the code much clearer.
The former was, at the time, very influential (and deservedly so); the latter was and remains the best reference for actually using Common Lisp to write real software (Edi Weitz's Common Lisp Recipes is an excellent companion volume).
Back in 2007 I thought it was silly that SBCL (and Smalltalk too) distributed its applications as "images". Seems they've been re-invented today as "containers", which are suddenly an amazing idea.
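For anyone who hasn't seen it, this is roughly what shipping an image looks like in SBCL today; MAIN is a hypothetical entry point you'd write yourself, and "myapp" is just an arbitrary file name:

  (defun main ()
    (format t "hello from a saved image~%"))

  ;; dumps the whole running Lisp, with MAIN as the entry point, to an executable file
  (sb-ext:save-lisp-and-die "myapp" :toplevel #'main :executable t)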
Lisp and Scheme are great. They would be my favorite programming languages, if they had a statically typed, Hindley-Milner type system.
As they stand, they are great learning tools, but I would never build something serious with them. Let alone questions about parallelism, concurrency, available libraries, development tools, etc.
I think that part of the power of LISP is its dynamic nature, but if you want typed options they exist for Clojure and Racket. There is also development on a language called Carp that aims to be a Clojure-style language for C: https://github.com/carp-lang/Carp.
I'm interested to see how contracts (`clojure.spec` is a new implementation of them in Clojure) can be used to make reading, debugging and maintaining Clojure easier.
Everybody's just praising lisp like it's the best language ever, yet very few people are actually using it - and I think it's because it's really easy to write smart code which has to be explained over and over again to new people (and to your future-you).
"Thanks a lot for this insightful reply! I've read about how
powerful are Lisp languages (for example for AI), my question is:
does Emacs really use all this theoretically powerful functionality
of these languages? In what way is this metalinguistic abstraction
used? In the built-in functions of Emacs, the powerful packages
made by the community, or the Elisp tweaking of a casual Emacs user
to customize it (or all three of those).
I've read a lot of people praising and a lot of people despising
Elisp. Do these people who dislike Elisp do it because they want a
yet more powerful Lisp dialect (like Scheme) or because they want
to use a completely different language?
PD: Excuse my ignorance, I'm still learning about programming. As a
side note, would you recommend me to read SICP if I just have small
notions of OOP with Python and Java and I want to learn more about
these topics? Will I be able to follow it?
Let me start from the end: Reading SICP changed everything I thought I knew about programming and shattered any sort of non-empirical foundation - that I had built up to that point - regarding how my mind worked and how I interfaced with reality. It's not just a book about programming, there are layers of understanding in there that can blow your worldview apart. That said, you do need to make an effort by paying attention when you go through the book and (mostly) doing the exercises. The videos on youtube are also worth watching in-parallel with reading the book. The less you know about programming when you go through SICP, the easier it will be for you to "get" it since you'll have no hardwired - reinforced by the passage of time and investment of personal effort - prior notions of what programming is and how it should be done.
* Metalinguistic abstraction
Short answer: all three.
Long answer: The key idea behind SICP and the philosophy of Lisp is metalinguistic abstraction which can be described as coming up with and expressing new ideas by first creating a language that allows you to think about said ideas. Think about that for a minute.
It follows then that the 'base' language [or interpreter in the classical sense] that you use to do that, should not get in your way and must be primarily focused in facilitating that process. Lisp is geared towards you building a new language on top of it, one that allows you to think about certain ideas, and then solve your problems in that language. Do you need all that power when you're making crud REST apps or working in a well-trodden domain? Probably not. What happens when you're exploring ideas in your mind? When you're thinking about problems that have no established solutions? When you're trying to navigate domains that are fuzzy and confusing? Well, that's when having Lisp around makes a big difference because the language will not get in your way and it'll make it as easy as possible for you to craft tools that let you reason effectively in said domains.
Let's use Python as an example since you mentioned it. Python is not that language, since it's very opinionated, constrained by its decisions in the design space and, additionally, deliberately created with entirely different considerations in mind (popular appeal). This is well illustrated by the idiotic Python motto "There's only one way to do it", which in practice isn't even the case for Python itself. A perfect example of style over substance, yet people lap it up. You can pick and choose a few features that superficially seem similar to Lisp features, but that does not make Python a good language for metalinguistic abstraction. This is a classic example of the whole of Lisp being much more than the sum of its parts, and in reality languages like Python don't even do a good job of reimplementing some of these parts. This is the reason I don't want to just list a bunch of Lisp features that factor into metalinguistic abstraction (e.g. macros and symbols).
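To give a toy illustration of what building a language on top of Lisp can look like (Common Lisp; REPORT-RESULT and CHECK are made up for the example): a few lines grow a tiny checking language out of the base language, and from then on you think in CHECK rather than in the plumbing underneath it:

  (defun report-result (result form)
    "Print pass/FAIL for FORM and pass RESULT through."
    (format t "~:[FAIL~;pass~] ... ~s~%" result form)
    result)

  (defmacro check (&body forms)
    "Evaluate each form, reporting whether it returned true."
    `(progn ,@(mapcar (lambda (f) `(report-result ,f ',f)) forms)))

  ;; (check (= (+ 1 2) 3)
  ;;        (evenp 7))
  ;; pass ... (= (+ 1 2) 3)
  ;; FAIL ... (EVENP 7)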
* Feedback loops
The other key part of Lisp, and also something expressed fully by the Lisp machines, is the notion of a cybernetic feedback loop that you enter each time you're programming. In crude, visual terms:
[Your mind - Ideas] <--> Programming Language <--> [Artifact-in-Reality]
You have certain ideas in your mind that you're trying to manipulate, mold and express through a programming language that leads to the creation of an artifact (your program) in reality. As you see from my diagram, this is a bidirectional process. You act upon (or model) the artifact in reality but you're also acted upon by it (iterative refinement). The medium is the programming language itself. This process becomes much more effective the shorter this feedback loop gets. Lisp allows you to deliberately shorten that feedback loop so that you _mesh with your artifact in reality_. Cybernetic entanglement if you will. Few other languages do that as well as Lisp (Smalltalk and Forth come to mind). Notice that I emphasized your mind and reality/artifact in that previous diagram, but not the medium, the programming language. I did that in order to show that the ideal state is for that programming language not to exist at all.
* Differences between Lisps
All Lisps allow you to express metalinguistic abstraction (they wouldn't be Lisps otherwise). Not all Lisps allow you to shorten the feedback loop with the same efficiency.
The Lisps that best do the latter come out of the tradition of the Lisp machines. Today this means Common Lisp and Emacs Lisp (they're very similar and you can get most of what Common Lisp offers on the language level in Emacs Lisp today). For that reason, I don't think Scheme is more powerful than Emacs Lisp, since Scheme lacks the focus on interactivity and is very different to both CL and Emacs Lisp.
As far as other people's opinions go, my rule of thumb is that I'd rather educate myself about the concepts and form my own opinions than blindly follow the herd. Which is why I also think that people who are sufficiently impressed by an introduction to Lisp (such as the OP article) to want to learn it and ask "Which Lisp should I learn? I want something that is used a lot today" are completely missing the point.
You'll notice that most programming today is done for money, in languages that are geared towards popularity and commoditization. For me, programming is an Art or high philosophy if we take the latter to stand for love of wisdom. And as someone has said, philosophical truths are not passed around like pieces of eight, but are lived through praxis.
P.S. The talk by Gerry Sussman (https://www.infoq.com/presentations/We-Really-Dont-Know-How-...) that I saw mentioned in HN yesterday provides an excellent demonstration of metalinguistic abstraction and also serves as an amalgamation of some of my other ideas about Lisp.
What I love about Lisp, ML and logic languages is how they come down to the basics of CS, Algorithms + Data Structures.
Ideally the same solution can then be applied to a single core, hybrid CPU + GPU, clustered environments, whatever.
Yes, abstractions do still leak, especially if optimization for the very last byte/ms is required, but productivity is much higher than if that were a concern for each line of code being produced.
And 90% of the times the solutions are good enough for the problem being solved.
Are they? They are the very thing that lambda calculus models computation on. They are not the very thing that Turing Machines model computation on. They also are not the very thing that CPUs use, or that assembly uses, or even that most high-level languages use. (Unless you're going to say that most high-level languages use a recursive descent parser to generate the binaries that the CPUs run, and that's the basis for your statement...)
Neither lambda calculus, nor Turing machines "model computation on" recursive functions. Rather, these two, and all the other models of computation are equivalent as they compute the same class of functions from the naturals to the naturals. And we call that class the class of recursive functions.
That being said, a computer is a physical device that is able to compute this entire class of recursive functions. It is what makes it a computer and not a pen or a chair. It's not some esoteric notion. It's the whole thing about it.
That's like saying that the basis of computation is the standard model, since all actual physical computers are made out of particles that are in the standard model. It may be true, but it's so far abstracted from what's actually going on that it's not useful at all.
I must agree. "God's Own Programming Language" (not meant as satire) is certainly indicative of the psychological phenomenon of a superiority complex in the Lisp community.
Yeah, even MIT eventually gave up on using Scheme as a first programming language.
The Lisp community may have the worst ratio of self-regard to actual accomplishments in our entire industry. Not one of the great companies is built on Lisp. I don't think it's even a second-tier language at the FAANGs.
Why do you think it's a circlejerk? I'm yet to see an idea in the domain of PLs that's as effective as lisp (dependent types come close). Lisp is a whole package; it solves a bunch of problems very elegantly. All "good" languages solve problems one by one by introducing exceptions and rules and features, whereas lisp did it by introducing a totally novel, elegant concept. It's hard not to be mystified by lisp. I'm not an old-school hacker, but there is certainly something fascinating about lisp.
Lisp - oh what it could have been. It had such potential, but then it got broken. I find it fascinating that those who are dedicated to the proselytisation of Lisp don't see the brokenness of the language. For them, all of the broken things are the features of the language.
Scheme was one attempt to fix some of those flaws.
In later times, we see the development of Kernel to fix other flaws.
So many second class citizens, so many exceptions to the rule.
I am going through the source code for Maxima CAS (written in Lisp) and in so many ways, it's a mess. I am not at all disparaging those who have been involved in writing the Maxima CAS system and its source. They have done an incredible job and what they have achieved is remarkable.
However, like any software system of any complexity in any language, it has lots of areas that are difficult to maintain, let alone advance. In that regard, Lisp has not been as advantageous a language as it could have been.
Lisp (as in Common Lisp and its add-ons) is not a simple language and it is not a consistent language (see CLHS - Common Lisp Hyper Spec docs).
When I first came across it in the late 1970s, I thought "wow". But its flaws quickly came to the fore.
So, there is no way that it would ever be God's own programming language. Especially since, God doesn't need to program, that's just for us very limited mortals.
Maxima (née Macsyma) is a program dating back to the 1960s—1968 specifically—that has largely kept the style from the 1960s. Isn’t it a bit amazing you’re even able to read a modern, running, maintained program that’s lasted for 50 years? Five decades is an enormous amount of time that should really be appreciated, even if for a moment, especially in the era of extremely rapid technological growth. I’d estimate at least three, if not more, generations of programmer have worked on Macsyma/Maxima.
Maxima could be modernized with even the features of Common Lisp, but it’s very architecturally stuck in its ways. It’s rife with lists-as-data structures, symbol-plists, and other early Lisp baggage that isn’t found in modern Common Lisp code. The Maxima developer team is also not interested in gratuitous modernization.
I don’t think Maxima’s style is a reflection on Lisp so much as it is a reflection of the style of programming at the time, and a reflection on the style of maintenance.
> Isn’t it a bit amazing you’re even able to read a program that’s lasted for almost 60 years?
Not really. One would expect to be able to read most kinds of programs once you get into the language used. Though there are various languages that deliberately inhibit understanding - these tend to be somewhat esoteric in nature and were intended to be difficult to read.
Maxima uses Common Lisp and many of those who support the code base use Common Lisp to do so. The source code problems are a reflection of Lisp as is the source code of other major Lisp programs still being actively maintained. I have a number of these and my investigations have led me to believe that Lisp and Common Lisp, in particular, are in no way the panacea that the Lisp community portrays.
It is more about how you go about your development and how you document your code than about the language you use to write it. I personally have a language of choice. It allows me to solve the problems I have more easily than other languages would. However, there are plenty of warts on it and it can quite easily get in the way of problem solving.
Regardless of whether you like Lisp or hate it, the language has inconsistencies and problems which will come back and bite you. If you love the language and it works for you, good. But be honest when talking to others about the problems with the language and what it cannot do. This will at least let others who have not been introduced to it to make a fair value judgement about whether or not to give it a try.
As a matter of course, when discussing programming languages with young people, I encourage them to look at all sorts of languages apart from the ones they are familiar with. These languages include Lisp, Fortran, Algol 60, Simula, Smalltalk, Forth, Icon/Unicon, OCaml to just name a few.
I let them know that each language allows them to look at different kinds of problem solving techniques and that each is a welcome tool in their repertoire that will allow them to continue advancing in their skills.
Maxima is a direct port of Macsyma to Common Lisp and has been maintained in CL since the mid 80s. But it has not been re-architected to take advantage of the many improvements in CL. Macsyma took advantage of the backwards compatibility of CL with Maclisp.
> The source code problems are a reflection of Lisp as is the source code of other major Lisp programs still being actively maintained
Many other Lisp programs have radically different source code / architecture from Macsyma. Check out the architecture of Reduce or Axiom. Maxima itself implements a language on top of Lisp.
Reduce is written in Portable Standard Lisp (PSL). It uses an algebraic Lisp (without s-expression syntax), itself written in PSL.
Axiom implements an advanced statically typed language on top of Lisp and the original implementation makes EXTENSIVE use of literate programming.
If you want to see what actual CL-specific code looks like, you would need to look at code bases which have been architected for CL - from the end of the 80s / early 90s onwards, when CLOS and a bunch of other things were added to the language.
One of the early great code bases for CL was the prototype implementation of CLOS with a meta-object protocol: Portable Common LOOPS.
> It is more about how you go about your development and how you document your than about the language you use to write your code.
Lisp is called a programmer amplifier. Unfortunately bad programmers get amplified, too.
Software engineering research has had the goal of creating better programming languages. That led to Pascal, Ada, ... and a bunch of other languages.
Lisp is the exact opposite. Lots of freedom. With freedom comes responsibility. I have seen great Lisp code bases and also stuff I could not understand (basically write-once code).
For me the ability to write extremely descriptive Lisp code is a huge attraction. But I know that one can easily write large programs which are hard to understand - and not just because they lack documentation.
I am looking at Axiom and I find it to exhibit the same kinds of problems. As a programmer, I find Common Lisp requires a significant additional burden to understand what "tricks" may be in use. I don't like languages like Java, C#, C++, etc but I can generally follow without too much burden what is going on, and if I have to update some code in these languages, fine.
Common Lisp requires a much higher mental burden just to understand what the code may be doing and it takes somewhat more effort to ensure that you aren't screwing up your code base.
If you are immersed in Common Lisp and CLOS most of the time, you have already assimilated the knowledge that others will have to acquire to get to a level that they can safely modify the code base. This is the point that most Lisp aficionados miss. Steep learning curves are not helpful in "off the cuff" maintenance.
I have refused to update badly written programs when it would take huge amounts of time to learn all the gory details just to make a simple change - it is not worth the angst.
I am interested in the code bases for both Maxima and Axiom and am slowly working through resolving the underlying semantics and algorithms used in both systems. Both systems provide an alternative language in which you can write algorithms for each system. Why do that if Lisp is the "bees knees" so to speak?
You highlight that software engineering has led to languages like Pascal, Ada and others, and that Lisp is the opposite. Pascal was nobbled by its implementation (by design), yet it too could have been so much more by removing the second-class status of its types and structures, as well as other areas of second-classness.
You raise the concept of a programmer amplifier, yet there are many languages that will allow this. Lisp is only one of them. I heard the same said of Forth, Smalltalk and others.
I have seen beautifully written code in all sorts of languages, including Cobol of all languages. I have seen absolutely awful code written in all sorts of languages - some of that code has been my own throw away and get the task done now junk.
The point is that there is no language so far ahead of all the others, irrespective of what anybody may believe. All languages have limitations that make them a pain to use in some way or another. This is one of the reasons I like learning new languages: someone somewhere has given thought to solving some problem in a more amenable way. Lisp is just one of many that each of us should have a familiarity with. We have lost a lot over the decades by failing to teach each succeeding generation the wide range of languages that have been made available to us.
> I am looking at Axiom and I find it to exhibit the same kinds of problems. As a programmer, I find Common Lisp requires a significant additional burden to understand what "tricks" may be in use.
I wouldn't expect that you can understand something like Axiom from 'looking' at it. Axiom is easily one of the most capable programming systems ever created.
> I am interested in the code bases for both Maxima and Axiom and am slowly working through resolving the underlying semantics and algorithms used in both systems. Both systems provide an alternative language in which you can write algorithms for each system. Why do that if Lisp is the "bees knees" so to speak?
I don't think Lisp is the "bees knees".
Lisp can implement a full new programming system for the domain of mathematics. Many other programming languages have been developed in Lisp (for example ML) or have implementations written in Lisp (from C to Prolog and Python). Computer Algebra systems like Reduce, Axiom and Macsyma target mathematicians and their special notations. There are other systems for mathematics, which use Lisp syntax: for example Kenzo https://github.com/gheber/kenzo
> If you are immersed in Common Lisp and CLOS most of the time, you have already assimilated the knowledge that others will have to acquire to get to a level that they can safely modify the code base. This is the point that most Lisp aficionados miss. Steep learning curves are not helpful in "off the cuff" maintenance.
Same for any JavaEE software... any sufficiently complex programming system requires lots of education.
> You raise the concept of a programmer amplifier, yet there are many languages that will allow this. Lisp is only one of them. I heard the same said of Forth, Smalltalk and others.
There are a bunch. Smalltalk isn't that much a programmer amplifier because of its language, the way Lisp is - that's more a factor of its original interactive Smalltalk IDE - which might also benefit from how the Smalltalk system works.
The Lisp idea of a programmer amplifier is based on the observation that one can write extremely complex software with a small team (example: Cyc) and that one can write large amounts of code in a much more compact way. As I mentioned, the AT&T/Lucent team reported a productivity advantage over a C++ team of up to 10 - and we are talking about a 100+ person Lisp team compared to a much larger C++ team - both shipping products in the same domain: enterprise telco switches. We are talking about code that was measured in MLOCs.
Nowadays this is a bit more difficult, since there are large eco-systems like J2EE/JEE, which can easily dominate productivity.
> The point is that there is no language so far ahead of all the others, irrespective of what anybody may believe.
Lisp is still different from most other 'mainstream' languages in its ability to program itself easily, its style of interactivity and by directly supporting ways of linguistic abstraction. It's not so much about having a feature - Java has lambda expressions now - but it's about the whole integration and how it supports the developer. Java has fantastic IDEs, but they support a different workflow from how I develop code - and I have seen many Java developers attempting to interactively write code... it's not pretty.
We can use that to our advantage and we can shoot ourselves in the foot with it. But the fact that coding in Lisp and the resulting software looks radically different from Java development remains. It's not about 'being ahead' - it's more about supporting certain styles of development and certain styles of code well. I think it's absolutely okay when people prefer other tools and even that these are widely used - but I would have to use them differently and the result is different.
> I wouldn't expect that you can understand something like Axiom from 'looking' at it. Axiom is easily one of the most capable programming systems ever created.
Let me ask you a question - What do you mean by "looking at it"? I am getting into reading the code, finding the definitions, analysing what has been written and taking notes as to what various parts actually mean. This is in part dependent on the SPAD code that the Lisp code is supposed to implement.
In terms of that, Lisp is being used to provide a basic system and another more appropriate language to do your work in. In effect, providing the same services that C is used for. I have available to me a number of Lisp systems that were implemented in C.
In regards to C++, I find it a "monstrous" language and having Lisp be more productive than C++ is no surprise. But is Lisp more productive than all other languages?
> We are talking about code that was measured in MLOCS.
Then we must ask the salient question - How much of that code was actually necessary? One of the "features" of Lisp is the ability to write code generating macros. The code using those macro calls looks small, but in reality the code is many times the size that it appears to be. Okay, my question to you is this - instead of using macros, could the code have been restructured in other ways that would mean far less code being written?
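For what it's worth, the generated code is at least visible on demand, so its real size isn't hidden from anyone who asks; a quick sketch with standard macros:

  ;; one level of expansion; the exact result is implementation-dependent,
  ;; but it is typically a DO/BLOCK form several times the size of the call
  (macroexpand-1 '(dolist (x '(1 2 3)) (print x)))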
Lisp has its uses and a knowledge of the language enhances your ability to be a better programmer. This is true of all languages though. Each language provides an insight into a class of problems as a better solution language.
> Lisp is still different from most other 'mainstream' languages in its ability to program itself easily, its style of interactivity and by directly supporting ways of linguistic abstraction.
You use the word "mainstream" now. Too often it is declared that Lisp is better than "every" other language. It has its advantages, no problem with that at all. It also has its problems as a language. When the problems with the language are glossed over by the Lisp aficionados then Lisp loses the war.
> But the fact that coding in Lisp and the resulting software looks radically different from Java development remains.
No issue with this and this is true of other languages as well. I am no fan of Java.
> It's not about 'being ahead' - it's more about supporting certain styles of development and certain styles of code well. I think it's absolutely okay when people prefer other tools and even that these are widely used - but I would have to use them differently and the result is different.
I agree with you here. Different languages give rise to different styles of development and are applicable to different problem domains. I would really like to see our education systems do more to expose new generations to all sorts of languages, and to seriously encourage investigating all sorts of languages for different problem domains. But first and foremost, there needs to be a serious push toward properly documenting code.
See the table for the development time. Relational Lisp (a Common Lisp with support for relations as an extension) had by far the shortest development time... since the particular language has a high-level look&feel, there is also less documentation needed - it might be more like an executable spec.
> instead of using macros, could the code have been restructured in other ways that would mean far less code being written?
By a custom code generator, perhaps. But that's a tool on top of C++. Another stage. Another tool...
> Too often it is declared that Lisp is better than "every" other language
That's a nonsense belief. 'Better' is not a quality we can measure or even argue about.
Got broken? I think of it more as having failed to obtain/coordinate the resources needed to progress.
What it means to have a healthy language ecosystem has advanced. 1970's Prolog implementations couldn't standardize on a way to read files. 1980's CommonLisp did, but had no community repo. 1990's Perl did, but few languages then had a good test suite, and they were commercial and $$$$. Later languages did, but <insert-your-favorite-thing-that-we-still-suck-at>.
And it's not easy for a language to move on. Prolog was still struggling with creating a standard library decades later. CommonLisp and Python had decade-long struggles to create community repos. A story goes that node.js wasn't planning on a community repo, until someone said "don't be python".
The magnitude of software engineering resources has so massively ramped, that old-time progress looks like sleep or death. Every phone now has a UI layout constraint system. We knew it was the right thing, craved it, for years... while the occasional person had it as an intermittent hobby project. That was just the scale of things. Open source barely existed. "Will open source survive"? was a completely unresolved question. Commercial was a much bigger piece of a much smaller pie, but that wasn't sufficient to drive the ecosystem.
The Haskell implementation of Perl 6 failed because the community couldn't manage to fund the one critical person. It was circa 2005, and the social infrastructure needed to fund someone simply wasn't the practiced thing it is now.
And we're still bad at all this. The javascript community, for all its massive size, never managed to pick up the prototype-based programming skills of Self and Smalltalk. The... never mind.
It's the usual civilization bootstrap sad tale. Society, government, markets, and our profession, are miserably poor at allocating resources and coordinating effort. So societally-critical tech ends up on the multi-decade glacial-creep hobby-project-and-graduate-student installment plan. Add in pervasively dysfunctional incentives, and... it becomes amazing that we're making such wonderful progress... even if is so very wretchedly slow and poor compared to what it might be.
So did CL get broken? Or mostly just got left behind? Or is that a kind of broken?
You raise interesting history and it's a good thing to see the perspective as you've given.
I don't know if Common Lisp got left behind or just took a completely different path. From my perspective, it got broken with its macro-system decisions, its dynamic/static environment decisions and its namespace decisions. It created too many second-class citizens within the language, which means that you have to know far more than you should to understand any part of the programs you are looking at.
Every choice a language designer makes affects what the language will do in terms of programmer productivity, not only for the original developers of programs using that language, but also for all those who come later when maintaining or extending those programs.
I have come to the conclusion that a language can be a help when writing the original program and become a hindrance when you need to change that program for any reason. It is here that the detailed documentation covering all the design criteria and coding decisions, algorithm choices, etc, become more important than the language you may choose.
Both together will enable future generations to build upon what has been done.
All the points that you have highlighted above are important, but the underlying disincentive to provide full and adequately detailed documentation will work against community growth. No less today than in centuries past, knowledge gets hidden away: individuals are unwilling to pass on the critical pieces unless you are part of the pack, or don't think it important enough to write down because it is obviously obvious.
To understand a piece of Lisp code, one has to know what the special forms are and how they interact, what the macros being used are and what code they generate, and what the various symbols are hiding in terms of their SPECIALness. These things may help in writing the code, but they work against future programmers modifying it. Having had to maintain various code bases that I did not write, in quite a variety of different languages, I have found that "trickily" written code can become a nightmare when bringing about required changes. I have found that Lisp code writers seem to like writing "trickily" written code.
Now, that is only one person's perspective and someone else may find something completely different. That is not a problem as there are many tens of .... programmers in the world. Each one having a perspective on how to write good code.
Nod. I fuzzily recall being told years ago of ITA Software struggling to even build their own CL code. Reader-defined-symbol load-order conflict hell, as I recall. And that was just a core engine, embedded in a sea of Java.
> second class citizens
I too wish something like Kernel[1] had been pursued. Kernel languages continue to be explored, so perhaps someday. Someday capped by AI/VR/whatever meaning "it might have been nice to have back then, but old-style languages just aren't how we do 'software' anymore".
> detailed documentation covering all the design criteria and coding decisions
As in manufacturing, inadequate docs can have both short and long-term catastrophic and drag impacts... but our tooling is really bad, high-burden, so we've unhappy tradeoffs to make in practice.
Though, I just saw a pull request go by, adding a nice function to a popular public api. The review requested 'please add a sentence saying what it does.' :)
So, yeah. Capturing design motivation is a thing, and software doesn't seem a leader among industries there.
> enable future generations to build upon what has been done.
Early python had a largely-unused abstraction available, of objects carrying C pointers, so C programs/libraries could be pulled together at runtime. In an alternate timeline, with only slightly different choices, instead of monolithic C libraries, there might have been rich ecology. :/ The failure to widely adopt multiple dispatch seems another one of these "and thus we doomed those who followed us to pain and toil, and society to the loss of all they might have contributed had they not been thus crippled".
> To understand a piece of Lisp code [...struggle]
This one I don't quite buy. Java's "better for industry to shackle developers to keep them hot swappable", yes, regrettably. But an inherent struggle to read? That's always seemed to me more an instance of the IDE/tooling-vs-language-mismatch argument. "Your community uses too many little files (because it's awkward in my favorite editor)." "Your language shouldn't have permitted unicode for identifiers (because I don't know how to type it, and my email program doesn't like it)." CL in vi, yuck. CL in Lisp Machine emacs... was like vscode or eclipse, for in many ways a nicer language, that ran everything down to metal. Though one can perhaps push this argument too far, as with smalltalk image-based "we don't need no source files" culture. Or it becomes a "with a sufficiently smart AI-complete refactoring IDE, even this code base becomes maintainable".
But "trickily" written code, yes. Or more generally, just crufty. Perhaps that's another of those historical shifts. More elbow room now to prioritize maintenance: performance less of a dominating concern; more development not having the flavor of small-team hackathon/death-march/spike-into-production. And despite the "more eyeballs" open-source argument perhaps being over stated, I'd guess the ratio of readers to writers has increased by an order of magnitude or two or more, at least for popular open source. There are just so very many more programmers. The idea that 'programming languages are for communicating among humans as much as with computers' came from the lisp community. But there's also "enough rope to hang yourself; enough power to shoot yourself in the foot; some people just shouldn't be allowed firearms (or pottery); safety interlocks and guards help you keep your fingers attached".
One perspective on T(est)DD I like, is it allows you to shift around ease of change - to shape the 'change requires more overhead' vs 'change requires less thinking to do safely' tradeoff over your code space. Things nailed down by tests, are harder to change (the tests need updating too), but make surrounded things easier to change, by reducing the need to maintain correctness of transformation, and simplifying debugging of the inevitable failure to do so. It's puzzled me that the TDD community hasn't talked more about test lifecycle - the dance of adding, expanding, updating, and pruning tests. Much CL code and culture predated testing culture. TDD (easy refactoring) plus insanely rich and concise languages (plus powerful tooling) seems a largely unexplored but intriguing area of language design space. Sort of haskell/idris T(ype)DD and T(est)DD, with an IDE able to make even dense APL transparent, for some language with richer type, runtime, and syntax systems.
Looking back at CL, and thinking "like <current language>, just a bit different", one can miss how much has changed since. Which hides how much change is available and incoming. 1950's programs each had their own languages, because using a "high-level" language was implausibly heavy. No one thinks of using assembly for web dev. Cloud has only started to impact language design. And mostly in a "ok, we'd really have to deal with that, but don't, because everyone has build farms". There's https://github.com/StanfordSNR/gg 'compile the linux kernel cold-cache in a thrice for a nickle'. Golang may be the last major language where single-core cold-cache offline compilation performance was a language design priority. Nix would be silly without having internet, but we do, so we can have fun. What it means to have a language and its ecosystem has looked very different in the past, and can look very different in the future. Even before mixing in ML "please apply this behavior spec to this language-or-dsl substrate, validated with this more-conservatively-handled test suite, and keep it under a buck, and be done by the time I finish sneezing". There's so much potential fun. And potential to impact society. I just hope we don't piss away decades getting there.
My point about "understanding the code" and the burden of additional information to retain is about the semantics applicable to the language itself, not about the tooling that we have build around it for development.
Lisp started with some core simple ideas to which were added many others. For some like the dynamic scoping, simple idea that it is, it has complexity interactions with the rest of the language. These interactions increase the knowledge burden that must be retained at all times to be able to make sense of what you are reading. This burden is on top of any knowledge burden you need to carry in relation to the application you are modifying or maintaining.
This is about what are the things you design as part of your language, not the things you do with your language. This was what I was trying to somewhat humorously write in my first comment. As I look back over it, I failed to make that clear.
Lisp had the beginnings of "wow", but then it took a wrong turn down into a semantic quagmire. Scheme started to fix that and later Kernel was another attempt.
> failed to make that clear [...] the burden of additional information to retain is about the semantics applicable to the language itself, not about the tooling that we have build around it for development. [...] knowledge burden that must be retained at all times to be able to make sense of what you are reading
Not lack of clarity I think - it seems there's a real disagreement there. I agree about the burden, and the role of complex semantics in increasing it. But I think of bearing the burden as more multifaceted than being solely about the language. I think of it as a collaboration between language, and tooling, and tasteful coding. For maintenance, the last is unavailable. But there's still tooling. If the language design makes something unclear and burdensome, it seems to me sufficient that language tooling clarifies it and lifts the burden. That our tooling is often as poor as our languages, perhaps makes this distinction less interesting. But a shared attribution seems worth keeping in mind - an extra point of leverage. Especially since folks so often choose seriously suboptimal tooling, and tasteless engineering, and then attribute their difficulties to the language. There's much truth to that attribution, but also much left out.
Though as you pointed out, cognitive styles could play a role. I was at a mathy talk with a math person, and we completely disagreed on the adequacy of the talk. My best trick is "surfing" incompletely-described systems. His best trick is precisely understanding systems. Faced with pairs of code and tooling, I could see us repeatedly having divergent happiness. Except where some future nonwretched language finally permits nice code.
> These interactions increase the knowledge burden that must be retained at all times to be able to make sense of what you are reading. This burden is on top of any knowledge burden you need to carry in relation to the application you are modifying or maintaining.
That's why you have a Lisp where you can interactively explore the running program.
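For instance (a hedged sketch using only standard Common Lisp operators; my-fn and *my-var* are placeholders for whatever you are puzzling over, and the output is elided):

    (describe '*my-var*)              ; value, whether it's special, its documentation
    (documentation 'my-fn 'function)  ; the docstring, if one was written
    (apropos "MY-")                   ; search the running image for related symbols
    (trace my-fn)                     ; print every call to my-fn with its arguments
    (inspect *my-var*)                ; walk the object's structure interactively

Questions like "which binding applies here?" or "what does this actually hold right now?" can be asked of the live image instead of being carried in your head the whole time.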
> Lisp had the beginnings of "wow", but then it took a wrong turn down into a semantic quagmire. Scheme started to fix that and later Kernel was another attempt.
I think that's misguided. Lisp is not a semantic quagmire that Scheme or Kernel tried to fix. As a Lisp user, I find it great that someone is trying to revive fexprs, but practically it has little meaning for me.
Lisp is actually a real programming language, and its users have determined, and are still determining, what features it has.
> node.js's API and require() was based on CommonJS [1], and server-side JS was a thing since the Netscape times around 1996.
I'm sorry, I'm missing the point. Perhaps I should have said community code repository/database? CPAN, PyPI, npm.
> Prolog's API (and syntax) became a formal standard only in 1995, but Edinburgh Prolog was widely used way before
As was Quintus Prolog, the other big "camp". A SICStus (Edinburgh camp) description: "The ISO Prolog standardization effort started late, too late. The Prolog dialects had already diverged: basically, there were as many dialects as there were implementations, although the Edinburgh tradition, which had grown out of David H.D. Warren’s work, was always the dominant one. Every vendor had already invested too much effort and acquired too large a customer base to be prepared to make radical changes to syntax and semantics. Instead, every vendor would defend his own dialect against such radical changes. Finally, after the most vehement opposition had been worn down in countless acrimonious committee meetings, a compromise document that most voting countries could live with was submitted for balloting and was approved. Although far from perfect," [...] "contains things that would better have been left out, and lacks other dearly needed items" (https://arxiv.org/abs/1011.5640). The latter took more years.
Similar 'incompatible dialects' balkanization afflicted other languages. Common Lisp and RnRS were the Lisp equivalents of ISO Prolog. Acrimonious committees.
It's happily become increasingly hard to imagine. A general pattern across languages of nested continuums of incompatibility. No sharing infrastructure. Diverse tooling. Hand documentation of dependencies. Each company and school with its own environment, which had to be dealt with manually in any struggle to borrow code from them. Small islands of relative language compatibility around the many implementations, nested in less compatible camps, nested in a variously incompatible "language" ecology. More language family than language. To download is to port. And with no way to share the NxN effort, everyone else gets to do it again for themselves.
Perhaps an analogy might be Python 2 and 3 being camps that aren't going away, with largely separate communities (like OpenBSD and Linux), with no language test suites and variously incompatible implementations (CPython, Jython, IronPython). No BDFL, and an acrimonious committee process struggling to negotiate and nail to the floor a CommonPython standard. And far, far fewer people spread among it all, so each little niche is like its own resource-starved little language. Imagine each web framework community struggling to create/port its own Python standard library and tooling.
The opportunity to complain about left-pad was... like complaining about wifi slowness for your Kindle, at your seat at 30 thousand feet over the Atlantic. :)
Ah, ok. I was pointing out there that something we now take for granted, being able to "yarn add foo" (ask a build tool to download and install a foo package/module from the central JavaScript npm database, which collects the community's collective effort), didn't use to be a thing. It was once: "search (on mailing list archives, later on the web) for where some author has stashed their own hand-prepared foo.tar file (no semantic versioning originally) on some ftp server somewhere (hopefully with anonymous login; and reading the welcome message go by, often describing how/when they did/didn't want the server used; and often groveling over directory listings, and reading directory FILES and READMEs, to figure out which might be the right file); download it; check the size against the ftp server's file listing to see if it's likely intact (no checksums, and truncation wasn't uncommon); check where it's going to spray its files (multiple conventions); unpack it; read the README or INSTALL to get some notes on how it was intended to be built, and perhaps on workarounds for various systems; variously struggle to make that happen on your own system, including repeating this whole exercise for any dependencies; and then hope it all just works, because there are no tests, or only very minimal ones, to check that".
Python was originally like this. Then there were years of "we're trying to have a Perl-like central repository... again... but we're again not quite pulling it off as a community...". There's a story that the Python experience was the cautionary tale that motivated having a single officially-sanctioned npm code repository to serve node, instead of not having one and hoping it would all just work out. Using 1990s Python was a very different experience from using 2010s Python, far more so than the difference between Pythons 1 and 3. And the 2020s Python experience may become something where you can't imagine ever going back... to how you're handling Python now.
Sorry, I've only now come to read your post(s). I guess if one were to compare npm with anything, Java's Maven Central is named as a reference at multiple places in the npm source and on GitHub forums, and it is also the point of reference for CommonJS modules, since many of the early CommonJS/node adopters were people coming over from Java.
I know very well what downloading packages and patches used to be like in the 90s ;) and I think it was Perl/CPAN that nailed the idea of a language-specific central repository and automatic dependency resolution, though it was practiced in Debian (and possibly pkgsrc) before that. Not that I had much success using CPAN to resolve old .pm's; these seem to be lost to bitrot.