Common Lisp: The Untold Story (2012) (nhplace.com)
100 points by tosh on June 10, 2019 | hide | past | favorite | 30 comments



Interesting to hear an insider's perspective on the 'Lisp Wars'.

My outsider’s perspective, at the time, was disappointment. I really liked InterLisp-D on my Xerox 1108 Lisp Machine. That said, Xerox provided a Common Lisp implementation that still had the UI wonderfulness and I think they still supported DWIM. I now have about 35 years of frequent Common Lisp use/experience, and my latest project/product is written in Common Lisp (http://kgcreator.com).


"In 1992, Symbolics Inc. was doing poorly financially because it was continuing to sell hardware and its computers were not hitting the price point demanded by the market. A last-minute effort was underway to produce a software-only emulator of the architecture".

The emulator they built, Open Genera, was for the Alpha; after Symbolics went defunct it was ported to 64-bit Linux: https://static.loomcom.com/genera/genera-install.html


There's a Docker recipe for it too: https://github.com/sethm/docker-vlm


That repo is 3 years old. Does it work with more recent versions of Docker? (I'm on Windows right now, so I can't check, as it requires NFS on the host.)


By that point, people had figured out how to implement Common Lisp safely and efficiently on stock 32-bit hardware (workstations, then PCs), which of course was going to economically destroy the specialized machines.

Standardization of CL has also restrained the commercial Lisp implementation vendors, since they compete against standards-compliant free implementations.


> Change was happening to MACLISP very rapidly in the late 1970’s and early 1980’s. It was common for MACLISP users to read their mail only to find that some critical semantics in the language had changed and that it was time to update programs they had already written to accommodate the new meaning.

> Jokes were made about the frequent and extreme nature of the changes.



I'd be curious to see more on the advantages of using raw TeX instead of LaTeX. It feels like that should be the other way around, but it didn't surprise me either.


I'm an '82 PhD mathematician who has used TeX and LaTeX since near their beginnings, to typeset math papers. There was a long period when TeX was essentially instantaneous, while LaTeX was plodding. Also, if one pried into various LaTeX packages, there was a general tone "User won't do this for themselves or even know we're doing this, so let's also clean up this related situation" that felt like coming home to find a team of house cleaners crawling up your chimney with toothbrushes. Directing mathematicians is like herding wolves; this didn't sit well with everyone.

Struggling to agree on TeX vs LaTeX for collaborative projects, I ended up writing a few pages "begin.tex" that at least managed automatic theorem numbering and reference handling, and a few other basics, so we could live without LaTeX. This too was essentially instantaneous, and spread from collaborator to collaborator for a few years until machines got faster.
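For the curious, a few lines of plain TeX in the spirit of such a "begin.tex" might look like this (a sketch; the macro names here are invented for illustration, not taken from the actual file):

```tex
% Automatic theorem numbering in plain TeX (hypothetical sketch).
\newcount\thmno                 % running theorem counter
\def\thm#1{%                    % \thm{label} starts a new numbered theorem
  \advance\thmno by 1
  \expandafter\xdef\csname thm@#1\endcsname{\the\thmno}%
  \medskip\noindent{\bf Theorem \the\thmno.}\enspace}
\def\thmref#1{\csname thm@#1\endcsname}  % \thmref{label} expands to its number
```

This handles backward references in one pass; forward references would need writing labels to an auxiliary file and rereading it on the next run, which is essentially what LaTeX's \label/\ref machinery does.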

Speed matters: Look at the Vim / Neovim split. Some found the Vim delays acceptable, while others had a vision of the future that required far more sophisticated evolutions to not impose user delays. Vim was forced to catch up.


I thought he was working at a time when there was no LaTeX. (Of course, some people do like plain TeX more.)


I think it is strange.

LISP people hated Common LISP circa 1990 for various reasons. Most users were still stuck on 16-bit micros on which LISP was tough to fit, and Common LISP didn't make it any easier.

LISP people look back at Common LISP with nostalgia now since it was so much better than Blub languages like Java, Python, Haskell, Go, Rust, ...


I hated it at first because it had lexical scoping by default and I was used to dynamic scoping. I don't want to be unkind to my former self (MFS), but MFS was pretty stupid.
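The distinction can be sketched in Python (not Lisp; the little binding stack below is a made-up stand-in for MACLISP-style special variables, which in a real implementation would also be unwound on scope exit):

```python
# Lexical scoping (what Common Lisp adopted by default): a closure
# captures the binding visible where the function is *defined*.
def make_adder(n):
    def add(x):
        return x + n          # n refers to make_adder's n, lexically
    return add

add5 = make_adder(5)
n = 100                       # a different, global n
print(add5(1))                # -> 6, not 101

# Dynamic scoping (older Lisps): the most recent binding on the call
# stack wins. Python has no dynamic variables, so emulate with a stack.
_stack = [{"n": 0}]

def dlet(name, value):        # push a dynamic binding
    _stack.append({**_stack[-1], name: value})

def dget(name):               # look up the innermost binding
    return _stack[-1][name]

def dyn_add(x):
    return x + dget("n")      # n is resolved at *call* time

dlet("n", 5)
print(dyn_add(1))             # -> 6
dlet("n", 100)
print(dyn_add(1))             # -> 101: the caller's binding changed the result
```

With dynamic scope, what a free variable means depends on who called you; that is exactly the behavior someone raised on MACLISP would have expected from eval.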


That was it. I wrote my own Eval on Symbolics, just because I truly did not understand the scoping issue.


I suspect that some people disliked Common Lisp because they worked at institutions where they were able to develop and support their own Lisps, and ANSI CL was foisted on them: "party is over, make it compatible with this honkin' document".


Also internationally, since CL was mostly a US effort, and Europe/Japan were trying to have independent or international implementations/standards of Lisp.


I wonder what they thought the point of that was.


In the document he mentions that the email latency between EU/Japan and the US was 2-3 days because of the number of UUCP hops the mail needed to make. He cites this as a reason why it was impractical to collaborate.

Perhaps those issues persisted, and by the time they were gone the separate culture was already entrenched?


What is the point of international standards with international cooperation?

Is that the question?


No, the point of having their own Lisp, not using the US standard. Was it a hedge against Lisp becoming hugely successful and helping their computer industries?


Pitman covers this in his article: communication back then between Europe and the US was much more difficult. Sending UUCP emails meant at least a 48-hour round trip, so the US committee basically couldn't be bothered cooperating with their European counterparts. The Europeans - at least as Pitman imagines it - got upset and did their own thing.


> No, the point of having their own Lisp, not using the US standard

Why not develop according to US standards?

> Was it a hedge against Lisp becoming hugely successful and helping their computer industries?

That was the plan? What went wrong?


Interesting that he didn't cover the aftermath of his own mistakes in the standard. He was the best person to write them down and defend them publicly later. But he also contributed a chapter of his own: the NES (New Error System), the new condition system, which was by far the worst part of the standard. Almost nobody uses it; it's way over-architected. All the rest is very solid; people only complain that threads and FFI are missing from the standard.


I think that the condition system is one of the better parts of Lisp, and miss it in every language where the libraries I use unwind the stack to provide me an error.

It’s true that it’s not used too much in small programs, because small programs don’t need it. But in large systems, it’s invaluable.
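The property being praised here can be sketched in Python (a toy, not a real library; the names `signal`, `Restart`, and `use-value` are invented for this illustration): the handler runs while the low-level frame is still live and picks a restart, so the computation resumes instead of unwinding and losing its state.

```python
# Toy sketch of CL-style conditions/restarts. In Common Lisp this is
# handler-bind + restart-case; here a raised Restart plays invoke-restart.
class Restart(Exception):
    def __init__(self, name, value=None):
        self.name, self.value = name, value

_handlers = []

def signal(condition):
    for handler in reversed(_handlers):
        handler(condition)            # a handler may raise a Restart

def parse_entry(text):
    try:
        return int(text)
    except ValueError:
        try:
            signal(f"malformed entry: {text!r}")
        except Restart as r:
            if r.name == "use-value": # resume with a replacement value
                return r.value
        raise                         # no handler chose a restart: unwind

def parse_log(entries):
    return [parse_entry(e) for e in entries]

# High-level policy, far from the parse site: substitute -1 and keep going.
def use_value_handler(condition):
    raise Restart("use-value", -1)

_handlers.append(use_value_handler)
print(parse_log(["1", "oops", "3"]))  # -> [1, -1, 3]
```

With ordinary exceptions, by the time the top level sees the error, the loop in `parse_log` is gone; here the policy and the recovery point are decoupled, which is the part large systems miss.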


Conditions are invaluable, but not Ken's. It suffers from the same problem as my stderror AutoLISP library: over-architecture. It should be slim and small, not big and fragile. CLOS is small and easy compared to the condition system.

scheme did it right


> CLOS is small and easy compared to the conditions

CLOS is much more complex than conditions.

> scheme did it right

Not at all.

Scheme provided no error handling in the language up through R5RS, and then had zillions of different implementation-specific extensions, many of them influenced by Common Lisp's condition system.

Compare:

https://www.gnu.org/software/mit-scheme/documentation/mit-sc...

https://docs.racket-lang.org/reference/exns.html

https://wiki.call-cc.org/man/4/Exceptions

https://www.scheme.com/tspl4/exceptions.html

For R7RS-large they are still figuring out what to do.

https://bitbucket.org/cowan/r7rs-wg1-infra/src/default/Excep...


Indeed, right through R5RS, Scheme specified that certain situations in the processing of a Scheme program constitute an "error", without explaining what that word means, or any mechanism to catch such a thing, recover from it, or generate it on purpose (without provoking an erroneous condition). A conforming R5RS Scheme implementation can crash or abort upon encountering an error.


> but he also added a chapter by himself

He was chair of the ANSI CL subcommittee for error handling.

> far the worst part of the standard

That's a very very rarely heard complaint.


> All the rest is very solid

Logical pathnames are very solid?

Let's also talk about the complex types and their operations. Their specification has a big internal inconsistency.

Oh, and read strictly, the subtype relationship is undecidable, which makes array element upgrading impossible to implement in a fully conforming way.

There are also all those places where the standard does not define what happens. Why are some operations required to signal errors in safe code when argument type constraints are violated, but not others? This lack of specificity is the opposite of "solid".


OK, logical pathnames are not solid. Agreed.

Types and subtypes were a very late decision and not in CLtL2 I believe.

That lack of exact error descriptions is present in most other standards too, and many behaviors are implementation-defined. CL set the first standard there with its strict formalization of even the most mundane forms. Still very solid to me. I'm still fighting daily with the C standards, which should know better by now.


So, conditions are the least solid part of the standard, if we are allowed to make excuses for and ignore any other part.

Ok then.



