my k interpreter[0] is less than 1000 lines of JS. Big compared to many on this list, but still human-scale. Looking back on it I see many opportunities left to simplify.
further back, I wrote a compiler for part of the language described in Dijkstra's A Discipline of Programming in a dialect of forth[1].
one of my colleagues, Stevan Apter, has a number of interesting toy interpreters on his site[2], particularly concatenative languages.
Omg, it's you! I want to say a huuge thank you. Mako VM was something that unified my love for forth, game consoles and writing interpreters, and I've learned SO MUCH by studying your well-written source code. You can't even imagine what a profound influence you've had on my life :D thank you again ^^^
I wish more information existed on the array family (and with less "condensed" syntax :)).
I'm building a relational language and have tried to borrow some ideas, but apart from the very basics there isn't much info out there (and most of it is about numerics). For example, what about CRUD operations? Lazy evaluation?
The condensed "syntax" (notation) is actually a feature! Read "Notation as a Tool of Thought" to learn more, it's actually one of the best parts of APL as a whole.
Indeed, as someone who's learning a bit of APL on the side I'd love to have a tool that can "explain" a line of APL on demand, e.g. show the parse tree, name built-in functions and operators, expand trains, possibly name common idioms.
I find the concepts in APL very simple. The difficulty for me is in becoming comfortable reading it. Such a tool would make it quick and easy to confirm one's initially very tentative guesses as to what a line means, and gradually you'd come to depend less on it as you get more confident that your initial understanding is right.
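Not a real tool, but here is a minimal Python sketch of the lookup half of such an explainer. The glyph glossary is hand-written and incomplete, and a real explainer would also need to parse the expression, show the tree, and expand trains:

```python
# Toy glossary mapping a few APL glyphs to their names.
# Hand-written and incomplete; a real explainer would also
# parse the expression and recognize idioms.
GLYPHS = {
    "⍳": "iota (index generator)",
    "⍴": "rho (shape / reshape)",
    "⌽": "reverse / rotate",
    "+": "plus / conjugate",
    "/": "reduce (operator) or replicate",
}

def explain(line):
    """Name every known glyph in an APL line, left to right."""
    return [ch + "  " + GLYPHS[ch] for ch in line if ch in GLYPHS]

for note in explain("+/⍳10"):   # sum of the first ten integers
    print(note)
```

Even this much would cover the "confirm my tentative guess" use case for simple lines; the hard part is the parse tree and the idiom library.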
And not that I'd recommend using proprietary software, but something that might be helpful for you regarding naming: Dyalog APL has a top bar with every character used in modern APL, their names, and (I think; I've never actually used the system) an example of how to use each.
Thanks for the link. Yes, I've seen other idiom libraries as well, and what I'd like to see is automatic lookup of those idioms, with a name, and optional further breakdown.
It’s a wonderful book. The “sequel” Predicate Calculus and Program Semantics by Dijkstra and Scholten gives a more rigorous treatment of the ideas presented in A Discipline of Programming. Both are fantastic.
I've read the book (and almost everything Dijkstra's written barring a few EWDs I couldn't find), I just wasn't aware that 'RodgerTheGreat had done anything related to it.
Nice to see a fellow read-all-the-EWDs buddy. I have to skip the Dutch language ones, but I think I've read all of the English ones too, barring the ones that aren't on the UT Austin site. So much good stuff.
I think the ones which can self-compile are the most interesting, because they pass the "it's not just a trivial example anymore" point. Doing that in a single, relatively small file also makes them convenient to study.
Mini-C is even more amazing if you look at its description:
I set myself a challenge: write a self-hosting C compiler in 10 hours.
Although the language it accepts, being untyped, is closer to B.
Also nice to see my personal favourite, C4, in the list.
Somehow I still get those chills while reading the source code of programming-language interpreters and compilers. I guess it's like getting to the origins of life, or something.
It is. As a kid, I kind of understood programs, but I could not even imagine how an interpreter (or compiler) could work. I partly got a CS degree just so I could learn the answer.
Incredibly similar to my story. I did not even know how to program, but the mere idea of something that took code in and spit executables out was completely magic to me. I absolutely had to know how they worked.
Forth is a great language, simple syntax, (relatively) simple to implement, has a repl, easily extendable and fast.
Chuck Moore was/is really onto something.
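For anyone who hasn't tried it, the "simple to implement" part is easy to see: the core of a Forth-like evaluator is just a stack, a dictionary of words, and whitespace splitting. A toy sketch in Python (not a real Forth: no compile mode, no defining words, no memory access):

```python
def forth(source):
    """Evaluate a whitespace-separated, Forth-like program; return the stack."""
    stack = []
    words = {
        "+":    lambda: stack.append(stack.pop() + stack.pop()),
        "*":    lambda: stack.append(stack.pop() * stack.pop()),
        "dup":  lambda: stack.append(stack[-1]),
        "swap": lambda: stack.extend([stack.pop(), stack.pop()]),
        ".":    lambda: print(stack.pop()),
    }
    for token in source.split():
        if token in words:
            words[token]()            # execute a known word
        else:
            stack.append(int(token))  # everything else is a number
    return stack

print(forth("2 3 + 4 *"))   # → [20]
```

The parser is literally `str.split`; all the interesting work happens in the dictionary, which is exactly why the language is so easy to extend.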
Most of the software written in Forth runs on embedded systems, not user-facing ones.
OpenFirmware is based on forth, and was used instead of BIOS for Sun workstations, some Powermacs, and the OLPC, among other devices. On the OLPC it offers a nice debugging interface and educational opportunity.[0]
The Wikireader "wikipedia appliance" used forth for their testing harness during manufacturing, and it's possible to write your own applications for the device using it.[1]
Forth has been used for control software for a variety of aerospace applications.[2]
I have personally used Forth as an interactive testing environment for bringing up new hardware. In one case, I prototyped software for the power management chip in an embedded Linux device.
I've also written games and programming environments in Forth, both for fun and for use in educational after-school programs I helped organize. For example, a logo environment[3,4] and a rewrite of Yar's Revenge[5,6].
Not to mention, 'LessDmesg, your statement on there not being a "decent text editor" written in Forth is absolutely unfounded. Arguably the best text editor of all time was written in Forth: the Canon Cat.
Created by Jef Raskin, creator of the Macintosh and perhaps the most influential UI designer in the entire world.
Nope, some obscure embedded or toy stuff doesn't count as real software that matters to people. Even Haskell, bless its academic ivory-tower soul, has real-time video games, IDEs and neural networks, not to mention tons of websites, written in it. And Forth has what, some device drivers and hobby/learning stuff? Point amply proven, thank you.
Did you look at some of those links?
Please don't confuse 'popularity' or user numbers as being the measurement of what is 'real software'. I'm sure anyone involved in any of the aerospace projects would argue the Forth software used matters a great deal and is very real!
You're kidding, right? There was a time when a good chunk of the world ran on Sun... even PowerMac was not such a small name until recently.
I'd like to see how well your video game runs when the software responsible for configuring and interfacing with your hardware and booting your OS is missing; tell me then how much that software doesn't matter. Have you ever tried to actually write any of this type of software that you're so casually dismissing as not "real"?
The LLVM Kaleidoscope tutorial [0] builds up to a single file implementation. It's a very good introduction. I'd also like to offer my winning Brainfuck compiler + direct threaded VM implemented in Ethereum assembly [1].
If you want to do a really poor, minimal job of implementing something that processes and executes programs, you will find that most of the complexity will rest in handling syntax. Any reductions in that complexity will have a big relative impact on trimming down the code and amount of work.
Now you can duck out of implementing solid semantics without changing what the language looks like. For instance, programs in a language with broken scope can look superficially like programs in a language with working lexical scope. Or a language that works by interpreting a syntax tree using a few lines of code looks and works just as well as a compiled one, just slower.
But if you don't negotiate about what a language looks like, there is only so much simplification you can do while still correctly handling the syntax which allows the language to look like that.
Further reductions in complexity require simplifications in syntax (how the language looks), which leads toward one of several existing well-understood design families for minimal syntax: Lisp-like, Forth-like, APL-like.
I've implemented things with "Lisp syntax" a couple of times, neither of which had anything whatsoever to do with Lisp. I used to call it the "Lisp non-syntax".
Nowadays I usually bite the bullet and just use JSON or something, but if you want to implement an interpreter starting just from bare minimum string functions pretending you're in 1980 and can't just grab half-a-dozen serialization formats out of your language's package manager (not a bad thing to practice!), Lisp-non-syntax is still a pretty good choice.
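To make that concrete, here is roughly what "starting from bare minimum string functions" looks like in Python: a complete reader for the Lisp non-syntax in about a dozen lines. (A sketch; a real reader would also handle strings, comments, and quote.)

```python
def tokenize(src):
    """Split source into tokens using nothing but basic string functions."""
    return src.replace("(", " ( ").replace(")", " ) ").split()

def read(tokens):
    """Consume tokens, returning one atom or a nested list."""
    tok = tokens.pop(0)
    if tok == "(":
        lst = []
        while tokens[0] != ")":
            lst.append(read(tokens))
        tokens.pop(0)               # drop the closing ")"
        return lst
    return tok                      # everything else is an atom

print(read(tokenize("(define (square x) (* x x))")))
# → ['define', ['square', 'x'], ['*', 'x', 'x']]
```

That's the whole appeal: nesting comes for free, and every other data format you might serialize can ride on top of it.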
Well, most mainstream languages are dialects of Lisp (though usually with syntax, records, methods, and sometimes static typing), so if you strip them down far enough you get Lisp. Other reasonably usable minimal programming languages do exist, like the lambda-calculus, the pi-calculus, Abadi and Cardelli's object calculus, the ur-Forth, and the ur-Smalltalk, but they're not as familiar.
Garbage collection was developed for Lisp.
That way, any language with GC borrows directly from Lisp.
Lisp introduced the notion of Symbols, which is just String Interning. That way, any language with String Interning borrows from Lisp.
REPL-based development was Lisp's forte. Any language with a REPL borrows directly from Lisp.
Anonymous functions are a feature of Lisp since 1958. Any language with Anonymous functions is a descendant of Lisp.
I could go on, but we are not exactly talking about syntactical lookalikes; these are semantic lookalikes too. Java, C# with LINQ, Python, Perl, Ruby, Julia, Common Lisp, Clojure, the ML family, Rust's hygienic macro system (from Scheme), C++ anonymous functions... almost everything takes a lot of features that originated in Lisp.
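The symbols/interning claim is easy to demonstrate in Python, where `sys.intern` exposes the same mechanism: one canonical object per distinct spelling, so "same symbol?" collapses to a cheap identity check. (The strings below are built at runtime with `join` so that CPython's automatic interning of identifier-like literals doesn't muddy the result.)

```python
import sys

# Build two equal strings at runtime so they start out as
# distinct objects, then intern both.
a = "".join(["foo", "-bar"])
b = "".join(["foo-", "bar"])
print(a == b, a is b)          # equal in content, but two separate objects

a, b = sys.intern(a), sys.intern(b)
print(a is b)                  # → True: one shared "symbol" now
```

This is exactly why symbol comparison in a Lisp is a pointer comparison, not a string walk.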
>Garbage collection was developed for Lisp. That way, any language with GC borrows directly from Lisp.
The English language borrows a lot of things directly from other languages. That doesn't mean it is "a dialect of" any of them, which was the claim being objected to–that "most mainstream languages are dialects of Lisp".
Programming languages are not natural languages. The usual definition (which is loose) of a dialect in natural languages does not actually apply on "programming languages", which I repeat do not even classify as natural language.
Comparing our natural definition of a "dialect" to a PL definition of a dialect is erroneous.
I will not defend the parent's use of the word dialect any more, and I think "descendant" is the right word to use.
More like some English users claiming that their language is a Lisp dialect. Actual Lisp users then find that controversial, and the claimants get attacked as having no clue.
To put this in context, while I've never written a program in Common Lisp, ZetaLisp, Clojure, or Scheme as anything other than an exercise, I've been writing elisp for a quarter century (though casually, never having written a new major mode), I've written a self-compiling compiler in a subset Scheme targeting i386 assembler and a raytracer in Clojure. I've used Python and JS on a daily basis for about 20 years.
So, does that make me an "actual Lisp user" or not? I think so, although clearly if you don't think Python and JS are Lisps, the case is weaker. I feel like probably anyone who's written a self-compiling compiler in Scheme qualifies as an "actual Lisp user", though. Of course, there are people who debate whether Scheme is really a Lisp, but I think that's a sufficiently fringe viewpoint that we can simply ignore it.
Maybe you should talk for yourself, and not set it up that 'we' does not include me or others with different ideas about what a language is.
Basic rule: if a thing has Lisp in its name, there is a good chance that it is actually a Lisp. If it sets up its own name, community, etc., then it's probably reached a state where it is its own language.
Oh, you think there should be an actual counterargument to the proposition that Scheme is "not a Lisp"? Well, okay.
This is of course a purely semantic discussion, about the meaning of the word "Lisp" (and "Scheme", if you like). Words are defined by their community of users, and you are of course free to use them however you like, at the risk of talking nonsense and tricking yourself into logical fallacies if you adopt an inconsistent meaning, or failing to communicate if you adopt a non-mainstream meaning. So let's consider whether the mainstream meaning of the word "Lisp", that is, the usage of the word among Lisp experts, includes or excludes Scheme.
This runs into the difficulty of determining who "Lisp experts" are, since that depends on what "Lisp" means; we could run into a schism like the Protestant/Catholic schism, in which each side considered themselves to be the guardians of, and experts on, the true meaning of Christianity. To avoid this problem, presumably we can all agree that at least the people who wrote the Common Lisp specification or major contributions to it, who are credited as major contributors to "Lisp" in the Common Lisp standard, and who have published well-regarded books about Common Lisp, are "Lisp experts", independent of the results of our inquiry about Scheme. (I am taking the liberty of supposing that your comment above implies that Common Lisp is a Lisp, despite its departure from traditional Lisp features such as fexprs, fsubrs, and purely dynamic scoping, which were present in the first version of Lisp I learned.)
I assert that it includes Scheme, because Lisp experts almost always use the word "Lisp" in the sense of a family of related languages including Scheme, only occasionally deviating from this use, mostly in informal discourse. Evidence:
The preface for R3RS http://people.csail.mit.edu/jaffer/r3rs_1.html#SEC1 says, "Scheme is a statically scoped and properly tail-recursive dialect of the Lisp programming language invented by Guy Lewis Steele Jr. and Gerald Jay Sussman." This is signed by Jonathan Rees and William Clinger (Editors), Hal Abelson, R. Kent Dybvig, Christopher T. Haynes, Guillermo J. Rozas, Norman I. Adams IV, Daniel P. Friedman, Eugene Kohlbecker, Gerald Jay Sussman, David H. Bartley, Robert Halstead, Don Oxley, Mitchell Wand, Gary Brooks, Chris Hanson, and Kent M. Pitman, who chaired the error-handling subcommittee of the ANSI Common Lisp committee, was the Project Editor, and prepared the Common Lisp HyperSpec.
R4RS and R5RS have the same summary statement and add Guy Steele, who wrote Common Lisp, the Language and was originally the vice-chair of the ANSI Common Lisp committee, as a signatory. R6RS has the same statement but a different committee: Michael Sperber (Mr. Preprocessor), Dybvig, Matthew Flatt, Anton van Straaten, Richard Kelsey, Clinger, Rees, Robert Bruce Findler, and Jacob Matthews. R7RS-small has the same summary statement; its signatories are Alex Shinn, John Cowan, and Arthur A. Gleckler (Editors), Steven Ganz, Alexey Radul, Olin Shivers, Aaron W. Hsu, Jeffrey T. Read, Alaric Snell-Pym, Bradley Lucier, David Rush, Sussman, Emmanuel Medernach, and Benjamin L. Russel.
Moreover, the Common Lisp HyperSpec itself calls Scheme a dialect of Lisp:
> Lisp is a family of languages with a long history. … One of the most important developments in Lisp occurred during the second half of the 1970’s: Scheme. Scheme, designed by Gerald J. Sussman and Guy L. Steele Jr., is a simple dialect of Lisp whose design brought to Lisp some of the ideas from programming language semantics developed in the 1960’s. Sussman was one of the prime innovators behind many other advances in Lisp technology from the late 1960’s through the 1970’s. The major contributions of Scheme were lexical scoping, lexical closures, first-class continuations, and simplified syntax (no separation of value cells and function cells). Some of these contributions made a large impact on the design of Common Lisp.
Similarly, Richard P. Gabriel's (and Pitman's) seminal "Technical Issues of Separation in Function Cells and Value Cells" https://www.dreamsongs.com/Separation.html explains:
> In this paper, we shall refer to two abstract dialects of Lisp called Lisp₁ and Lisp₂.
> Lisp₁ has a single namespace that serves a dual role as the function namespace and value namespace; that is, its function namespace and value namespace are not distinct. In Lisp₁, the functional position of a form and the argument positions of forms are evaluated according to the same rules. Scheme [Rees 1986] and the language being designed by the EuLisp group [Padget 1986] are Lisp₁ dialects.
> Lisp₂ has distinct function and value namespaces. In Lisp₂, the rules for evaluation in the functional position of a form are distinct from those for evaluation in the argument positions of the form. Common Lisp is a Lisp₂ dialect.
Gabriel is nowadays perhaps better known for writing "Worse is Better", but he also wrote the book on Lisp benchmarking and was a major contributor to the Common Lisp committee, the above-excerpted essay being only one of his contributions.
Paul Graham, who wrote On Lisp (recommended by the manual of SBCL, currently the most popular implementation of Common Lisp) and the Prentice Hall ANSI Common Lisp book, wrote a Lisp FAQ http://www.paulgraham.com/lispfaq1.html which says:
> What is Lisp?
> Lisp is a family of programming languages descended from a language John McCarthy invented (or more accurately, discovered) in the late 1950s. The two main dialects now are Common Lisp and Scheme. We're working on a new dialect called Arc.
So, in arguing that Scheme is not a Lisp, you are opposing yourself to not only every version of the Scheme standard (written by many Lisp luminaries) but also every version of the Common Lisp standard, and to Gabriel, Graham, Steele, Sussman, and — mostly — Pitman.
But consider this position paper by Pitman from 1994, some years after the Common Lisp spec, http://nhplace.com/kent/PS/Lambda.html where he deeply considers what it means for a language to be "a Lisp":
> I learned Lisp as merely a programming language.
> But as I watched, it began to evolve. And I came to view it more as a space of languages, unified by a set of common design principles--a terrain upon which one could move freely among certain camps and still be within the warm and friendly confines of a larger community called Lisp.
> Lately, however, that terrain seems rougher than I once had thought--perhaps indeed rougher than it once actually was.
...
> “If they're all Lisps, presumably they're all built around some common core. Right?”
> Not necessarily. ...
> IF and COND? Well, some Lisps offer only one or the other and they differ as to which is primitive. And in dialects where IF exists, there is disagreement about whether it takes one, two, or many arguments--or whether the alternative starts in the third position or is introduced by a keyword. In some dialects of Scheme, COND takes the unusual but popular “=>” syntax.
...
> The Dylan language, which some (myself included) might say is in the Lisp family--or at least descended from it--chose to abandon Lisp's traditional heavily parenthesized notation in favor of a more traditional syntax. ...but probably no one would deny that there are aspects of Lisp more important than parentheses...
> It is a mistake to assume that a naming similarity (in this case "purple" and "light purple") automatically implies a close functional relationship, or that an absence of naming similarity implies no such close relationship...
> In standardization efforts for Common Lisp (by ANSI's X3J13 in the US) and ISLISP (worldwide, through ISO's SC22 working group, WG16), explicit decisions were made not to attempt to standardize "Lisp." By informal agreement among these parties, "Lisp" is considered the language family, not of any particular language. ...
> Sometimes it's best for us all to act as a single body--when we have a common need or when we can help each other on our separate needs by acting as one body. On those occasions, it might be to the advantage of some or all of us to view Scheme and Dylan as members of the Lisp family. At other times, it's best for us to act independently, to avoid stepping on each other's toes. On those occasions, not only might Scheme and Dylan not be Lisps, but it might be important even to say that Common Lisp and ISLISP are sufficiently distinct that it is better to treat them as non-overlapping languages.
I've also found exceptions in, for example, the Racket documentation, even though it shares a couple of authors with R7RS; at best its use equivocates between describing Scheme (and Racket) as being a Lisp and setting them up as alternatives; first, it implicitly describes Scheme as a Lisp:
> We thought Scheme’s macro system would help us experiment with language designs. The language also appeared to be a perfect match for constructing a simple interactive development environment (IDE); after all, many Lisp courses taught how to create a read-eval-print loop, and a lot of emacs was written in Lisp.
But then, later, it describes Racket, Scheme, and "Lisp" as three separate languages (and of course Racket, Scheme, Common Lisp, NewLisp, ISLISP, Lush, and so on are all separate languages):
> Racket is also a member of the Lisp family, ... Like Lisp, a Racket function application is just a pair of parentheses around the function and its arguments[.] ... In sum, Racket’s toolbox empowers programmers to create new languages quickly and thus enables language-oriented program design. The key to this achievement is to improve over Lisp and Scheme’s approaches: Racket carefully stages syntax elaboration (Flatt 2002), eliminating Lisp’s problematic eval-when-where approach...
The comp.lang.lisp FAQ clarifies that the newsgroup has been taken over specifically by Common Lisp users, carefully distinguishing the Common Lisp language from the larger family of Lisps:
> 1.1 What is the purpose of comp.lang.lisp?
> The charter… states that the purpose of comp.lang.lisp is "Discussion about LISP". …newsgroups' purposes evolve, as do names. Firstly, the newsgroup has evolved such that the main topic of discussion is ANSI Common Lisp, though discussion about other lisp variants is welcome within reasonable bounds.…
> 1.2. What is on-topic on comp.lang.lisp?
> Discussion of the language defined by the ANSI Common Lisp standard is definitely on-topic on comp.lang.lisp. ...
> Since the Lisp community is remarkably long-lived, discussion of the history and evolution of Lisp tends also to be welcomed, or at least tolerated; discussion of non-standard lisps (though generally not Scheme or Emacs Lisp) is also accepted. …
> 1.3. What is off-topic on comp.lang.lisp?
> Questions about Scheme, Emacs Lisp and AutoLisp tend not to be terribly welcome, as they have their own fora in the comp.lang.scheme, comp.emacs and comp.cad.autocad newsgroups. …
> 1.4. Is Scheme a lisp?
> Scheme is a member of the greater family of Lisp languages, assuming that is considered to include others like Dylan and Emacs Lisp. The design of Scheme predates the ANSI Common Lisp standard, and some CL features such as lexical scoping may be considered to have been derived from Scheme.
In conclusion, in most contexts and most places, Lisp experts describe Scheme without further comment as just another Lisp dialect, just like Common Lisp, Emacs Lisp, AutoLISP, or ISLISP. But, in the same paper where he uses Scheme as an example of the nonuniformity of COND semantics across Lisp dialects, Pitman points out that sometimes it's useful to emphasize that Common Lisp, Scheme, and ISLISP are different programming languages, in order to discourage people from trying to unify them under a single standard; and so occasionally Lisp experts do use the term "Lisp" in a way that excludes Scheme, either through carelessness or, as Pitman says, for political reasons.
(And the vicious band of aggressive lowlifes led by Erik Naggum drove discussion of non-Common-Lisp variants out of comp.lang.lisp by harassing into submission anyone who disagreed with them, largely destroying the Common Lisp community in the process; but they are careful to clarify that Scheme is just as much a Lisp as Emacs Lisp and Dylan are.)
Given that your opinion on what "a Lisp dialect" is differs so manifestly from the consensus of experts in the field, I think we should give your opinion no weight whatsoever when considering my arguments above about why most modern programming languages should be considered dialects of Lisp.
> Scheme is a statically scoped and properly tail-recursive dialect of the Lisp programming language invented by Guy Lewis Steele Jr. and Gerald Jay Sussman.
Note that at the time Scheme was invented:
1) It had lists terminated by the nil symbol, probably because
2) It was implemented as a sublanguage hosted in a Lisp system.
those kinds of things are not true of Scheme today.
Scheme may have been a Lisp when it was first conceived, but it's a different language now.
None of the things I'm quoting there date from that epoch of Scheme, except possibly RPG's Lisp₁/Lisp₂ essay. The first Scheme compiler was Steele's RABBIT, published in 1978; though it compiled Scheme to MacLISP, it certainly had the freedom to represent lists as it saw fit. The first native-code Scheme compiler was probably Rees' T, in I think 1982, described by Paul Graham as "one of the best Lisp implementations" in http://www.paulgraham.com/thist.html while R7RS-small is from 2013.
Certainly Scheme is not the same language today as it was in 1976 (or 1986 or 1996), but no language in the Lisp family is the same now as then. SBCL or CLISP isn't going to be able to run unmodified SHRDLU or MACSYMA either.
(Note that Shivers, in the post Paul is quoting there, clearly considers the Scheme that Rees implemented in T to be "a Lisp" as well, just like Common Lisp, MacLISP, Franz Lisp, NIL, Emacs Lisp, Zetalisp, and InterLISP, all of which are mentioned.)
As demonstrated above, Sussman's involvement in steering the Scheme language design continues to the present day, while Steele's involvement continued until at least R⁵RS (published in 1998), so I don't think there's any reason to think that the ANSI Common Lisp standard (published in 1994) is referring specifically to the first versions of Scheme in 1976 — particularly given that it was written by Pitman, signatory to R³RS, R⁴RS, and R⁵RS, which say (by way of explaining their subject matter) "Scheme is a statically scoped and properly tail-recursive dialect of the Lisp programming language". Moreover, certainly Pitman isn't referring to old versions of Scheme when he's talking about => in his article about different Lisp camps.
> I don't think there's any reason to think that the ANSI Common Lisp standard (published in 1994) is referring specifically to the first versions of Scheme in 1976.
Direct quote from 1.1.2 History: "One of the most important developments in Lisp occurred during the second half of the 1970's: Scheme."
That's a quote from my post above. Where is the evidence that someone who wrote this thinks that Scheme at some point stopped being a Lisp? You'd think they would have mentioned this if they thought that early versions of Scheme were Lisps while later versions, despite filling Lisp conferences with papers about Scheme at the time, were no longer Lisps.
> The purpose of the European Lisp Symposium is to provide a forum for the discussion and dissemination of all aspects of design, implementation and application of any of the Lisp and Lisp-inspired dialects, including Common Lisp, Scheme, Emacs Lisp, AutoLisp, ISLISP, Dylan, Clojure, ACL2, ECMAScript, Racket, SKILL, Hop and so on. We encourage everyone interested in Lisp to participate.
By 1982, what had in 1980 been the Lisp Conference had been renamed to "Lisp and Functional Programming" (under the auspices of the ACM), so it's hard to draw strong conclusions from, say, the publication of Rees's T paper there about whether Scheme is or is not "a Lisp". But since 2007 it's been the "International Lisp Conference".
In 2008 the LISP50 book (also published by the ACM) included a paper by Clinger entitled "Scheme@33".
At ILC 2010 (the conference doesn't happen every year) Christian Queinnec (who wrote Lisp in Small Pieces, a book largely about Scheme, although Queinnec frequently uses the word "Lisp" in a sense that excludes Scheme) gave a paper "Teaching CS to undergraduates at UPMC", about teaching using Scheme.
In 2014 the International Lisp Conference ("The International Lisp Conference is a forum for the discussion of Lisp and, in particular, the design, implementation and application of any of the Lisp dialects. We encourage everyone interested in Lisp to participate." — no mention of a separate "Scheme community" — http://ilc2014.iro.umontreal.ca/ ) was chaired by Marc Feeley, who's best known for Scheme implementations like PICOBIT, and who's recently been working on JS. (There was a paper that year on hygienic macros for JS.) There was a paper about Racket and a paper about an anesthesia system (!) written in Scheme.
Well, the author of that doesn't mention anything about important developments in Lisp coming from Scheme in the first half of the 1980's, second half of the 1980's and so on. It seems to imply that the important contributions to Lisp from Scheme took place in the late 1970's, and then it dried up (as far as contributions relevant to Lisp).
Furthermore, none of the Scheme-influenced features in CL are from anything but the late 1970's Scheme. The lexical scoping is pretty much, pun intended, the extent of it.
Furthermore, the wording later in the paragraph is in the past tense: "The major contributions of Scheme were, ...".
That's it; Scheme was done contributing to Lisp by the end of the late 1970's. We took some of the good bits, thank you very much!
> You'd think they would have mentioned this if they thought that early versions of Scheme were Lisps while later versions were no longer Lisps.
Stated explicitly in such terms, it would kind of be an inappropriate rant with respect to the topic of that section, and in the context of the whole document. (That sort of extended section on history is not that common in ANSI and ISO language standards to begin with.)
You'd also think, by the same token, that if they thought Scheme continued to evolve as a Lisp, rather than something else, that they would crib newer features from Scheme. Like for instance, making the empty list a Boolean true value, and introducing a dedicated false constant.
I think the major innovations of Scheme as a language were already present by 1978 — as the CLHS says, "lexical scoping, lexical closures, first-class continuations, and simplified syntax", and also mandatory TCE and the idea of using lambdas for all kinds of perverted purposes. (And it's true that of these only lexical scoping and closures are in CL, with the result that you can use lambdas for all kinds of perverted purposes, though many CL systems also implement TCE.) After that it was basically the same language for a long time, despite the addition of details like the boolean type and case-sensitivity. (Whether or not you like a boolean type and case-sensitivity, it's clear that they're not "major contributions" to any programming language; they're nearly implementation details.) The next major innovation in Scheme from my point of view was hygienic macros, which wasn't on the horizon when Gabriel wrote his essay (he identifies the hygiene problem as one of the major difficulties of a Lisp₁ such as Scheme, but not its solution) and wasn't really solved until 1991, after Common Lisp was already well-defined, though before the final ANSI standard was finished.
(This is part of what makes it so bizarre to claim that Scheme in the late 1970s was Lisp, and then at some point stopped being Lisp: Scheme in the late 1970s was essentially the same language as Scheme in the mid-1990s, except that things like RPLACA and NREVERSE had been renamed to things like set-car! and reverse! in R²RS, a purely surface change even shallower than Dylan's infix syntax. In R⁵RS it gained hygienic macros in the form of syntax-rules, basically standardizing the Macros that Work approach published in 1991, which is a significant innovation, but R⁵RS didn't come out until 1998. syntax-rules was in R⁴RS in 1991, but only as an optional extension. Regardless, improving the macro system hardly seems to make the language less Lispy — on the contrary, really.)
But I wasn't saying that Scheme contributed a lot of things to Common Lisp. It didn't. My point was that the people who wrote the Common Lisp standard considered Scheme and Common Lisp both to be "Lisp" — they spoke of Scheme being a "development in Lisp". In fact, they specifically cited two Scheme features that were not adopted for Common Lisp as being "major contributions of Scheme". Also, they coined the term "Lisp₁" to refer to Scheme and related languages; two of them went off and co-authored the Scheme spec, saying that Scheme was "a dialect of Lisp"; and they cited examples from Scheme when they were surveying syntactic differences between "Lisps" in their other papers. Also, they wrote papers explaining that they considered "Lisp" to refer to the family of languages, not to any one language (but occasionally it might be convenient for political reasons to exclude Scheme and Dylan from the definition of "Lisp" to avoid pressure to combine Scheme and Common Lisp into a single ANSI standard). I don't know how much clearer it can get.
You seem to be reasoning circularly; much of your reasoning to support your apparent claim that these folks are using "Lisp" to mean "Common Lisp" rests on the presumption that they are using "Lisp" to mean "Common Lisp", despite its manifest incongruence with the texts you are attempting to impose these hermeneutics on.
"Lisp" does not and should not be used to mean "Common Lisp", though such metonymy is understandable in the context discussion and documentation of a particular dialect.
When I use the word "Lisp", I'm strictly referring to a family of languages ... that feature lists terminated by a symbol nil, which is also the Boolean false, those lists being made of mutable cons cells accessed by car, cdr, rplaca and so on.
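A rough sketch of that memory model in JavaScript — plain objects standing in for cons cells is my own illustration; only the primitive names (car, cdr, rplaca, nil) come from classic Lisp:

```javascript
// A cons cell as a two-slot mutable record; nil is a unique sentinel
// that terminates lists and doubles as Boolean false, as in classic Lisp.
const nil = Symbol("nil");
const cons = (car, cdr) => ({ car, cdr });
const car = (c) => c.car;
const cdr = (c) => c.cdr;
const rplaca = (c, v) => { c.car = v; return c; }; // destructive, like RPLACA

// (1 2 3) as a nil-terminated chain of cons cells:
const list = cons(1, cons(2, cons(3, nil)));
rplaca(cdr(list), 99); // destructively replace the second element
// list is now (1 99 3)
```

The distinguishing features are all visible here: the chain ends in nil rather than a dedicated empty-list object, and the cells are mutated in place rather than rebuilt.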
English is a dialect of Proto-Indo-European. It has enough of a mix of features from Latin and especially French that I wouldn't be comfortable calling it a dialect of Proto-Germanic or Old Norse; but nobody would ever confuse it with a Sinitic or Semitic tongue. In the same way, C combines features of Lisp, such as recursion, the heap, and conditionals, with features of FORTRAN and COBOL, like static types, nested records, and statements.
Spanish, by contrast, is clearly a dialect of Latin, despite the presence of articles, and in the same way Python and JS are dialects of Lisp.
You're confusing languages "descending" from others and dialects (varieties of a language). It's true that there's no strict boundary between dialects and languages, but calling English simply a dialect of Proto-Indo-European is muddy. The better version of the argument is that English is a dialect of Indo-European. Proto-Indo-European would be the variety of Indo-European spoken at a particular time (though it's in fact entirely reconstructed). To follow the linguistic analogy, popular programming languages are creoles (like English). They combine features of multiple languages.
I hadn't thought of English as a creole, even though it combines features of multiple languages, because there's no evidence of a pidgin evolutionary state. But, like a creole, it does have a substantially more systematized grammar than either its Scandinavian substrate or its French superstrate, both of which, for example, inflect for gender. And it certainly shows evidence of rapid historical change, like many creoles, and some of that seems to have been driven by decreolization-like processes. So maybe English really is a creole.
The linguistic analogy starts to break down there, though. In some sense all programming languages are pidgins, and of course they are conlangs.
Right! Also conditionals (if-then-else), recursive functions, dynamic typing, and eval all originated in Lisp in 1959. The first two of these are even in C.
http://www.paulgraham.com/icad.html goes into some details. In http://canonical.org/~kragen/memory-models I go into some details on how the Lisp memory model is the basis of, for example, Java, and what some of the alternatives look like.
I guess if you couldn't write a C compiler in a single small file (by modern standards) you wouldn't have been able to run a C compiler on an 8-bit machine.
Actually, C compilers back then were still pretty large (also, as far as I'm aware, every machine that UNIX originated on was at least 16-bit, and the very first machine it ran on was 18-bit):
Many years ago I worked on a C++ compiler, and this was during the C++03 days. The language and its standard (now C++20) are *so much larger* than they were even 15-20 years ago.
I will say, however, that when I'm brave enough to peek into the clang code (I work up the courage maybe a couple times a year), at least for someone who understands the language, it's very clean. But it's big. Very big.
A bit disappointed that False is not on there, since it was both a compiler that fit in less than 1k of binary code, and also the origin of the whole esoteric language craze that is still going on.
(Brainfuck was a direct response to it, trying to make a compiler that was significantly smaller - the original compiler was 240 bytes.)
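For a sense of why the compiler could be so small: Brainfuck's entire semantics is eight single-character commands over a byte tape. A minimal interpreter sketch in JavaScript (an illustration of the language, not the original 240-byte compiler; bracket matching is done naively on the fly):

```javascript
// Minimal Brainfuck interpreter: 8 commands, a 30,000-cell byte tape.
function bf(src, input = "") {
  const tape = new Uint8Array(30000);
  let p = 0, i = 0, inp = 0, out = "";
  while (i < src.length) {
    switch (src[i]) {
      case ">": p++; break;
      case "<": p--; break;
      case "+": tape[p]++; break;            // Uint8Array wraps mod 256
      case "-": tape[p]--; break;
      case ".": out += String.fromCharCode(tape[p]); break;
      case ",": tape[p] = input.charCodeAt(inp++) || 0; break;
      case "[": // if cell is zero, skip forward past the matching ]
        if (tape[p] === 0)
          for (let d = 1; d; ) d += { "[": 1, "]": -1 }[src[++i]] || 0;
        break;
      case "]": // if cell is nonzero, jump back to the matching [
        if (tape[p] !== 0)
          for (let d = 1; d; ) d += { "]": 1, "[": -1 }[src[--i]] || 0;
        break;
      // every other character is a comment
    }
    i++;
  }
  return out;
}
```

For example, `bf("++[>+++<-]>.")` multiplies 2 by 3 in a loop and emits the character with code 6.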
These examples blow my attempt at an interpreted programming language (thousands of lines over multiple source files) out of the water... so impressive.
[0] https://github.com/JohnEarnest/ok/blob/gh-pages/oK.js
[1] https://github.com/JohnEarnest/Mako/blob/master/demos/FiDo.f...
[2] http://nsl.com