Lisp featured support for garbage collection, exceptions done right, macros, optional typing, and native compilation. In fact, I would argue that many in the Lisp community were so obsessed with the hard bits that they overlooked building the simple things that see everyday use in the real world.
Quite a few of those bits were implemented in Lisp, both simple and hard. Some of them have been lost.
For example the TI Explorer Lisp Machine came with an X11 server written in Lisp. On my Symbolics Lisp Machine I used the usual MIT X11 server written in C - this was possible because the Symbolics Lisp machine had a C compiler.
I was under the impression that those machines evaluated Lisp forms in hardware. How could a C compiler work? Does it translate your code into Lisp s-expressions, or is there a bytecode layer in between?
You could translate to either Lisp or raw instructions for the Lisp machine, but it doesn't really matter. The main thing that you do differently from a normal C compiler is translate pointers into pairs of a buffer and an index into the buffer. Then pointer arithmetic still works as far as it is required to work by the C spec.
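A toy sketch (my own illustration, not from the thread) of that buffer-plus-index representation, in Python for brevity: pointer arithmetic operates on the index, while the buffer reference stays an ordinary managed object the GC can track:

```python
class CPointer:
    """Models a C pointer as a (buffer, index) pair instead of a raw address."""
    def __init__(self, buffer, index=0):
        self.buffer = buffer
        self.index = index

    def __add__(self, n):        # p + n: pointer arithmetic moves the index
        return CPointer(self.buffer, self.index + n)

    def deref(self):             # *p
        return self.buffer[self.index]

    def store(self, value):      # *p = value
        self.buffer[self.index] = value

buf = [10, 20, 30, 40]           # storage for one C object (an int array)
p = CPointer(buf)
print((p + 2).deref())           # → 30
(p + 1).store(99)
print(buf)                       # → [10, 99, 30, 40]
```

Arithmetic within one object works as the C spec requires; arithmetic between unrelated objects (which the spec leaves undefined anyway) simply has no meaning in this representation.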
What do you think a compiler does? The target instruction set can be any Turing-complete language. Compiling C to LISP is no different from compiling LISP to C (e.g., GNU Common Lisp), compiling ML to Java (which is so easy students can do it in 300/400-level CS), and so on.
Well, the naïve way is to allocate a big block of memory for your C program and manage it using malloc/free like normal (basically, use the LispOS equivalent of sbrk(2)). Alternatively, since pointers are pointers, nothing stops you from having malloc(3) call out to the Lisp runtime's GC, with free(3) as a no-op - that's basically how the Boehm GC works. I'm not a systems programmer, so I'll admit this is a bit hand-wavy. Also, note that malloc/free are not part of the core C language per se, but of the standard library (a.k.a. the C runtime). Even if they were in the core language spec, you as the compiler author could implement them however you choose. They aren't magical.
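A deliberately hand-wavy Python sketch of that second approach (all names invented here): malloc hands out offsets from one big block obtained from the host runtime, and free does nothing, leaving reclamation to the host's collector. A real Boehm-style collector would also scan for unreachable blocks, which this sketch omits:

```python
HEAP = bytearray(1 << 16)   # one big block grabbed from the host (like sbrk)
_brk = 0                    # bump pointer into HEAP

def malloc(size):
    """Hand out the next `size` bytes as an offset into HEAP."""
    global _brk
    if _brk + size > len(HEAP):
        raise MemoryError("out of heap")
    ptr = _brk
    _brk += size
    return ptr

def free(ptr):
    """No-op: reclamation is left to the host runtime's GC."""
    pass

a = malloc(16)
b = malloc(8)
print(a, b)   # → 0 16
free(a)       # does nothing; collection is the host's problem
```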
How would a language's support for garbage collection make it any harder to compile a non-garbage-collected language into it? Why would you think that?
There's no code to hint at how garbage collection should be done, since it's automatic. You essentially have to include a library that emulates the source language's GC, so it's not a direct translation from one language to another. At least that's what I was thinking.
Not really. None of the relevant (= you could buy them and use them) Lisp Machines evaluated Lisp forms directly - the evaluator was a program, too. They all used a processor which provided a special instruction set and Lisp data structures. For example, the processor of the Symbolics Lisp Machine was mostly a stack machine running compiled Lisp code.
You can compare it to running JVM instructions in hardware.
Thus the Symbolics Lisp Machine had C, Ada, Pascal and other languages implemented for it.
But so, as I recall, was the virtual machine that interpreted the bytecodes the compiler produced.
Here I'm referring to what I remember being told about the CADR's microcode (it had a LOT, with options for more if you were willing to buy more $$$ static RAM), but the others were all direct descendants and I gather did pretty much the same things.
It's not that it doesn't implement the hard bits, it just re-implements the hard bits on top of the hard bits. It requires an OpenGL context, which generally requires you to have an X server running.
Well, the natural thing to do is compile the Lisp to JavaScript and use WebGL!
Would be interesting to see how much is needed to get it running with some other GL context. Also - maybe this'll run under OS X and Windows without requiring an X server for the GL context?
I actually left this project alone after I got the very basics running... It's very very slow and rather buggy. I guess a profiling run could help a lot though.
It looks like it just uses an existing X server as a way to create an OpenGL context, not as an X server per se. Perhaps it would be possible to port it to some non-X method of producing an OpenGL context, such as EGL.
Because this glosses over the really complex parts. Still, if those were written in Lisp, this would comprise a complete self-sufficient X server (how many lines of code would that take, though?).
To me, 5000 lines of C code is a LOT of code. Linus's first release of Linux was only 10,000 lines of code. Given the much higher expressiveness of Lisp, I think 5000 lines of Lisp code is not all that much to boast about. I can't pull a reference here, but according to one study, the productivity of some well-known programmers was measured at 10K LOC/year (yes, it's a very loose measure of "productivity"). So writing 5000 lines of good code may require ~6 months, i.e., not a trivial amount of work. Although I agree it is cool to have an X server in Lisp.
Agreed. I was surprised how much code the SLIME implementation has - there are source files several kloc in size. From what I saw, "Lisp expressiveness" did not help much there.
I guess it's you who is just unfamiliar with the syntax.
Do you find (+ 2 2) harder to understand than 2+2?
If the code is well indented, IMHO it's quite understandable. For example, what do you think of this Lisp code snippet (it's not the best possible code, just some stuff I have at hand to show without you needing to know the context):
;; takes two lists l1 and l2 of the same length, and returns (l1[0] l2[0]
;; l1[1] l2[1] ...)
(defun mix (l1 l2)
  (if (null l1)
      nil
      (append (mix (cdr l1) (cdr l2)) (list (car l1) (car l2)))))
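As an aside, here is a direct transcription of the recursion into Python (my own sketch, purely to make the evaluation order easy to step through):

```python
def mix(l1, l2):
    # (if (null l1) nil ...)
    if not l1:
        return []
    # (append (mix (cdr l1) (cdr l2)) (list (car l1) (car l2)))
    return mix(l1[1:], l2[1:]) + [l1[0], l2[0]]

print(mix([1, 2, 3], ["a", "b", "c"]))  # → [3, 'c', 2, 'b', 1, 'a']
```

Note that the pairs come out back-to-front, since each (car, cdr) pair is appended after the recursive result.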
This shows off one of Lisp's (IMHO) unfortunate anachronisms, namely that your "other car is first".
CDR and CAR refer to machine instructions on the IBM 704, which had (by today's standards) a somewhat quaint handling of "words" in memory, roughly allowing CAR to refer to a data element at the head of a linked list, and CDR to refer to the pointer to the rest of the list (or the empty list) - as well as a CONS instruction to build a word in memory from four parts, including the two 15-bit values that CAR/CDR refer to[1].
Lisp then extended this by allowing programmers to combine c(a|d)+r, to get the equivalent of second, third, first-of-second-sublist etc. I'm not convinced the succinctness is worth it -- apparently experienced lisp-coders disagree. The TL;DR is that if you don't know lisp, this code is the same as the above, and is a bit more intuitive:
;; takes two lists l1 and l2 of the same length, and returns (l1[0] l2[0]
;; l1[1] l2[1] ...)
(defun mix (list1 list2)
  (if (null list1)
      nil
      (append (mix (rest list1)
                   (rest list2))
              (list (first list1)
                    (first list2)))))
Other languages might use head and tail, rather than first and rest. I'll agree that list1 and list2 might not offer much more readability than l1 and l2 -- at least not with the comments given.
[1] Most of this is paraphrased from wikipedia, and what I remember from various times I've tried to get started with lisp, without ever really getting entirely comfortable with the language. http://en.wikipedia.org/wiki/CAR_and_CDR
[edit: I didn't notice the comment about reversing the list, but I think it is rather obvious that (append (mix (rest ...) ...) (list (first ...) ...)) does indeed reverse the order...? I actually wondered if append had a strange order of parameters while I "rewrote" the code :-) ]
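To make the c(a|d)+r composition concrete, here is a toy model of cons cells in Python (names and representation are my own choice, with nil modeled as None): cadr is literally (car (cdr x)):

```python
def cons(a, d): return (a, d)
def car(c): return c[0]
def cdr(c): return c[1]

def cadr(c):   # (car (cdr c)) -- i.e. SECOND
    return car(cdr(c))

def caddr(c):  # (car (cdr (cdr c))) -- i.e. THIRD
    return car(cdr(cdr(c)))

# the list (1 2 3) as nested cons cells
lst = cons(1, cons(2, cons(3, None)))
print(car(lst), cadr(lst), caddr(lst))  # → 1 2 3
```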
Basically nobody should write such code. As a style guide: use CAR and CDR on cons data structures. Use FIRST, REST, SECOND, ... on lists.
Actually, on day two of Lisp, fifty years ago, these patterns were abstracted away. Recursion over lists has been replaced by higher-order functions in many cases.
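For instance (a sketch in Python rather than Lisp, since the thread's snippet already exists in Lisp): the interleaving that mix implements falls out of ordinary higher-order sequence functions, with no explicit recursion - the Common Lisp analogue would be something in the spirit of mapcan:

```python
from itertools import chain

def mix(l1, l2):
    """Interleave two equal-length lists: [a1, b1, a2, b2, ...]."""
    return list(chain.from_iterable(zip(l1, l2)))

print(mix([1, 2, 3], ["a", "b", "c"]))  # → [1, 'a', 2, 'b', 3, 'c']
```

Unlike the recursive version shown earlier in the thread, this yields the pairs in front-to-back order.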
Hm, am I right in thinking that numbers are defined to be atoms in Lisp? And also that there are limits to how one can use macros, in that you'd not be able to a) rename FIRST, as in (FIRST list), to 1 (or '1): (1 list) or ('1 list); and b) modify the syntax to "decompose"/parse a slice syntax like (1:2 (list-of-lists)) (here 1:2 would mean, for example, the first element of the second sublist/cons)?
Now, you could of course write an interpreter for such a language in lisp, but am I correct in assuming that there's no reasonable way to get a (standard'ish) lisp to compile an expression like (1:2 ...) to the equivalent caadr(?)-like expression?
I saw there's a library for clojure that adds "slice notation" -- but does so by adding a (slice x y list)-macro which of course is fine -- but it would seem that (x:y list) would give similar benefits to the cadr/cddr etc -- allowing selected subsets to be highly optimized, for example? (As well as offering concise syntax)
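Whether or not a standard Lisp can be made to read (1:2 ...) directly, the compile-a-spec-to-an-accessor idea itself is simple. Here is a toy Python sketch (all names invented) that turns "i:j" into a fixed accessor, much as a macro could expand a slice form into a c(a|d)+r chain:

```python
def slice_accessor(spec):
    """Compile 'i:j' (both 1-based) into an accessor: element i of sublist j,
    analogous to a fixed chain of first/rest calls."""
    i, j = (int(part) for part in spec.split(":"))
    def access(lst):
        return lst[j - 1][i - 1]
    return access

first_of_second = slice_accessor("1:2")
print(first_of_second([[10, 20], [30, 40]]))  # → 30
```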
Reader macros allow you to write parser functions for characters. So you could write a reader macro for { such that it collects all enclosed content until a } character. Then it could transform that content and return the transformed result. The result is always Lisp data, not a string. How you transform the enclosed text is up to you.
Common Lisp itself tries to minimize its usage of characters. It is expected that users or libraries want to make use of characters like {, }, [, ], and many others.
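The mechanism can be mimicked outside Lisp. This toy Python sketch (my own names, no nesting support) plays the role of a reader function for {: it consumes up to the matching } and returns transformed data, not a string:

```python
def read_braces(text, pos):
    """Toy 'reader macro' for '{': collect up to the matching '}' and
    return transformed data (a list of words), plus the next position."""
    assert text[pos] == "{"
    end = text.index("}", pos)         # no nesting in this sketch
    words = text[pos + 1:end].split()  # the transformation step
    return words, end + 1

data, nxt = read_braces("{foo bar baz} rest", 0)
print(data, nxt)  # → ['foo', 'bar', 'baz'] 13
```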
I don't think I've come across the simplified reference before -- appears to be a good resource for those of us starting out with lisp -- striking a balance between all of CLOS and the sometimes simplistic presentation of text books.
Thanks for the tip about indentation, and for checking my code - you are right that I got the order wrong in (append (mix ...) (list ...)), but in this particular case I'm not interested in how my list is ordered.
I think it's easier to reason about in the long run although the initial period of wondering why you don't just set up a data structure and bang on it in place is a little bumpy.
The syntax is always[0] (procedure argument-1 argument-2 ...) where the procedure is applied to the arguments.
The syntax is very minimalistic, at the cost of a much greater semantic burden. Parsing Lisp code, for humans, often relies on consciously understanding the words and the AST structure, not on reflexes tied to a visual memory of characters and indents. So I think that Lisps are at odds with human perception (someone may consider this an advantage: it repels people who code with their cerebellum instead of their cerebral cortex). But once you have a picture in your head of what the code is about, it's not a big deal.
Lisp's syntax is the easiest thing about it to understand. In fact, Lisp's syntax is so easy to understand, that even Lisp can understand its own syntax.
I never found it clunky - at worst blurry (only one visible syntactic marker) - but it was pretty beautiful and timeless.
Besides macros, with non-string-buffer editors you get very, very fast and calm typing - or, I should say, low-level refactoring.
Finally, it has this weird quality of really looking like static data - (relation v1 v2 v3) - most of the time, and really not like a cryptic, state-dependent imperative notation (judging by the number of delimiters: {} () , ;). It makes you really think about how to interpret data rather than how to send commands to a machine.
It's also easy if you live in Germany or German-speaking surroundings. Being immersed in a German-speaking environment helps - learning from training material without other German speakers around is less motivating, so it looks difficult. The actually difficult part is the motivation, not the language. For somebody from an English-speaking country, learning German is not really difficult - English has lots of Germanic roots. It's just that many people in English-speaking countries don't learn any foreign language to a competent level, not even the easier ones, so everything looks difficult. What really would be a bit more difficult is a language with a different alphabet (Russian) or a language which differs on many levels (Japanese).
It also depends on what level you want to be able to read German. If you want to be able to read philosophers (Kant, Schopenhauer, Hegel, Jaspers, Habermas, ...) then the subject is also difficult - it would help if you understand/know the concepts already - they are often independent of the language.
English is also difficult. Parts of the language are easy, but the language is huge. I can read many English texts quite fluently, but there are novels where I am at 10% reading speed (compared to English texts from typical news sources) when I want to understand 95% of the words.
I think getting to a decent understanding of German should not be too hard, but some nuances are. Even some native speakers struggle with parts of the grammar, like dass/das or Dativ/Genitiv. I am a native German speaker who learned English as his second language and is now learning Greek, including a different alphabet, as his third.
Literacy, for example, seems to be quite good in some countries whose languages would look difficult to me. The UK, OTOH, is not happy with its results.
Banging on file descriptors looks pretty much the same no matter what language you do it in. I expect that a Python equivalent would be pretty much the same length.