Hacker News
The CADR Microprocessor (lm-3.github.io)
112 points by gilgamesh3 on Jan 29, 2019 | hide | past | favorite | 16 comments



Well, I'm impressed. I can't speak to efficiency, but reading through the spec immediately lets me see how one might implement an allocator atop the basic system, gives me an idea of how one might wrangle predictive jumps if one were optimising aggressively, and so on.

This is one of the most _predictable_ instruction sets I've seen - and that I like. It takes time to learn the nuances, but here the documentation is right up there with the best, making that learning curve considerably less steep.


An important bit of historical context is that "micro" in "microprocessor" in this case refers to the fact that it is microcoded, not that it is implemented as a single VLSI chip.


Indeed. The first single-VLSI-chip implementation of a Lisp machine, the MegaChip, was done by Texas Instruments in 1987 for its Explorer II workstations. This chip contained 553,000 transistors and was afaik very similar to the CADR architecture-wise. Symbolics, one of TI's competitors, released its own VLSI Lisp machine chip the next year, called the Ivory, which had a more advanced architecture (labeled the I-Machine in the documentation) that used fewer transistors than the MegaChip, only around 300k.


The somewhat large transistor counts are understandable once you take into account that the CADR CPU is mostly an insane (for the time) amount of SRAM (as in hundreds of 1k and 4k chips) coupled to comparatively small logic.


It was in fact a massive wire-wrapped machine.


Reading these historic documents always gives me an itch to "OK, what would a CADR look like today?" myself into designing a contemporary version with wider addresses, registers and so on. Of course, I never finish.


Looks interesting, but I'm too ignorant to understand it...

What is the practical significance of this? Is this the design for the processor that was used in the actual Lisp Machines, or a design for a hypothetical processor?


it's the architecture of the original lisp machine from which all other lisp machines (arguably) derive, mit's CADR. not many of them were built, since they were made by hand wiring, but there's still one or two around in a working state.

for reference, here's the ai memo write-up of the project, dated 1979, which should place it in historical context for you: http://dspace.mit.edu/handle/1721.1/5718


> It's the architecture of the original lisp machine from which all other lisp machines (arguably) derive, mit's CADR.

The first one would have been the CONS. As the name suggests, the CADR was the second one. :) But the CONS was a completely different architecture and the CADR was certainly more influential.


> all other lisp machines

The MIT-derived ones from LMI, Symbolics and TI.

But there were a few others with different designs - like Xerox's Interlisp-D machines, BBN's Jericho Interlisp machine, Japanese attempts, and some that never reached the market (like the next-generation machines from Symbolics, Xerox and LMI, which were in various stages of design and completion).


They were wire-wrapped, but by robot.


ty


Note: the corresponding emulator is here: https://github.com/LM-3/usim


A bit confused between the two. Any easy way to set it up and run some lisp code, not necessarily Common Lisp?


i wouldn't call it "easy", but it's a way to run a historic MIT lisp system that predates common lisp. the language it uses is "lisp machine lisp", which was later used as one of the bases for common lisp standardization.


https://lm-3.github.io/

Seems to be a better link; it points to other materials, etc.




