Symbolics, Inc.: A failure of heterogeneous engineering (2001) [pdf] (web.mit.edu)
90 points by tmalsburg2 on Nov 20, 2020 | 54 comments



That's not really what went wrong with Symbolics. What really went wrong is mentioned in the article, though. "Expert systems" were not really very useful.

Another reason that Lisp machines were in high demand (despite a general lack of results) was a "natural trajectory" phenomenon. AI, and especially expert systems, were seen as the future of computing, largely due to hype generated by Feigenbaum, a Stanford University professor. Tom Knight claims that Artificial Intelligence was oversold primarily because Feigenbaum fueled outrageous hype that computers would be able, for example, to replace medical doctors within 10 years. Additional hype was generated by the Japanese government's sponsorship of the "Fifth Generation" project. This project, essentially a massive effort by the Japanese to develop machines that think, struck a nationalistic chord in America. SDI funding was in some ways a means to hedge against the possible ramifications of Japan's "superior" AI technology.

I went through Stanford CS when Feigenbaum was hyping away. For the hype, read his book, "The Fifth Generation". Much of the Stanford CS department was convinced that expert systems would change the world, despite having almost nothing actually working. It was pathetic. Especially the aftermath, the "AI Winter", when I once saw Feigenbaum wandering around the building amidst empty cubicles, looking lost.

Symbolics had some big technical problems. The biggest was that they didn't use a microprocessor. They had a CPU built up from smaller components. So their cost was inherently higher, their hardware was less reliable, and they didn't benefit from progress in microprocessors. Once people got LISP compilers running on Motorola 68000 UNIX workstations, LISP machines were not really needed. Franz LISP on a Sun was comparable to using a Symbolics (we had both where I worked), and the refrigerator-sized Symbolics wasn't worth the trouble. Symbolics was also noted for having a poor maintenance organization. They had to send someone out to fix your refrigerator-sized machine; you couldn't just swap boards as on workstations.

Eventually Symbolics shrank their hardware down to reasonable size, but by then, nobody cared.


It was a cultural problem. Symbolics were really trying to build their own LISP-specific DEC-10-a-like, using DEC-style mini and mainframe computer design traditions, but at slightly lower cost.

There were a number of projects like that around at the time, including the Three Rivers/ICL PERQ which was optimised for PASCAL, and arguably DEC's PDP-11 range whose entire architecture was closely aligned with C and eventually C++, pointers and all.

These were all interesting machines without a future, because the early 80s were a crossover when it turned out that model was too bloated to be sustainable. DEC scraped through the bottleneck with Alpha, but couldn't keep it together as a business. Meanwhile the 16-bit and early 32-bit architectures were eating everyone's lunch. SGI and Sun flared up in this space and died when their price/performance ratio couldn't compete with commoditised PCs and Macs - which happened sooner than almost anyone expected.

This is obvious now, but it wasn't at all obvious then. The workstation market and the language-optimised market both looked like they had a real future, when in fact they were industrial throw-backs to the postwar corporate model.

So it wasn't just the AI winter that killed Symbolics - it was the fact that both hardware and software were essentially nostalgic knock-offs of product models from 5-10 years earlier that were already outdated.

Meanwhile the real revolution was happening elsewhere, starting with 8-bit micros - which were toys, but very popular toys - and eventually leading to ARM's lead today, via Wintel, with Motorola, the Mac, and the NeXT/MacOS as a kind of tangent.

The same cycle is playing out now with massively accelerated GPU hardware for AI applications, which will eventually be commoditised - probably in an integrated way. IMO Apple are the only company to be thinking about this integration in hardware, and no one at all seems to be considering what it means for AI-enhanced commoditised non-specialised software yet.

Apple are giving it some thought, Google are thinking about it technologically, plenty of people are attempting Data Engineering - but still, the current bar for application ideas seems quite limited compared to the possibilities a personal commoditised integrated architecture could offer, because again current platforms have become centralised and industrialised.

There's a strong centrifugal and individualistic tendency in personal computing which I suspect will subvert that - and we'll see signs of it long before the end of the decade.


That it was a cultural problem is quite correct. Academic computer science was quite small in those days, and it was almost all DoD-funded. The big CS schools had PDP-10 machines, and the lesser ones had DEC VAXen.

In the 1980s, the commercial market in electronics and computers passed the DoD market, first in volume and then in technology. This was a real shock to some communities. There were complaints of "premature VHSIC (Very High Speed Integrated Circuit) insertion" from DoD, by which they meant the commercial market using stuff DoD didn't have yet. DoD thought they were in charge of the IC industry in 1980.[1] DoD had been the big buyer in electronics since WWII, after all. By 1990, DoD was a minor player in ICs and computing.

Symbolics was really a minicomputer manufacturer, building up CPUs from smaller parts. They went down with the other mini makers - DEC, Prime, Data General, Interdata, Tandem, and the rest of that crowd. That technology was obsoleted by single-chip CPUs. Many of the others hung on longer, since they had established customer bases. But they were all on the way down by the late 1980s.

[1] https://apps.dtic.mil/dtic/tr/fulltext/u2/a230012.pdf


> Symbolics was really a minicomputer manufacturer, building up CPUs from smaller parts.

Up to 1987. In 1988 they switched to microprocessors.

> They went down with the other mini makers - DEC, Prime, Data General, Interdata, Tandem, and the rest of that crowd. That technology was obsoleted by single-chip CPUs.

Symbolics introduced their single chip LISP CPUs in 1988. That one was used in their workstations, boards for SUNs and Macs, and in embedded applications.

That's my Symbolics LISP Machine as a board for an Apple Macintosh Quadra:

https://pbs.twimg.com/media/BzRIoEKIMAA2kli?format=jpg&name=...

It uses a microprocessor. The daughterboard is RAM.


"So it wasn't just the AI winter that killed Symbolics - it was the fact that both hardware and software were essentially nostalgic knock-offs of product models from 5-10 years earlier that were already outdated."

I really like this quote and perspective. I think it was a byproduct of period communications - not today's ubiquitous connections.

Essentially it was a very small, even inbred community that talked mostly to itself. "Itself" including associated members at the likes of DARPA. And that was enough to get the funding (and closely related hype) ball rolling. There was little if any feedback from outside the crowd. Even the West Coast was another world to a certain extent.

I'm reminded of XKL - a Cisco founder going into business to (initially) produce modern PDP-10s in the early 90s. Because, you know, that's what the world (even that small inbred world) was waiting for.


> I'm reminded of XKL - a Cisco founder going into business to (initially) produce modern PDP-10s in the early 90s. Because, you know, that's what the world (even that small inbred world) was waiting for.

That was Cisco’s actual business plan — they just sold a few routers (from a design they’d developed for Stanford) to get some bucks in the door while they geared up to build the PDP-10 clone.

(Obviously they never pivoted back to the original plan.)


They were building what they could. The MIT CADR used the same chips as a DEC VAX-11/780, both were a generation after the PDP-10.

The first "toy" computer I had that could run Lisp well was the Atari ST, I ported Franz Lisp to it. Had an 8086 machine at the same time but it didn't have a big enough address space.


That’s very true but the VAX (family) in a sense was also the end of a niche, the mini/supermini as timesharing system.


What drives these trends of centralization vs commoditization?


> The biggest was that they didn't use a microprocessor

The Symbolics Ivory microprocessor was introduced in 1988.

> Franz LISP on a Sun was comparable to using a Symbolics

Not really.

The Lisp alternatives to Symbolics came later with commercial systems like the TI Explorer and then Allegro CL, Lucid CL, LispWorks, Golden Common Lisp, Macintosh Common Lisp.


That was a culture thing. There were Big LISP people on PDP-10s and Little People using UNIX. Having used both, I found it easier to get things done in Franz LISP than in the rather overbuilt Common LISP systems of the era. Franz LISP was set up like a UNIX program - you edited text files in some editor, then ran the compiler and run time system. Common LISP systems of the era were one giant application you never left. This is also part of the source of the EMACS/vi split. Big LISP people used EMACS, and Little People used vi.


GUI based systems were a thing at least since Smalltalk 80.

Generally, all kinds of IDEs were common even on the smallest machines. Lisp had nice IDEs on small machines, like the early Macintosh Common Lisp, which ran usefully in 4 MB RAM on a Mac SE.

> found it easier to get things done in Franz LISP than in the rather overbuilt Common LISP systems of the era

Many others thought differently, and GUI-based Windows systems with IDEs won much of the market.

> I found it easier to get things done in Franz LISP than in the rather overbuilt Common LISP systems of the era.

Franz LISP was a dead end; it never made it to Windows as a product.

> Common LISP systems of the era were one giant application you never left.

Many of the CL systems of that era (which appeared in the mid 80s) could be used like Franz LISP, just with vi and a shell: CMUCL, KCL, Allegro CL, Lucid CL, LispWorks and many others.

Franz Inc. created Allegro CL, which ran bare-bones on Unix with any editor and shell, with GNU Emacs (via ELI), or additionally with its own IDE tools. It eventually also ran on Microsoft Windows, including a GUI designer.


That was later. I was doing this around 1980-1981, when the LISP options were much fewer, the Macintosh did not exist, Windows did not exist, and Symbolics machines were just becoming available. Franz Lisp was a good option. Common LISP came later, with many features from the Symbolics refrigerator, including their rather clunky object system.

Among other things, I ported the Boyer-Moore theorem prover to Franz Lisp. That started life in Interlisp on a PDP-10. I later ported it to Common LISP, and I have a working version on GitHub today, for nostalgia reasons. It's fun seeing it run 1000x faster than it did back then.


In 1980 Symbolics did not sell any machines; the company had no product on the market at that time. The first handful of machines reached the market in late 1981, and they were literally the first of their kind. They were also mostly the same systems that MIT developed, sold as the LM-2. Almost no GUI-based machines of any kind were commercially (!) available at that time. Just about 80 (eighty) of those LM-2s were ever built and sold between 81 and 83. The first actual Symbolics machine reached the market in about 1983: the Symbolics 3600.

SUN did not sell anything in 1980/81. The SUN 1 came to the market in mid/late 1982 as a 68k machine with a SUN memory management unit.

Basically, in 1980/1981 there was no UNIX system with a GUI on the market (i.e. commercially available) at all.

> including their rather clunky object system.

Common Lisp's object system was developed many years later. The first spec was published in 88.

Franz LISP used the same object system as Symbolics. It shipped with an object system called Flavors.

Flavors in the Franz LISP sources: https://github.com/omasanori/franz-lisp/blob/master/lisplib/...


One of the founders of Symbolics, Dan Weinreb, commented on this article: https://danluu.com/symbolics-lisp-machines/


In case anyone else shares my initial confusion, Weinreb's comment on this article is at the very end of the above link:

> I just came across “Symbolics, Inc: A failure of heterogeneous engineering” by Alvin Graylin, Kari Anne Hoir Kjolaas, Jonathan Loflin, and Jimmie D. Walker III (it doesn’t say with whom they are affiliated, and there is no date), at http://www.sts.tu-harburg.de/~r.f.moeller/symbolics-info/Sym...

> This is an excellent paper, and if you are interested in what happened to Symbolics, it’s a must-read.

His comments follow that.


Trivia: SYMBOLICS.COM was the first ever .com domain registered.


A list of the 100 oldest .com domain registrations:

https://theforrester.wordpress.com/2007/08/13/the-100-oldest...


This is a student paper, and it’s from 1998 not 2001, according to this page which lists many other student papers from that course over several years:

http://web.mit.edu/6.933/www/projects_whole.html


We had an LMI machine in my department (a competitor to Symbolics) - and it was no better either. It hardly ever worked and was scrapped soon after it was bought. According to the system admin, the machine was very temperature sensitive and would not work outside a narrow band of temperatures.


I'd like to point out that Tom Knight, one of the scientists mentioned, is now considered one of the godfathers of synthetic biology. He made that pivot in the late 1990s/early 2000s.

If you'd like to read one of the papers that inspired him to make the change, read "The Completeness of Molecular Biology"[0]. It is a truly inspiring paper published in 1984 that is still relevant to synthetic biology.

[0] https://static.ias.edu/pitp/2019/sites/pitp/files/morowitz-c...


I am confused.

From the paper: "Thomas F. Knight, Jack Holloway and Richard Greenblatt developed the first LISP at the MIT Artificial Intelligence Laboratory in the late 1970s."

From Wikipedia: "John McCarthy developed Lisp in 1958 while he was at the Massachusetts Institute of Technology."

Shouldn't the paper say "Symbolics LISP"?


It should have said "the first LISP Machine". Which was a personal computer with a GUI, built to support the development and execution of large Lisp software.

Symbolics was a company founded later to commercialize that research - at the same time as its competitor, Lisp Machines, Inc.


I wasn't aware of the Symbolics / SDI connection:

> The market created by funding from SDI was quite forgiving. The government was interested in creating complex Lisp programs and Symbolics machines were the leading alternative at that time. Officials who allocated funds for SDI did not demand cost-effective results from their research funds and hence the expert-systems companies boomed during this period.

Sorry to be glib but, maybe we should be grateful we got an AI winter, not a nuclear winter.


Much of the AI research of the time was financed by the US military - not just for SDI.

For example the DART logistics planning system written in Lisp for the Gulf War was said to have paid back all investments into AI research up to that point.


It is amusing (in a way) to think about that era's protests from the likes of CPSR (Computer Professionals for Social Responsibility) - specifically, the claim that software systems beyond a certain complexity (in LoC) weren't even possible, yet even current phones exceed those numbers by orders of magnitude.


It was a long time ago, but wasn’t the idea that systems of a certain complexity with a very high reliability are impossible? My phone, at least, is not a paragon of reliability.


I don’t think they were that clear about reliability metrics. It was obscure, but the slightly older Safeguard ABM software load was of that order of magnitude in complexity and had tested reliability. Thank goodness, of course, we never had a real test. Admittedly I have a small axe to grind, having known some of those CPSR people.


I went to Hungary in '87 to present a paper on semiconductor point defects, funded by SDI money from AFNOR. Soviets were particularly interested in defects because their processes had a lot of them; their idea was to find a way to make the devices work right anyway. It is why they were always 10 years behind, and part of why the Mig 25 had vacuum tubes. (It did work...)


Same issue with other 1980s custom hardware, like mini-super scientific computers. It would take 3+ years for a new hardware generation. Intel or Motorola would have at least three generations in that span and nearly catch up, for a fraction of the engineering cost. Companies like Convex, MasPar and Thinking Machines rarely made it to their 3rd hardware generation.


Another resource I found that is related is a Thesis paper "If It Works, It's not AI" https://dspace.mit.edu/bitstream/handle/1721.1/80558/4355745...


I am sorry for the off-topic question, but how could a number of CS experts who authored the article get text kerning so wrong in the PDF? Or is it a scan of some paper source?


Seems to depend on the PDF viewer (font issues?). The kerning is almost unreadably bad in macOS's PDF viewer (Preview), but looks perfectly fine in Firefox's built-in PDF viewer on the same machine.


I do apologise, looks perfect in Acrobat Reader.


What makes you think CS experts know or care about text rendering?


One of the great CS experts created one of the great text rendering and layout systems, TeX.


Weren’t the authors of that paper from the business school?


They about lost me at page 10, "In C/C++, the dimensions of an array must be declared at the beginning of a function)". First, no opening "(". What kind of Lisp hacker does this? Second, both in C and in C++ functions can (of course) take variable-sized arrays. (And, there is no such language as "C/C++"). If you are trying to explain why a Lisp company failed, is it the right place to put obviously false propaganda about the winners?

On page 14, several howlers in one, "[Word tagging] eliminates the need for data type declarations in programs and also catches bugs at runtime, which dramatically improves system reliability." Where to start? Dynamic type errors are a major cause of program failures in obligate runtime-bound languages, today. Word tagging turned out to be a dead end, easily outmatched by page tagging in normal architectures.

Higher up on page 14: "... an extra instruction in the final 386 architecture to facilitate garbage collection." What instruction? Was there one, really?

Also on page 14: Asserting novelty of virtual memory in the '80s, really?

Page 16: Calling out the importance of their proprietary debugger gives the lie to the claim on page 14.

Page 20: "Many potential customers of Symbolics were interested solely in Symbolics' software. However, customers could not buy the software without purchasing a expensive LISP machine." It looks like the authors would like to think the software was attractive. But of course any such potential customers could get the same software from MIT without a 5x$ machine hanging off. Did they?

Page 28: "The Symbolics documentation was outstanding ... former Symbolics employees still treasure them." People developing important things can't afford to spend enough attention on docs to make them outstanding, because the important things demand that attention.

And, yes, the kerning is positively abominable. No TeX?


That PDF document was produced by some historical interviewers, not Symbolics. Your comment roundly criticizes Symbolics for someone else's work. Only in the words it contains does it have anything to do with Symbolics documentation!

Unable to discover the affiliation of its authors (not stated in the document itself), I can't tell where it did come from -- although it is hosted by several sites around the world, not just MIT. It shows internal evidence of font character code corruption, typical in the 2001 era of moving a document from one kind of system to another, or even just trying to change font-set. Note the prevalence of capital U's where an apostrophe belongs, and the clipped text in the second "competitive wheel" diagram. In fact, the diagrams show all the signs of being pasted in from an incompatible system: that wheel diagram is no longer circular, and the egregious box-and-pointer diagrams show the character misplacement typically exacerbated by PDF encoding.

None of that would have issued from Symbolics Press.

I may be a little defensive on the issue of paying attention to docs -- why else would I make a thoughtful response to a down-voted comment from someone with a history of them? At Symbolics, I was pleased that my technical writer colleagues shared the same title of MTS as us developers. I led the small team which developed Symbolics' document development system Concordia, and its document formatter -- which btw was contemporary with TeX and drew upon Knuth's published paragraph and equation layout algorithms. Also I personally inspected every pixel of every character (and all the pixels between) in every font used in Symbolics documentation. In a side-by-side comparison, Symbolics docs would look superior to something from 1986 TeX. One might see how even Prof. Knuth himself wished that Computer Modern had had the attentions of a professional font designer.

Finally, the paper itself explained that such in-house efforts were all of a piece with the rest of the company's engineering ethos.


Leaving aside ad hominem remarks... I do not criticize Symbolics at all, or the documentation it (or you) produced. I do criticize the authors; either they were Lisp apologists themselves, or allowed themselves to be misled by Lisp apologists.

And, they kerned badly. I leave to you which was the greater offense.


There’s no assertion of inventing virtual memory - only that the platform had it as a feature. And as for "novelty": yes, it was novel for personal, even minicomputer, systems at the end of the 1970s, when they were conceived.

Also you’re confusing Symbolics software (post LM2) with the very barebones MIT CADR release. It was far better than that and the closely related LMI release. RMS rantings on this subject have only the most tenuous connection to reality (much like their originator ;).

Finally, Symbolics online hypertext docs (via Document Examiner) really were great. When one is attempting to expand the market for a powerful, yet esoteric system, good docs make that system more approachable. Sadly I think they really did run out of Lisp hackers as an audience but they did try.


What was invented though was a garbage collector tuned for virtual memory.


Let us stipulate that the docs were outstanding. The syllogism, then, dictates that the code wasn't important.

An independent measure may be derived from the value placed on the code when the company went into receivership, and from how widely it found use, afterward, detached from its overpriced host machine.


They were both important and at least for a while, Symbolics had the resources to fully indulge both. Do crappy Google docs assure their ubermenschitude? :)

It’s two different questions, to discuss the quality of their code and to discuss the value placed on it. It didn’t match the development and operational model of the tech world when they went into receivership, hence a low value.


Bad code and unimportant code can be as badly documented as (good or bad) important code, and often are.

Initial quality of code is largely independent of its importance, although important code improves. I am happy to stipulate that theirs was admirable code, besides being admirably documented. Still: the implication is that few people ran it, and any consequences of running it faded quickly.

When there's a choice of only two of good, important, or well-documented, a well-run organization chooses the first two.


C/C++ is just referring to two things. "In the NFL/NBA, median player salary is greater than $700,000."

If you're not giving them this, I don't expect you're reading the rest with a very charitable interpretation. And variable sized arrays weren't really supported in C/C++ until C11/C++11, and this was written in 2001.


Variable sized arrays are supported all the way back to 1978 K&R C and earlier, with dynamic allocation and pointer arithmetic. You just don't have nice syntax for managing them. You can implement a decent Lisp in C.
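Roughly, something like this is all it takes - a minimal pre-C99 sketch of the point, with nothing specific to any particular compiler:

  /* Run-time-sized "array" in C89: allocate with malloc, index through a
     pointer. Only the nice declaration syntax is missing. */
  #include <stdio.h>
  #include <stdlib.h>

  int main(void)
  {
      size_t n = 1000;               /* size decided at run time */
      size_t i;
      double *v = malloc(n * sizeof *v);

      if (v == NULL)
          return 1;
      for (i = 0; i < n; i++)        /* v[i] is just pointer arithmetic */
          v[i] = (double) i;
      printf("%g\n", v[n - 1]);
      free(v);
      return 0;
  }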

The bullshit about C/C++ is immediately preceded by a claim that everything in "LISP" is represented by lists at the lowest level, which is in contrast with other languages that have fixed arrays, like C/C++.

The people who wrote this garbage paper were not properly familiar with Lisp, C or C++.


C++ was not a force in 1987, so it is decidedly peculiar to mention it at all. And, even in 1987, malloc and alloca were both available everywhere. So, no, this is propaganda; not important by itself, except insofar as it unmasks the authors' biases.

Annnnd, C99 VLAs as such were not in C++11 at all, and were removed ("made optional") in C11, recognized as a mistake. That does not, of course, mean that C or C++ functions' array arguments were ever restricted to a fixed size.


> In C/C++, the dimensions of an array must be declared at the beginning of a function)". First, no opening "(". What kind of Lisp hacker does this? Second, both in C and in C++ functions can (of course) take variable-sized arrays. (And, there is no such language as "C/C++").

In c, an array has fixed size and dimensionality. You can, of course, create your own data structure using a pointer which allows you to access an unbounded number of objects; but that, in c parlance, is not an array.

And in c89, all variables must be declared at the beginning of a lexical scope (though not necessarily the beginning of a function).
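A small illustration of both points, assuming a strict C89 compiler (the function and names are made up for the example):

  #include <stdlib.h>

  void f(int n)
  {
      int fixed[10];    /* OK in C89: the size is a constant expression  */
      int *dyn;         /* all declarations must precede the statements  */
      /* int vla[n];       not C89: n is not a compile-time constant     */

      dyn = malloc(n * sizeof *dyn);
      if (dyn != NULL) {
          int inner = 0;   /* a nested block opens a new scope, so a
                              fresh declaration is allowed here          */
          dyn[0] = fixed[0] = inner;
          free(dyn);
      }
  }

  int main(void) { f(8); return 0; }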


C supports variable-length arrays since C99[1][2]. Alas, Lisp Machines preceded C99.

[1]: https://en.wikipedia.org/wiki/Variable-length_array#C99

[2]: https://en.cppreference.com/w/c/language/array
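For concreteness, a minimal C99 sketch (assumes a compiler with VLA support, e.g. gcc -std=c99):

  #include <stdio.h>

  /* VLA parameter: the dimension is the run-time value n */
  double sum(size_t n, double a[n])
  {
      double s = 0.0;
      for (size_t i = 0; i < n; i++)
          s += a[i];
      return s;
  }

  int main(void)
  {
      size_t n = 16;       /* could just as well come from input        */
      double a[n];         /* variable-length array, automatic storage  */
      for (size_t i = 0; i < n; i++)
          a[i] = 1.0;
      printf("%g\n", sum(n, a));
      return 0;
  }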


Yes, but they're optional as of c11.


> an extra instruction in the final 386 architecture to facilitate garbage collection.

Despite the tone problems of the post I'm replying to, I'm honestly curious about this: Is there such an opcode? I can't think of one, and I like to think I know things about x86 ISAs, despite all the dark corners that architecture has. Are they, somehow, confusing it with the iAPX 432, as unlikely as that sounds?

Anyway, here's the claim in context:

> (Interestingly, Symbolics at one time was working with Intel to build a development platform based on the 386, which led to the inclusion of an extra instruction in the final 386 architecture to facilitate garbage collection).


The only thing that comes to mind are the new 386 Bit Test instructions which could speed up manipulating tag bits in pointers and also any garbage collector bits.
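To make "tag bits in pointers" concrete, here is a hypothetical C sketch of that kind of bookkeeping - the tag layout and names are invented for illustration, and whether a compiler or runtime would actually emit BT for the single-bit test is an assumption, not something the 386 story confirms:

  #include <stdint.h>
  #include <stdio.h>

  /* Hypothetical layout: objects are 8-byte aligned, so the low 3 bits of
     a word can hold a type tag; GC mark bits live in a separate bitmap and
     are tested one bit at a time (the kind of single-bit test a bit-test
     instruction handles directly). */
  #define TAG_MASK   0x7u
  #define TAG_FIXNUM 0x1u                 /* invented tag assignment */

  static int is_fixnum(uintptr_t word)
  {
      return (word & TAG_MASK) == TAG_FIXNUM;
  }

  static int is_marked(const uint32_t *markbits, size_t obj_index)
  {
      /* test one bit in the mark bitmap */
      return (markbits[obj_index >> 5] >> (obj_index & 31)) & 1u;
  }

  int main(void)
  {
      uint32_t marks[4] = {0};
      marks[0] |= 1u << 5;                /* mark object 5 */
      printf("%d %d\n",
             is_fixnum(((uintptr_t)7 << 3) | TAG_FIXNUM),
             is_marked(marks, 5));
      return 0;
  }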


> They about lost me at page 10:

How did you get past "[o]n the lowest level, LISP represent [sic] all objects, even the expressions of the language itself as lists" right in the previous sentence?

> What kind of Lisp hacker does this?

Right.




