That's not really what went wrong with Symbolics. What actually went wrong is mentioned in the article, though: "expert systems" were never very useful.
Another reason that Lisp machines were in high demand (despite a general lack of results) was a "natural trajectory" phenomenon. AI, and especially expert systems, were seen as the future of computing. This was largely due to hype generated by Feigenbaum, a Stanford University professor. Tom Knight claims that Artificial Intelligence was oversold primarily because Feigenbaum fueled outrageous hype that computers would be able, for example, to replace medical doctors within 10 years. Additional hype was generated by the Japanese government's sponsorship of the "Fifth Generation" project. This project, essentially a massive effort by the Japanese to develop machines that think, struck a nationalistic chord in America. SDI funding was, in some ways, a means of hedging against the possible ramifications of Japan's "superior" AI technology.
I went through Stanford CS when Feigenbaum was hyping away. For the hype, read his book, "The Fifth Generation". Much of the Stanford CS department was convinced that expert systems would change the world, despite having almost nothing actually working. It was pathetic. Especially the aftermath, the "AI Winter", when I once saw Feigenbaum wandering around the building amidst empty cubicles, looking lost.
Symbolics had some big technical problems. The biggest was that they didn't use a microprocessor. They had a CPU built up from smaller components. So their cost was inherently higher, their hardware was less reliable, and they didn't benefit from progress in microprocessors. Once people got LISP compilers running on Motorola 68000 UNIX workstations, LISP machines were not really needed. Franz LISP on a Sun was comparable to using a Symbolics (we had both where I worked), and the refrigerator-sized Symbolics wasn't worth the trouble. Symbolics was also noted for having a poor maintenance organization. They had to send someone out to fix your refrigerator-sized machine; you couldn't just swap boards as on workstations.
Eventually Symbolics shrank their hardware down to reasonable size, but by then, nobody cared.
It was a cultural problem. Symbolics were really trying to build their own LISP-specific DEC-10-a-like, using DEC-style mini and mainframe computer design traditions, but at slightly lower cost.
There were a number of projects like that around at the time, including the Three Rivers/ICL PERQ, which was optimised for PASCAL, and arguably DEC's PDP-11 range, whose entire architecture was closely aligned with C and eventually C++, pointers and all.
These were all interesting machines without a future, because the early 80s were a crossover point when it turned out that model was too bloated to be sustainable. DEC scraped through the bottleneck with Alpha, but couldn't keep it together as a business. Meanwhile the 16-bit and early 32-bit architectures were eating everyone's lunch. SGI and Sun flared up in this space and died when their price/performance ratio couldn't compete with commoditised PCs and Macs - which happened sooner than almost anyone expected.
This is obvious now, but it wasn't at all obvious then. The workstation market and the language-optimised market both looked like they had a real future, when in fact they were industrial throw-backs to the postwar corporate model.
So it wasn't just the AI winter that killed Symbolics - it was the fact that both hardware and software were essentially nostalgic knock-offs of product models from 5-10 years earlier that were already outdated.
Meanwhile the real revolution was happening elsewhere, starting with 8-bit micros - which were toys, but very popular toys - and eventually leading to ARM's lead today, via Wintel, with Motorola, the Mac, and the NeXT/MacOS as a kind of tangent.
The same cycle is playing out now with massively accelerated GPU hardware for AI applications, which will eventually be commoditised - probably in an integrated way. IMO Apple are the only company to be thinking about this integration in hardware, and no one at all seems to be considering what it means for AI-enhanced commoditised non-specialised software yet.
Apple are giving it some thought, Google are thinking about it technologically, plenty of people are attempting Data Engineering - but still, the current bar for application ideas seems quite limited compared to the possibilities a personal commoditised integrated architecture could offer, because again current platforms have become centralised and industrialised.
There's a strong centrifugal and individualistic tendency in personal computing which I suspect will subvert that - and we'll see signs of it long before the end of the decade.
That it was a cultural problem is quite correct. Academic computer science was quite small in those days, and it was almost all DoD-funded. The big CS schools had PDP-10 machines, and the lesser ones had DEC VAXen.
In the 1980s, the commercial market in electronics and computers passed the DoD market, first in volume and then in technology. This was a real shock to some communities. There were complaints of "premature VHSIC (Very High Speed Integrated Circuit) insertion" from DoD, by which they meant the commercial market using stuff DoD didn't have yet. DoD thought they were in charge of the IC industry in 1980.[1] DoD had been the big buyer in electronics since WWII, after all. By 1990, DoD was a minor player in ICs and computing.
Symbolics was really a minicomputer manufacturer, building up CPUs from smaller parts.
They went down with the other mini makers - DEC, Prime, Data General, Interdata, Tandem, and the rest of that crowd. That technology was obsoleted by single-chip CPUs. Many of the others hung on longer, since they had established customer bases. But they were all on the way down by the late 1980s.
> Symbolics was really a minicomputer manufacturer, building up CPUs from smaller parts.
Up to 1987. In 1988 they switched to microprocessors.
> They went down with the other mini makers - DEC, Prime, Data General, Interdata, Tandem, and the rest of that crowd. That technology was obsoleted by single-chip CPUs.
Symbolics introduced their single chip LISP CPUs in 1988. That one was used in their workstations, boards for SUNs and Macs, and in embedded applications.
That's my Symbolics LISP Machine as a board for an Apple Macintosh Quadra:
"So it wasn't just the AI winter that killed Symbolics - it was the fact that both hardware and software were essentially nostalgic knock-offs of product models from 5-10 years earlier that were already outdated."
I really like this quote and perspective. I think it was a byproduct of the communications of the period - not today's ubiquitous connectivity.
Essentially it was a very small, even inbred community that talked mostly to itself. "Itself" including associated members at the likes of DARPA. And that was enough to get the funding (and closely related hype) ball rolling. There was little if any feedback from outside the crowd. Even the West Coast was another world to a certain extent.
I'm reminded of XKL - a Cisco founder going into business to (initially) produce modern PDP-10s in the early 90s. Because, you know, that's what the world (even that small inbred world) was waiting for.
> I'm reminded of XKL - a Cisco founder going into business to (initially) produce modern PDP-10s in the early 90s. Because, you know, that's what the world (even that small inbred world) was waiting for.
That was Cisco’s actual business plan — they just sold a few routers (from a design they’d developed for Stanford) to get some bucks in the door while they geared up to build the PDP-10 clone.
(Obviously they never pivoted back to the original plan.)
They were building what they could. The MIT CADR used the same chips as a DEC VAX-11/780; both were a generation after the PDP-10.
The first "toy" computer I had that could run Lisp well was the Atari ST, I ported Franz Lisp to it. Had an 8086 machine at the same time but it didn't have a big enough address space.
> The biggest was that they didn't use a microprocessor
The Symbolics Ivory microprocessor was introduced in 1988.
> Franz LISP on a Sun was comparable to using a Symbolics
Not really.
The Lisp alternatives to Symbolics came later with commercial systems like the TI Explorer and then Allegro CL, Lucid CL, LispWorks, Golden Common Lisp, Macintosh Common Lisp.
That was a culture thing. There were Big LISP people on PDP-10s and Little People using UNIX. Having used both, I found it easier to get things done in Franz LISP than in the rather overbuilt Common LISP systems of the era. Franz LISP was set up like a UNIX program - you edited text files in some editor, then ran the compiler and run time system. Common LISP systems of the era were one giant application you never left. This is also part of the source of the EMACS/vi split. Big LISP people used EMACS, and Little People used vi.
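Roughly, the cycle looked like this, if memory serves (liszt was the Franz Lisp batch compiler, lisp the runtime; the file name, function, and exact dialect details here are just made-up illustration and may be off):

    % vi fib.l        # edit the source like any other Unix file
    % liszt fib.l     # batch-compile it with the Franz compiler
    % lisp            # start the runtime and load the result

    ; fib.l -- an ordinary Franz Lisp source file
    (defun fib (n)
      (cond ((lessp n 2) n)
            (t (plus (fib (sub1 n))
                     (fib (difference n 2))))))

No image to save, no environment you lived inside - just files, a compiler, and a runtime, much like cc and a.out.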
GUI-based systems had been a thing at least since Smalltalk-80.
Generally, all kinds of IDEs were common even on the smallest machines. Lisp had nice IDEs on small machines, like the early Macintosh Common Lisp, which ran usefully in 4 MB of RAM on a Mac SE.
> found it easier to get things done in Franz LISP than in the rather overbuilt Common LISP systems of the era
Many others thought differently, and GUI-based Windows systems with IDEs won much of the market.
> I found it easier to get things done in Franz LISP than in the rather overbuilt Common LISP systems of the era.
Franz LISP was a dead end; it never made it to Windows as a product.
> Common LISP systems of the era were one giant application you never left.
Many of the CL systems of that era (which appeared in the mid-80s) could be used like Franz LISP, with just vi and a shell: CMUCL, KCL, Allegro CL, Lucid CL, LispWorks and many others.
Franz Inc. created Allegro CL, which ran bare-bones on Unix with any editor and shell, with GNU Emacs (via ELI), or additionally with its own IDE tools. It eventually also ran on Microsoft Windows, including a GUI designer.
That was later. I was doing this around 1980-1981, when the LISP options were much fewer, the Macintosh did not exist, Windows did not exist, and Symbolics machines were just becoming available. Franz Lisp was a good option. Common LISP came later, with many features from the Symbolics refrigerator, including their rather clunky object system.
Among other things, I ported the Boyer-Moore theorem prover to Franz Lisp. It started life in Interlisp on a PDP-10. I later ported it to Common LISP, and I have a currently working version today on GitHub, for nostalgia reasons. It's fun seeing it run 1000x faster than it did back then.
In 1980 Symbolics did not sell any machines; the company was not on the market at that time. The first handful of machines reached the market in late 1981, and they were literally the first of their kind. They were also mostly the same systems that MIT developed, sold as the LM-2. Almost no GUI-based machines of any kind were commercially (!) available at that time. Only about 80 (eighty) of those LM-2s were ever built and sold between 1981 and 1983. The first actual Symbolics design, the Symbolics 3600, reached the market in about 1983.
SUN did not sell anything in 1980/81. The SUN 1 came to the market in mid/late 1982 as a 68k machine with a SUN memory management unit.
Basically, in 1980/1981 there were no UNIX systems with a GUI on the market (i.e. commercially available) at all.
> including their rather clunky object system.
Common Lisp's object system was developed many years later. The first spec was published in 1988.
Franz LISP used the same object system as Symbolics. It shipped with an object system called Flavors.
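For reference, Flavors was message-passing style: methods belonged to a flavor, and you invoked them by sending keyword messages to an instance. From memory, the classic ship example looked roughly like this (exact option names and init-keyword syntax varied between implementations, so treat the details as approximate):

    (defflavor ship (x-position y-position)
               ()                              ; no component flavors mixed in
      :gettable-instance-variables
      :settable-instance-variables
      :initable-instance-variables)

    (defmethod (ship :distance-from-origin) ()
      (sqrt (+ (* x-position x-position)
               (* y-position y-position))))

    ;; invocation is by sending a message, not calling a generic function
    (setq my-ship (make-instance 'ship ':x-position 3.0 ':y-position 4.0))
    (send my-ship ':distance-from-origin)   ; => 5.0

CLOS later replaced the send-a-message style with generic functions, which is a large part of why the two feel so different.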