The PERQ Computer (graydon2.dreamwidth.org)
183 points by mpweiher 79 days ago | 102 comments



I scrounged one in ~1991 from the back corridors of Manchester Uni; fun machine. The CPU was a bitsliced design, common at the time, built from AMD chips; the microcode was loaded at boot (hence the different microcodes on the different OSes). It used a Z80 on the IO board to get things started and load the microcode off disc. The UI on PNX was pretty nice as well for a machine with 1MB of RAM. My PERQ 1 had its 14" belt-driven 27MB hard drive (early Seagate) with PNX on it - try fitting a Unix system with a GUI into that these days!


I was not aware that between the Alto and the Lisa there was the PERQ, a first commercial attempt, so thanks for pointing that out.

Of course, the deeper you dig anywhere, the more complexity gets unearthed, and the more fairly credit must be distributed across more clever engineers, diluting the "single genius" picture that movie makers, and sadly often journalists too, try to portray ("reality distortion field").

I would quite like a minimalistic b/w GUI like the one the PERQ had in the screenshot.

Leaving out all the transparency/rounded-corners nonsense, this should be bleeding fast, too, with today's graphics capabilities.

EDIT: typo fixed


There was the Lilith [1] too in the same timeframe. The PERQ and Lilith used AMD bitslice chips instead of the TI ones in the Alto.

[1] https://en.wikipedia.org/wiki/Lilith_(computer)


Lisp Machines from LMI and Symbolics, too.


while those gave a lot of attention to usability and certainly had graphics and windows, i don't think they really belong to the smalltalk/perq/lisa/macintosh lineage; they were sort of a parallel development track with different, less graphical, less event-driven conventions


They were among the first graphical workstations: large bitmap screen, disks, network, color option, mouse, window system, object-oriented software, fonts, GUI framework, ...

Before the Apple Lisa, the SUN-1, the Mac, ...

These are Lisp Machine screenshots from MIT, from 1980:

https://bitsavers.org/pdf/symbolics/LM-2/LM_Screen_Shots_Jun...

* a dual screen machine with an electronics design system

* Macsyma with plots

* Inspector, Window Debugger, Font Editor

* Music notation

* first tiled screen/window manager

* another electronic CAD tool

The UI was developed using Flavors, the object-oriented system for the Lisp Machine (message passing, classes&objects, multiple inheritance, mixins, ...).


But unrelated. Merely another machine from the same time.


i don't think they were unrelated; the groups developing them were in constant contact and had thick webs of mutual acquaintances. teitelman had grown up at mit before going off to parc to write interlisp, and parc was full of former arpa researchers

i do think they made noticeably (and fascinatingly) different choices not only in what they designed for their user interfaces but also what they thought of as a good user interface

other lispm-related links, other than the one lispm posted above:

https://github.com/lisper/cpus-caddr #Lisp machine #gateware for the MIT CADR in modern #Verilog. “Boots and runs.” By #lisper (Brad Parker)

https://metebalci.com/blog/cadr-lisp-machine-and-cadr-proces... #CADR #Lisp-machine #retrocomputing

at some point i came across a treasure trove of lispm screenshots showing many aspects of the genera ui, but apparently it's not in my bookmarks file

one quibble: i don't think flavors was around until quite a bit later, maybe 01979, so the lispm software wasn't oo until quite late. weinreb and moon's aim-602 on it wasn't until 01980: https://apps.dtic.mil/sti/pdfs/ADA095523.pdf and of the five keywords they chose to describe it, one was 'smalltalk'


> i do think they made noticeably (and fascinatingly) different choices not only in what they designed for their user interfaces but also what they thought of as a good user interface

Though software written originally for the Xerox Interlisp systems got ported to the MIT Lisp Machine descendants, since that hardware was more capable, with more memory. For example, Intellicorp KEE (Knowledge Engineering Environment) was ported. It retained much of its original UI (which included extensive graphics features and an interface builder) when running on Symbolics Genera. It looks like one is using a Xerox Interlisp UI when running it on a Symbolics. https://vimeo.com/909417132 For example, at 08:30 the screen looks like an Interlisp system, even though it is Symbolics Genera.

Xerox PARC also had real MIT CADRs on their network. I remember seeing a photo of an office where a Xerox PARC employee had both an Interlisp workstation and (IIRC) a Symbolics. There is also a video from IJCAI 81 (International Joint Conference on Artificial Intelligence), with demos of an Interlisp system and an MIT Lisp Machine, both recorded at Xerox PARC.


this video is fantastic, thank you! i hadn't seen it before, only the ijcai demo


    https://github.com/lisper/cpus-caddr #Lisp machine #gateware for the
    MIT CADR in modern #Verilog. “Boots and runs.” By #lisper (Brad
    Parker)
It doesn't boot or work -- there are CDC issues and other stuff, plus it doesn't work on anything that is easily found (help wanted to get that stuff working again!). The current version of the FPGA CADR is at https://tumbleweed.nu/r/uhdl -- and CADR restoration and related work is at https://tumbleweed.nu/lm-3 .

    one quibble: i don't think flavors was around until quite a bit
    later, maybe 01979, so the lispm software wasn't oo until quite
    late. weinreb and moon's aim-602 on it wasn't until 01980 [...]
I wouldn't call it "quite late" -- it was only a 2-3 year gap from the system booting to very heavy usage (1977 - 1979); the Lisp Machine (specifically the MIT and LMI branches) survived for 10 more years after that.

Message passing was added on or around 03/09/79, and things got quickly adjusted and flushed after that.

    MMCM@MIT-AI 03/09/79 21:26:00
    To: (BUG LISPM) at MIT-AI
      Band 6 on CADR1 is a new experimental world load.
    In addition to quite a few accumulated changes, this load
    has some important changes to the message passing stuff.
      CLASS-CLASS now has three instance variables where formerly
    it had two.  Most importantly, instead of the SELECT-METHOD
    being in the function cell of the xxx-CLASS symbol (typically) there is now a
    gensym (called the CLASS-METHOD-SYMBOL) created for each instance
    of CLASS-CLASS which holds the select-method.  The CLASS-METHOD-SYMBOL
    for a given class can be obtained by sending the class a :CLASS-METHOD-SYMBOL
    message.
      A forthcomming essay will describe the motovation for doing this,
    etc.  For the present, I note some incompatibilities and changes to
    operating procedure which this makes necessary.
     Incompatibilites: 
      (1)  The <-AS macro has been changed, and any file which uses it
         must be recompiled.  If this is not done, the symptom will usually
         be  a xxx-CLASS undefined function error.  
      (2)  Any other file which uses DEFMETHOD should eventually be recompiled.
         There is a "bridge" which causes old files to win for the time being,
         but this will eventually go away.
    Since new QFASL files wont work in old loads, its probably a good
    idea to rename your old QFASL files if you compile with the new load.
    
      When you do a DEFCLASS, you always get a fresh class with no methods
    defined.  If you are redefining a class this means two things:
      instances of the old class will continue to work
         (this was not always true before.  However, we still have the
         problem that method names can conflict between new and old versions
         of the class).
      all the methods used by a class need to be reloaded for new instances of the
         new class to win completely.  Also subclasses of the given class
         need to be reloaded.
    
      Since reloading is a gross pain, you should not redo DEFCLASSes
     unless you really mean it.  Probably DEFCLASSes should be put in a
     separate file so the rest of the file can be reprocessed without
     redoing the DEFCLASS.  Hopefully,
     there will eventually exist system features to make this more convenient.


thank you very much for the correction!

01979 is still a lot later than smalltalk-72 or the original cadr design from 01974 (https://www.researchgate.net/publication/221213025_A_LISP_ma... says 'several suitable [semiconductor memory] devices are strongly rumored to be slated for fourth quarter 74 announcement' so maybe they didn't actually get it working until 01975 when they ported emacs to it)


Where does the DEFCLASS stuff originally come from? Is it documented somewhere? As I understand it, it is from before Flavors? Did Flavors then replace it, or is there some deeper relationship?


It comes from Smalltalk. Flavors is probably older than DEFCLASS; ZWEI and the window system were already using Flavors heavily before DEFCLASS became part of the main system.

Neither Flavors nor DEFCLASS got replaced -- they have different purposes, though one can often be used in place of the other.


Likewise, the PERQ is largely unrelated to the Smalltalk / Interlisp-D systems from Xerox, to the LISA, or to the early Mac. It is somewhat related to the later NeXT (founded 1985, first release in 1988) -> Mach/Unix, ...

The PERQ was just in the timeline between the Alto and the LISA/Mac. But there isn't much of a relationship to the latter two, and not much to the Alto. The Alto was used to write stuff like its OS in BCPL, the office software in Mesa, Smalltalk, Interlisp, ... but AFAIK no UNIX and no PASCAL system. The PERQ did not run Alto software.

Background information about the PERQ history: http://bitsavers.informatik.uni-stuttgart.de/pdf/perq/RD_Dav...


Also lots of influences: all the stuff Smalltalk influenced; Mesa influenced Modula-2; Cedar influenced Oberon and Modula-3.

Followed by all the languages those ones influenced as well.


And all that is independent of the PERQ.

The article states (and it makes sense) that the PERQ had influence on the modern Mac, i.e. the Macs running Mac OS X and later, since NeXT and then Apple used UNIX on a Mach kernel, the latter being a direct influence from CMU.

Influence on the LISA or the early Mac (before Mac OS X)? Not so much... those were influenced directly by Xerox PARC ideas, people and technology.

Related to the PERQ: the SPICE project at CMU, using PERQs, was also developing SPICE Lisp, which was influenced by the Lisp work at MIT, including an EMACS variant influenced by the ZWEI editor from the Lisp Machine. SPICE Lisp on the PERQ evolved into CMU CL and later SBCL.


Sure, my point was about Xerox's influences in general.


i'd be surprised if the lisa folks had no knowledge of the perq. i mean it was exhibited at trade shows, wasn't it?


Probably. They may have seen such a machine. But both the LISA and the Mac were very different in UI, software and hardware. Steve Jobs' vision was eventually to get to the Dynabook (-> MacBook, iPad). The LISA and the early Mac were stepping stones...

But the Xerox PARC -> Apple route is broad and well documented. Steve Jobs got a demo of the secret Xerox stuff in exchange for some Apple stock, including the Smalltalk system (though the Smalltalk team was less than happy about it). He hired Larry Tesler. Apple was also one of the selected companies that got Smalltalk-80 access, and ported it to their machines (later the Apple sources were open-sourced). Steve was also interested in the office domain; desktop publishing was one of the first applications of the Mac (incl. PostScript, the LaserWriter, Aldus PageMaker, ...).


i would rather say that steve jobs's vision was to make the dynabook impossible; the central feature of the dynabook is that it is a tool for computational thinking for the masses, while the central feature of most of the designs of the projects jobs led (mac 128, imac, ipod, iphone, ipad—though not next) is that they're closed to the users, requiring special tools to open and modify that are only available to a chosen elite, and nowadays, even to write software for. it's a vision of society that sees it primarily as a market for products, not a community of intellectual collaboration or autonomous individual flourishing, which was the intent of the dynabook

the story of smalltalk at apple is fairly complex

apple did also implement smalltalk-80 on the lisa, and there's a screenshot of that in the smalltalk-80 book (01984), but it was apparently never released, even as proprietary software. apple shipped a macintosh smalltalk in august 01985 (according to http://basalgangster.macgui.com/RetroMacComputing/The_Long_V...) but i don't think squeak (the apple smalltalk that eventually became open-source) was derived from it

alan kay didn't join apple until 01984 (the year after jobs hired sculley as ceo), jobs left in september 01985, and squeak wasn't announced until 01996, which is about the time kay left, and i think squeak as a whole departed for disney at that point. jobs came back in 01997, and though he didn't kill squeak, he did kill hypercard, the other main environment for individual computational autonomy. jobs's apple did eventually make squeak open-source—in 02006!

jecel assumpção in https://news.ycombinator.com/item?id=23392074 explains that the reason for finally open sourcing squeak a decade after the project ended was that olpc (itself fairly locked down, though less so than iphones) demanded it. also, though, he does heavily imply that it descended from the original apple smalltalk


Thank goodness for Jean-Louis Gassée and his advocacy for user-serviceable, expandable Macs, starting with the Macintosh SE and Macintosh II in 1987. I have strong respect for Steve Jobs, but I agree with Jean-Louis Gassée on this issue, and my favorite Macs are user-serviceable. Even with Steve Jobs, the NeXT was far more open than the original Macintosh, though this may have been out of necessity since NeXT competed in the workstation market against companies like Sun and HP. Also, when Steve Jobs returned to Apple, Apple maintained a Power Macintosh/Mac Pro with internal expansion slots and user-serviceable RAM and storage throughout the rest of Steve Jobs’ life. Even the rest of the lineup was user-upgradable, even if it meant dealing with a lot of screws (like certain laptop models in the 2000s).

It wasn’t until Tim Cook took over that Macs became more locked-down in terms of user-serviceability and expandability, culminating with the switch to ARM, where Apple sells no Macs with user-upgradable RAM anymore.

Had Apple's leadership been more focused in the "interregnum" years of 1985-1996, we could be using Dynabooks running some sort of modern Dylan/Common Lisp machine architecture with a refined Macintosh interface. Apple had all the pieces (Newton's prototype Lisp OS, SK8, Dylan, OpenDoc, etc.), but unfortunately Apple was unfocused (Pink, Copland, etc.) while Microsoft gained a foothold with DOS/Windows. What could've been... My dream side project is to make this alternate universe a reality by building what's essentially a modern Lisp machine.


I also would like an alternative universe where those Apple technologies succeeded; however, we also have to remember that by 1996 there wasn't much Apple left, and it was a matter of luck that NeXT's reverse acquisition worked out as well as it did, for where Apple is 30 years later.


> my favorite Macs are user-serviceable

Actually, that ship has sailed. The M1 MacBook Air was a big step up from any prior "user serviceable" Mac. It's portable, fast, extremely efficient, lightweight, robust and totally silent. Upgrading RAM has mostly been a non-issue. The Symbolics Genera emulator on the M1 runs roughly 80 times faster than the hardware Symbolics board in my Mac IIfx. That hardware was fragile and expensive. I'm much happier now, given that this stuff runs much better.


I love the power of Apple's ARM Macs, and at work I always choose a Mac when given a choice between a Mac and a PC running Windows. I love my work-issued MacBook Pro. However, for my personal equipment, it's really difficult for me to stomach paying Apple's inflated prices for RAM upgrades beyond their defaults (8GB won't cut it for my workloads, and even 16GB sometimes feels cramped), and because the RAM is soldered, I have no choice but to either accept the default or pay inflated prices for more RAM. Thus, after 16 years of buying Macs for home use, I switched away from the Mac a few years ago; I have a Ryzen 9 build as my daily-driver desktop and a Framework 13 as my laptop. My Framework has 32GB of RAM and I can upgrade to 64GB at any time. I admit that I miss macOS at times, but WSL has dramatically improved the Windows experience for me.

I loved my 2006 MacBook. It was lightweight for the time, and it was remarkably easy to make RAM and storage upgrades. I also enjoyed my 2013 Mac Pro, which I purchased refurbished in 2017. While it didn’t have expansion slots, I did upgrade the RAM during the pandemic from 12GB to 64GB, which was wonderful!


Squeak originally made use of the Smalltalk-80 image; that is how it descends from real Smalltalk, versus the other Smalltalk vendors that are still around (I'm not sure how they created their versions).

This also applies to Pharo, at least for the initial versions as they forked out of Squeak.

Clascal came to be because Smalltalk was too demanding for the Lisa's hardware.


jecel in the comment thread i linked implies that there was more of a relationship than that, although of course none of the code in the object memory and bytecode interpreter could literally be the same because it was in a different programming language


There is the point of them coming up with Clascal, and then Object Pascal (in collaboration with Niklaus Wirth), because Smalltalk-80 was never going to execute at acceptable speed on Lisa and Mac hardware.


They already used PASCAL for the Lisa as its systems programming language. Most of its software at that point was written in Lisa Pascal. Apple worked with Wirth on adding/improving object-oriented programming constructs. Clascal was an extension to the already widely used Lisa Pascal and was used for an OOP framework: http://pascal.hansotten.com/apple-lisa-pascal/

I don't think there was ever a move to use Smalltalk in Apple products, anyway -- besides a pre-product version of Apple Smalltalk-80 itself, which was available for a short time.

Eventually PASCAL (using BEGIN/END) also lost out at Apple to the curly braces of C/C++/Objective-C/Java/JavaScript/Swift.


Maybe not the ones listed above, but Xerox PARC's Interlisp-D is another matter.


i feel like even interlisp-d, especially the 01970s version, has a pretty different style of interaction than smalltalk and the macintosh. i know a lot less about the perq


Definitely. Interlisp and Smalltalk were early on mostly driven by research (Interlisp -> everything AI; Smalltalk -> UI research, OOP, AI) and specific early applications. The Mac UI was, early on, more direct-manipulation oriented.

One thing to keep in mind: the UI state of the art was evolving fast, and applications under some of these systems might have UIs different from the underlying operating system. That was also true on the Mac: HyperCard had a look & feel very different from the underlying Mac OS.

For example, Xerox developed "The Analyst" in Smalltalk-80 for the CIA: http://www.bitsavers.org/pdf/xerox/xsis/XSIS_Smalltalk_Produ...

I would think that NoteCards (written by Xerox in Interlisp-D) had similar customers and that there might also be some joint UI influence.


yeah, ui stuff was changing extremely fast. shneiderman's keynote where he introduced the term 'direct manipulation' wasn't even until 01982. his canonical examples of what he meant by 'direct manipulation' were, in order, emacs, visicalc, zooming in and out of gis data or a conceptual 2-d information space with a joystick, pong, missile command, space invaders, light-pen-driven cad/cam systems (into which category he shoehorns newspaper page layout and process control dashboards in continuous-flow plants), driving a car, recording and replaying robot motions, zloof's 01975 query by example, and finally, at the end, “advanced office automation systems” like the xerox star and ibm’s pictureworld

in this context, it's amusing that p.6/36 of that scan you linked cites user interface uniformity as a key advantage of smalltalk: 'the environment's window facilities consistently adhere to a small number of user interface conventions that are quickly learned by casual and experienced users alike.'

[39]: https://dl.acm.org/doi/10.5555/2092.2093 "The future of interactive systems and the emergence of direct manipulation, by Ben Shneiderman, originally presented as the Keynote Address at the NYU Symposium on User Interfaces, 01982-05-26–28, then published with numerous typographical errors in 01982, Behaviour & Information Technology, 1:3, 237-256, DOI 10.1080/01449298208914450"


Excellently written history on a period of time I am fascinated by.

However, I think the author puts too fine a point on the literal geographic position of the technology, and not on the historical and material forces that manifested it. Obviously not every computer advancement occurred in sunny Palo Alto directly (just reading where your device was "assembled" will tell you that). But even in this article, which tries to highlight the other places where all of this was going on, the author cannot escape the massive forces coming out of the Bay Area. This is most obvious when the author has to mention Xerox PARC but does not interrogate _why_ Xerox chose that of all locations to let them start a "wild unsupervised west-coast lab".

https://en.wikipedia.org/wiki/Augmentation_Research_Center

Very much a personal nitpick on a very well written entry, so I hope this doesn't come off as overly negative.


The Computer History Museum's long form interview with Avie Tevanian is a good resource for this era.

https://www.youtube.com/watch?v=vwCdKU9uYnE


Thanks, you just increased my TODO list. :)


It is a great interview. What I love about these stories is the revelation of the human effort to develop a threading system for I/O that is not an operating system; all the operating systems ride on top of the universal threading.

Imagine all the human hours, stretching back decades, spent trying to develop this single model of computing.

I'm starting to believe that oral history and tradition is what moves the world along. All the written texts are transient. What we pass directly to each generation is our continuity of culture.


other prominent multithreaded cpus have included the lincoln lab tx-2 on which the first graphical interface was developed (with cad, constraint programming, and windows), the cdc 6600 'peripheral processor', the xerox alto, the tera mta, and the higher-end padauk microcontrollers including the pmc251, which cost 10½¢ https://www.lcsc.com/product-detail/Microcontroller-Units-MC...

some current intel and amd parts also support 'hyperthreading', but as i understand it they sometimes run many instructions from the same thread sequentially, unlike the others mentioned above (except the padauk pmc251), and they are limited to 2 or 4 threads, again unlike the others mentioned except the pmc251

i'm a little unclear on the extent to which current gpu hardware supports this kind of every-clock-cycle alternation between different instruction streams; does anyone know?
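to make the every-clock-cycle alternation concrete, here's a toy c sketch (my own invention, not modeled on any of the machines above): one shared execution loop that issues one instruction from a different thread context on every cycle, round-robin, the way a barrel processor does

    #include <stdio.h>

    #define NTHREADS 4                /* hardware thread contexts */

    struct ctx { int pc; long acc; }; /* per-thread state: program counter + accumulator */

    /* four toy "instruction streams"; each instruction just adds a constant */
    static const int program[NTHREADS][3] = {
        {1, 2, 3}, {10, 20, 30}, {100, 200, 300}, {7, 7, 7},
    };

    int main(void) {
        struct ctx t[NTHREADS] = {{0, 0}};
        for (int cycle = 0; cycle < 12; cycle++) {
            int tid = cycle % NTHREADS;      /* a different thread every cycle */
            struct ctx *c = &t[tid];
            c->acc += program[tid][c->pc];   /* "issue" one instruction */
            c->pc = (c->pc + 1) % 3;
            printf("cycle %2d: thread %d acc=%ld\n", cycle, tid, c->acc);
        }
        return 0;
    }

the point is that a stall in one stream never blocks the others, since no stream issues on consecutive cycles anyway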


Never heard of the Padauk; very curious to dig into the details. Thanks for posting.


sure! i hope you enjoy it! there was a lot of discussion of them about five years ago: https://jaycarlson.net/2019/09/06/whats-up-with-these-3-cent... https://cpldcpu.wordpress.com/2019/08/12/the-terrible-3-cent...


I really like the context switching. I spend a lot of time trying to think about big universal circuits for a 100 computer. The context switching provides a universality to the processing that I find irresistible.

I want to make big furniture-size circuits for living environments. This family of chips represents about the most complexity I want to consider. I could have the largest circuit create symbols through a busy-board interface. The symbols would be understood at a human level and could also be monitored by more complex computing processes.


that sounds fascinating!


Thank you. I hope I can get it done.


Agreed; unfortunately too many don't pay attention to our short computing history, and the pendulum keeps swinging back and forth, while some cool technologies keep failing to gain adoption, only to be reinvented in a worse way.


I used one in the mid-eighties; the SERC scattered them around British unis. The vertical hard drive had a weird sparky engine when it spun, and it used graphics RAM to compile, so it scribbled over the display while compiling C.

I used its animated-icon tool "cedra" to make Tintin's Captain Haddock blow smoke out of his ears.

We had the ICL JV one. A beauty in reddish brown and cream. Made outside Edinburgh, near Dalkeith, I believe.


Using the graphics RAM that way is a neat optimization.


There was a third-party software tool (the name of which I forget) that used the same graphics-memory-as-scratch-space trick when copying floppies on early 128k (and probably 512k) Macs. This reduced the number of swaps required to copy a 400k floppy.
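For the curious, the trick amounts to something like this C sketch (entirely illustrative: the track geometry and the read_track()/write_track()/prompt_swap() routines are invented stand-ins, not any real Mac or PERQ API):

    #include <stddef.h>
    #include <stdint.h>

    #define TRACKS    80
    #define TRACK_LEN 5120   /* 400k disk = 80 tracks x 5120 bytes */

    extern void read_track(int n, uint8_t *buf);        /* hypothetical driver calls */
    extern void write_track(int n, const uint8_t *buf);
    extern void prompt_swap(const char *which);         /* "insert source/destination disk" */

    /* The bigger the scratch buffer, the more tracks staged per pass and
       the fewer disk swaps needed; pointing part of it at the framebuffer
       is what buys the extra capacity, at the cost of garbage on screen. */
    void copy_floppy(uint8_t *scratch, size_t scratch_len)
    {
        int per_pass = (int)(scratch_len / TRACK_LEN);
        for (int base = 0; base < TRACKS; base += per_pass) {
            prompt_swap("source");
            for (int i = 0; i < per_pass && base + i < TRACKS; i++)
                read_track(base + i, scratch + (size_t)i * TRACK_LEN);
            prompt_swap("destination");
            for (int i = 0; i < per_pass && base + i < TRACKS; i++)
                write_track(base + i, scratch + (size_t)i * TRACK_LEN);
        }
    }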


I think it's more "we only have a megabyte of RAM; in some cases we'll just have to use the part the framebuffer is in". The Scavenger (its equivalent of the fsck filesystem checker) did the same thing.


IIRC it was part of a VLSI CAD initiative, right? But I may be getting my history mixed up.


Yes, that's my memory. No VLSI at York at the time; we worked on the JANET coloured book protocols (me), the Ada compiler (other, far smarter people) and UNIX 32V (a very smart person) there.


There is some emulation available for the PERQ A1 in PERQemu: https://github.com/jdersch/PERQemu/tree/master/PERQemu

Someone also added the PERQ A1 to MAME in 0.192, but as of now it is still marked as MACHINE_IS_SKELETON.


We had at least one PERQ at the University of Waterloo in the early 1980s. A friend of mine was helping the local IT folks set it up - arranging to read a 9-track tape of PERQ's BitBlit software. A bunch of us wanted to see the machine first-hand, but lowly undergrads didn't have access to the lab room. But, wait ... is that acoustic tile above the door jamb, only half-way across? Gimme a boost ... skinniest guy goes up and over ... we can SEE ;-)


I have the strange feeling I'm going to end up seeing one of these today at the Midwest Vintage Computer Festival, though I've never heard of them before. Amazing stuff, thanks for sharing this!

I'm glad they didn't start out with only 128 K of RAM, that would have sucked.


As part of the SPICE project there was an implementation of Lisp on these machines. This implementation became CMU Common Lisp. CMU Common Lisp is still available, but it also served as the jumping-off point for Steel Bank Common Lisp (SBCL), which is today the top free Common Lisp implementation.

It's interesting that there's a heritage of code stretching all the way back to these old machines, although of course the changes since then have been massive.


I like that Three Rivers refers to the geography of Pittsburgh.

For a long time I did not know where SBCL got its name, until someone explained that Carnegie got his fortune from steel and Mellon ran a bank.


Note also that Spice Lisp for the PERQ was also used to implement Hemlock, an EMACS-like editor.

http://www.bitsavers.org/pdf/perq/accent_S5/Accent_UsersManu...


Can someone who used one comment on what GUI elements were actually in a PERQ?

I see windows and bitmap graphics in the screenshots I can find.

But I don't see menus, a desktop, standardized buttons, scroll bars, etc. In other words I don't see the hallmarks of the Xerox Star, Apple Lisa, and Macintosh. It looks influenced by the Xerox products but not as advanced.


you wouldn't see those in a screenshot of athena on x-windows circa 01994 either. the menus were all pop-up, so there was no menu bar, and the buttons weren't standardized or very recognizable (the xaw graphic design was abysmal, both aesthetically and usability-wise)

the only reason my x-windows desktop at that time would have recognizable buttons was that at first i was running mwm, which was from motif, osf's ugly but comprehensible effort to fight off open look. later i switched to athena's twm (uglier still but customizable), then the much nicer olvwm, then fvwm, which was similarly comprehensible but looked good


Chilton Computing: Single User Systems - Overview: https://www.chilton-computing.org.uk/acd/sus/

PERQ Reference Manual: http://www.vonhagen.org/perqsystems/perq-cpu-ref.pdf

PERQ Workstations: http://www.bitsavers.org/pdf/perq/RD_Davis/Davis-PERQ_Workst...

PERQ FAQ: http://www.vonhagen.org/perq-gen-faq.html

PERQ History -- Overview: https://www.chilton-computing.org.uk/acd/sus/perq_history/

PERQ Publicity: https://www.chilton-computing.org.uk/acd/sus/perq_pr/

PERQ System Users Short Guide: https://www.chilton-computing.org.uk/acd/pdfs/perq_p001.pdf

More PERQ notes (click "Further Reading" for more pages): https://www.chilton-computing.org.uk/acd/literature/notes/di...

PERQ Book: Contents: https://www.chilton-computing.org.uk/acd/literature/books/pe...

  1. Perq System Users Short Guide
  2. Perq files information
  3. Editor Quick Guide
  4. Perq Pascal Extensions
  5. Perq Pascal Extensions Addendum
  6. Perq Hard Disk Primer
  7. Perq Operating System Programmers Guide
  8. Perq QCode Reference Manual
  9. Perq Microprogrammers Guide
  10. Perq Fault Dictionary
  11. Installation Guide
  12. New PERQ Tablet and Cursor Interface
  13. System B.1 Stream Package
  14. Changes to Pix in System B.1
  15. Installation of POS Version B.1


Related:

https://en.wikipedia.org/wiki/PERQ

>"Processor

The PERQ CPU was a microcoded discrete logic design, rather than a microprocessor. It was based around 74S181 bit-slice ALUs and an Am2910 microcode sequencer. The PERQ CPU was unusual in having 20-bit wide registers and a writable control store (WCS), allowing the microcode to be redefined.[4] The CPU had a microinstruction cycle period of 170 ns (5.88 MHz).[5]"


A nit with TFA: the CPU board didn't “emulate” P-code; that was the native machine language. It was a “PASCAL machine” in the way we think of the Lisp Machine.

So the CPU board was all logic chips implementing the P-code machine language; it wasn't a CPU chip with supporting logic.

That gives you an idea of computing in the old days.

Back in the day PASCAL was the main teaching language at CMU.

(Edit) There seems to be some pushback on what I'm pointing out here, but it's true: the CPU board is not built around a CPU chip; they built a microcode sequencer, ALU, etc. to execute a P-code variant.

You can read about it here: http://bitsavers.org/pdf/perq/PERQ_CPU_Tech_Ref.pdf

Schematics here: http://bitsavers.org/pdf/perq/perq1/PERQ1A_Schematics/03_CPU...

Pic: http://bitsavers.org/pdf/perq/perq1/PERQ1A_PCB_Pics/CPU_top....


the cpu tech ref you linked documents a machine with 512 20-bit registers (256 architectural—they duplicated the register file to avoid using dual- or triple-ported memory, same as the cadr). p-code doesn't have registers. the microcode word format it documents uses 48-bit instructions. p-code instructions are typically about 8 bits wide. the cpu tech ref also doesn't mention pascal or p-code

based on this it seems reasonable to continue believing that, as graydon says, it ran pascal via a p-code interpreter, but that that interpreter was implemented in microcode

and i don't think it's accurate to say 'the cpu board was all logic chips implementing the p-code machine language'. the logic chips implemented microcode execution; the microcode implemented p-code

i agree that this is the same extent to which lisp machines implemented lisp—but evidently the perq also ran cmucl, c, and fortran, so i don't think it's entirely accurate to describe it as 'a pascal machine'


> i don't think it's entirely accurate to describe it as 'a pascal machine'

yes

It would only be accurate when one looks at BOTH the CPU and the microcode. The Xerox Interlisp-D machine was a Lisp Machine with the specific microcode. It was a Smalltalk machine and a Mesa machine - each with its own microcode.

The original MIT Lisp Machine was also microcoded, though I don't know of microcode other than the one for Lisp. The early Symbolics Lisp Machines were also microcoded, but again only for the Lisp OS, with the microcode evolving over time to support features of Common Lisp, Prolog and CLOS.

There were complaints that the microcode on the Lisp Machines was very complex, which contributed to the death of the machines. For example, here is an interview with Keith Diefendorff, who was also an architect for the TI Explorer Lisp Machine. In the interview he talks about the Explorer project and the microcode topic: https://www.youtube.com/watch?v=9la7398ruXQ


The MIT Lisp Machine microcode feels like writing in a modern 3-address RISC instruction set to me. I see a lot of reluctance to modify the source in old mailing lists; maybe everyone was told that it was too hard, so they didn't try.

EDIT: An example: the CADR has a nice API from Lisp to the CHAOSNET hardware; the microcode wakes up a stack group (thread) and passes it a buffer to process. Later machines had Ethernet, but there isn't any microcode support for that hardware; Lisp code just polls the status of the Ethernet controller and copies packets around a byte at a time. The microcode buffer-handling routines for CHAOSNET could have been reused for Ethernet.


The issue with the CADR (and Lambda...) microcode isn't that it is hard to modify; it is that there is a very deep snake pit, with lots of complex interaction between the microcode and the Lisp Machine system.

The CADR already had support for (pre-)Ethernet via microcode very early (~1979) and did it more or less the same way as for Chaosnet. The Lambda, I think, modified this quite heavily into something else...


It's fair to say that x86 or x86-64 instructions are also emulated by some lower-level machine which maps the small number of registers onto a larger register file and translates instructions into something else.


Not in the same sense that the typical minicomputer CPU used microcode, no.


i agree, but i also think it's fair to say that the i386 and amd64 instruction sets are the native machine code they implement. both of these contradictory points of view are useful oversimplifications of reality that are useful for certain purposes


This is an interesting discussion. First, it's true that they implemented a P-code variant called Q-code.

Second, I'm just making a distinction about what people refer to as emulation. Although you could change the microcode, that typically meant you had to reprogram the board. Microcode is typically inaccessible outside of the CPU. Microcode provides sub-operations within the CPU.


just to be clear, the microcode instruction set is not a p-code variant, and in the case of the perq, the microcode memory was volatile memory that had to be loaded on every boot, and could easily be loaded with custom microcode. you didn't have to burn new eproms or anything

i don't think we have any substantive disagreements left, we're just getting tangled up in confusing, ambiguous terminology like 'native' and 'emulation'


What you linked to doesn't seem to conflict with the article at all.

Article:

> [...] user-written microcode and custom instruction sets, and the PERQ ran Pascal P-code. Through a microcode emulator. Things were wild.

PDF:

> It will also prove useful to advanced programmers who wish to modify the PERQ’s internal microcode and therefore need to understand how this microcode controls the operation of the CPU, [...]

It sounds like it came with microcode that interpreted P-code, but that was user-changeable.

The "wild" part is doing p-code interpretation in microcode, instead of a normal program. See also https://en.wikipedia.org/wiki/Pascal_MicroEngine


I think we have a mild disagreement over what is meant by “emulation”. Typically this means the native instruction set is something other than what is being emulated.

There is microcode inside CPU chips today too; it is used to implement parts of the instruction set. That microcode is not typically accessible outside of the CPU, and it is not considered the native machine language (the instruction set).

The article you link to uses the word “emulator” once, to describe emulation on top of another system without this native support.


the 'microcode' inside cpu chips today is a totally different animal—it doesn't interpret the amd64 or other instruction set, but rather compiles (some of) it into the micro-operations supported natively by the hardware. but from the user's point of view the bigger difference is that the perq's microcode was accessible in the sense that you could write your own microcode and load it into the cpu. current popular cpus do have the ability to load new microcode, but that ability is heavily locked down, so you cannot control the microcode you are running

microcode became a popular implementation technique in the 01960s and fell out of favor with the meteoric rise of risc in the 80s

i think it's reasonable to say that an instruction set emulated by the microcode is a 'native instruction set' or 'native machine language'; it's as native as 8086 code was on the 8086 or lisp primitives were on the cadr. but in this case there were evidently several machine languages implemented in microcode, p-code being only one of them. so it's incorrect to say that p-code was the native machine language, it's incorrect to say that 'the cpu board was all logic chips implementing the p-code machine language', it's incorrect to say that 'they built a microcode sequencer (...) to execute a p-code variant', and it's incorrect to say 'they designed a board to execute p-code directly'


They did design the CPU board absolutely with the intention of the "user facing" instruction set being a bytecode, though. In particular there's hardware support for an opcode file of up to the next 8 bytes in the instruction stream, which gets auto-filled with a 64-bit memory load when it's empty. And there's a "256-way branch on the next byte in the opcode file" microcode instruction. The core of the thing is some standard AMD bitslice ALUs, but the board as a whole is clearly designed to let you implement a fast p-code interpreter.
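In software terms, that hardware corresponds to the inner loop of a table-driven bytecode interpreter. A rough analogue (my own sketch, with invented opcodes rather than real Q-codes, and assuming a little-endian host):

    #include <stdint.h>
    #include <string.h>

    typedef struct {
        const uint8_t *code;  /* the bytecode stream                     */
        uint64_t file;        /* "opcode file": up to 8 prefetched bytes */
        int      avail;       /* bytes still buffered                    */
        int32_t  stack[64];   /* toy expression stack                    */
        int      sp;
    } vm;

    static uint8_t next_byte(vm *m)
    {
        if (m->avail == 0) {              /* file empty: refill with one 64-bit load */
            memcpy(&m->file, m->code, 8);
            m->code += 8;
            m->avail = 8;
        }
        uint8_t b = (uint8_t)m->file;     /* consume the low byte */
        m->file >>= 8;
        m->avail--;
        return b;
    }

    /* The switch plays the role of the "256-way branch on the next byte
       in the opcode file" microinstruction; only three toy opcodes here. */
    int32_t run(vm *m)
    {
        for (;;) {
            switch (next_byte(m)) {
            case 0x01: m->stack[m->sp++] = next_byte(m); break;                /* push literal */
            case 0x02: m->sp--; m->stack[m->sp - 1] += m->stack[m->sp]; break; /* add top two  */
            case 0x00: return m->stack[m->sp - 1];                             /* halt         */
            default:   return -1;                                              /* undefined op */
            }
        }
    }

For instance, the 8-byte stream {0x01, 2, 0x01, 3, 0x02, 0x00, 0, 0} computes 2 + 3 and halts with 5 on top of the stack.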

The other thing the CPU board is designed for is fast graphics -- the rasterop hardware is set up so that with the right carefully designed microcode sequences it can do a 'load two sources, do a logical op and write to destination' as fast as the memory subsystem will let you do the 64-bit memory operations. It takes about four CPU cycles to do a memory operation, so you kick it off, do other microcode ops in the meantime, and then you can read the result in the microinsn that executes in 4 cycles' time. The rasterop microcode source code is all carefully annotated with comments about which T state each insn executes in, so it stays in sync with the memory cycles.

The other fun thing is that the microcode sequencer gives you a branch "for free" in most insns -- there's a "next microinsn" field that is only sometimes used for other purposes. So the microcode assembler will happily scatter the flow of execution all over the 4K of microcode RAM as it places fragments, ensuring that the parts that do need to go in sequence are in sequence and the parts at fixed addresses are at their fixed locations, and then fills in the rest wherever...
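A toy model of that "every word names its successor" scheme (the encoding is invented for illustration and is nothing like the real 48-bit PERQ microword):

    #include <stdio.h>
    #include <stdint.h>

    typedef struct {
        char     op;       /* toy operation: '+', '*', or 0 for halt    */
        int32_t  operand;
        uint16_t next;     /* address of the successor microinstruction */
    } uinsn;

    /* fragments deliberately scattered around the 4K store, as the
       microassembler would place them; order in memory is irrelevant */
    static const uinsn ustore[4096] = {
        [0x000] = {'+', 1,  0x7F3},   /* entry point            */
        [0x7F3] = {'*', 10, 0x042},   /* ...continues far away  */
        [0x042] = {'+', 5,  0x100},
        [0x100] = {0,   0,  0},       /* halt                   */
    };

    int main(void)
    {
        int32_t acc = 0;
        /* control flow follows the next field, not sequential addresses */
        for (uint16_t upc = 0; ustore[upc].op; upc = ustore[upc].next)
            acc = ustore[upc].op == '+' ? acc + ustore[upc].operand
                                        : acc * ustore[upc].operand;
        printf("%d\n", acc);          /* ((0 + 1) * 10) + 5 = 15 */
        return 0;
    }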


i see, thanks! i hadn't investigated it that deeply, and that definitely does sound optimized for implementing things like p-code interpreters (or 8080 emulators)

next-microinstruction fields are pretty common even in vertical microcode like this. do you have the microcode assembler running? have you written custom perq microcode?


I haven't turned my Perq 1 on in 20 years or so -- who knows if it would still run. I did play around with "what would microcode for a rot13 instruction look like" back when I was a student, but I didn't even try assembling it, let alone running it.


Except that the literature from Three Rivers Computing describes the native instruction set as "the P-code byte sequences that a compiler generates for an 'ideal' PASCAL (or other structured language) machine."

So I think we are quibbling, but it’s their words.


the literature you linked in https://news.ycombinator.com/item?id=41473755 (the cpu technical reference, bill of materials, and photo) doesn't say that, nor does it mention pascal or p-code. and the microcode instruction set documented in the cpu technical reference doesn't look anything like p-code. perhaps you're referring to some advertising materials?

i think we agree that it supports p-code as a native instruction set, but it's easy to draw incorrect inferences from that statement, such as your claim that the microcode sequencer executed a p-code variant. it would be reasonable inference from the literature you quote, but it's wrong


Fascinating. I started undergrad at CMU in 1996 and immediately got jobs doing computer support. I came across many old Macs and even an old VAX from the 1980s, but had never heard of a PERQ. By then all the Andrew machines were either HP Apollos running HP-UX or Sun SPARCstation 4s and 5s running SunOS or early Solaris.


The PERQs were on display at CMU around 1980; I remember seeing them in Science Hall (later Wean).

Fun fact: the CPU board ran Pascal P-code as a machine language. The CPU wasn't a chip; they designed a board to execute P-code directly.


not quite right. see graydon's post for the correct explanation


My sister won buggy at CMU with PIKA in 1984...


Was she short, petite? That was preferable so the pushers could climb the hill!


Yes, PIKA designed the buggy around her dimensions.


My feeling was that Andrew and SPICE were completely separate workstation projects at CMU, but only from using the software that came out of each of them.


Not completely, but it wasn't a huge place, and the communities overlapped, went to some of the same seminars, etc. It was all in Wean, pretty much clustered by floor. Andrew was very much about distributed computing à la Athena; SPICE was more along workstation lines (and what was then called AI). As I recall, internet access (IMP, when /etc/hosts was exhaustive) was only on the Vaxen and maybe some TOPS systems.

I had access to a PERQ in about 1985, and it was running Pascal firmware - at the time, Pascal was the baseline language for CS courses. I seem to recall it had a tiling WM and a mouse about the size of a softball. There were Altos upstairs, though I think they only acted as queues for the building's laser printers (which implemented some pre-PostScript page description language). But those were the days when 9600 baud was the norm...


They were; their main point of connection was at the OS level, as Accent on PERQ begat Mach, and Andrew originally ran atop CMU’s Mach+BSD environment (MK+UX). That let it take advantage of features such as Mach IPC and dynamic loading of shared libraries and plug-in modules. Later Andrew was ported to vendor operating systems, and then to run atop X11 instead of wm.


It's fascinating to me how, after forty years, we are still piecing together that genealogy like it's some ancient scripture. And keeping it scattered in blog posts and forum threads like this one.


i think 40 years ago it was pretty well-known; it's just the people it was well-known among were fairly few in number, because personal computers like the perq weren't yet a widespread cultural phenomenon. even well into the 90s, talking to your friends by typing text messages into a computer was still a 'geek' thing


It is funny how the 1970s computer industry was much more geographically inclusive than it is today. Heck, even IBM alone was more geographically inclusive than the industry is today.


"Was"? IBM headquarters are in a Armonk, NY, the nearby population centers are some 4000 and 12000 strong. Back when IBM was making Linux moves, I remember they were hiring in Poughkeepsie, a city of 32000. And Red Hat the linux vendor bought by IBM is headquartered in Raleigh, NC which a city of only half a million people.

https://en.wikipedia.org/wiki/List_of_the_largest_software_c...


shenzhen and tel aviv are further apart than boston and menlo park

or do you mean that shenzhen is closer to taiwan than boston is to menlo park? i wouldn't dismiss the importance of intel, nvidia, micron, berkeley, stanford, asml, samsung, apple, etc., just yet


I would point to this forgotten book from the 1970s

https://www.amazon.com/Dispersing-Population-America-Learn-E...

which describes a nearly universal perception in Europe at the time that it was a problem that economic and cultural power was concentrated in cities like Paris and London. (It takes a different form in the US, in that the US made a decision to put the capital in a place that wasn't a major center, just as most states did the same; so it is not that people are resentful of Washington, but rather of a set that includes it, New York, Los Angeles and several other cities.)

At that time there was more fear of getting bombed with H-bombs, but in the 1980s, once you had Reagan and Thatcher and "globalization", there was very much a sense that countries had to reinforce their champion cities so they could compete against other champion cities, so the "geographic inclusion" of Shenzhen and Tel Aviv is linked to the redlining of 98% of America, and to similar phenomena in those countries as well.

It is not so compatible with a healthy democracy, because the "left behind" vote, so you get things like Brexit, which are ultimately destructive; but I'd blame the political system being incapable of an effective response for these occasional spasms of anger.


interesting!

i'm not quite sure what you're saying about reagan and shenzhen


Things changed, and 1980 looks like an inflection point, although for China in particular the inflection point could have been when Nixon went to China in 1972.

Until then China was more aligned with Russia, but Nixon and Kissinger really hit it off with Mao and Zhou Enlai, and eventually you got Deng Xiaoping, who had slogans like "To get rich is glorious", and China went on a path of encouraging "capitalist" economic growth that went through various phases. Early on, China brought cheap labor to the table; right now they bring a willingness to invest (e.g. out-capitalize us) such that, at their best, they make investments like this mushroom factory, which is a giant vertical farm where people only handle the mushrooms with forklifts

https://www.finc-sh.com/en/about.aspx#fincvideo

(I find that video endlessly fascinating because of all the details, like photos of the founders using the same shelving and jars that my wife did when she ran a mushroom lab)

Contrast that to the "Atlas Shrugged" situation we have here, where a movie studio thinks it can't afford to spend $150M to make a movie that makes $200M at the box office (never mind the home video, merchandise, and streaming rights, which multiply that), which is the capitalist version of a janitor deciding they deserve $80 an hour.

This book by a left-leaning economist circa 1980

https://www.amazon.com/Zero-Sum-Society-Distribution-Possibi...

points out how free trade won hearts and minds: how the US steel industry didn't want to disinvest in obsolete equipment, which had harmful impacts on consumers and the rest of our industry. All people remember, though, is that the jobs went away

https://www.youtube.com/watch?v=BHnJp0oyOxs

That focus on winning at increased international competition meant that there was no oxygen in the room for doing anything about interregional inequality within countries.


i see. so you think that perhaps in 01974 a computer industry startup in pittsburgh had more of a chance than it would have today, 50 years later, and that there are many more places in the world that are like pittsburgh in that way than places that are like shenzhen? that is, that maybe in 01974 there were dozens of cities where you could launch something like perq, and now there are only ten or something, so it was 'much more geographically inclusive' 50 years ago, even though those ten cities are much farther apart now, and the dozens in 01974 were all in the usa?

i have two doubts about this thesis, i'm not sure there were dozens of cities where you could launch something like perq 50 years ago, particularly considering they went bankrupt in 01985, possibly because of being in pittsburgh. i also suspect there are dozens of cities where you could do it today—i don't think you have to be in a 'champion city' to do a successful hardware startup. it's a little hard to answer that question empirically, because we have to wait 10 years to see which of today's hot startups survive that long

but maybe we can look at the computer companies that are newly-ish big, restricting our attention to mostly hardware, since that's what three rivers was doing and where you'd expect the most concentration. xiaomi, founded 02010 in beijing. huawei, founded 01987 in shenzhen. tencent, founded 01998 nominally in the caymans but really at shenzhen university. loongson, founded 02010 in beijing. sunway (jiangnan computing lab), founded 02009 (?) in wuxi. dji, founded 02006 in hong kong, but immediately moved to shenzhen. allwinner, founded 02007 in zhuhai (across the river from shenzhen and hong kong). mediatek, founded 01997 in hsinchu, where umc is. nanjing qinheng microelectronics (wch, a leading stm32 clone vendor), founded 02004 in nanjing. espressif, founded 02008 in shanghai. rockchip, founded 02001 in fuzhou. nvidia, founded 01993 in silicon valley. alibaba (parent company of t-head/pingtouge), founded 01999 in hangzhou, which is also where pingtouge is. bitmain, founded 02013 in beijing (?). hisilicon, founded 01991 in shenzhen, which designed the kirin 9000s cpu huawei's new phones use and is the largest domestic designer. smic, which csis calls 'the most advanced chinese logic chip manufacturer' (someone should tell csis about tsmc), and makes the kirin 9000s, founded 02000 in shanghai. smee, ('the most advanced chinese lithography company' —csis) founded 02002 in shanghai. mobileye, founded in jerusalem in 01999. biren, the strategic gpu maker, founded 02019 in shanghai. moore threads, the other strategic gpu maker, founded 02020 in beijing. unisoc (spreadtrum), fourth largest cellphone cpu company, founded 02001 in shanghai. tenstorrent, founded 02016 in toronto. zhaoxin, the amd64 vendor, founded 02013 in shanghai. changxin (innotron), the dram vendor, founded 02016 in hefei. fujian jinhua, the other dram vendor, founded 02016 in, unsurprisingly, fujian, i think in fuzhou. raspberry pi, founded 02009 in cambridge (uk). ymtc, the nand vendor, founded 02016 in wuhan. wingtech, the parent company of nexperia (previously philips), founded 02006 in jiaxing. huahong, founded 01996 in shanghai. phytium, makers of feiteng, founded 02014 in tianjin. oculus, founded 02012 in los angeles. zte, founded 01985 in shenzhen. gigadevice, founded 02005 in beijing. piotech, founded 02010 in shenyang. amec, founded 02004 in shanghai. ingenic, founded 02005 in beijing. silan, founded 01997 in hangzhou. nexchip, founded 02015 in hefei. united nova technology, founded 02018 in shaoxing. cetc, the defense contractor, founded 02002 in beijing. naura, the largest semiconductor equipment manufacturer in china, founded 02001 in beijing.

maybe my sampling here is not entirely unbiased, but we have 7 companies in beijing, 7 in shanghai, 6 in shenzhen, 2 in hangzhou, 2 in hefei, 2 in fuzhou, and then one each in each of wuxi, zhuhai, hsinchu, nanjing, silicon valley, jerusalem, cambridge, wuhan, toronto, jiaxing, tianjin, los angeles, and shaoxing. that seems like 19 cities where you could plausibly start a new computer maker today, given that somebody did in the last 30 years. did the usa ever have that many in the 01970s?

to me this doesn't support your thesis 'countries had to reinforce their champion cities so they can compete against other champion cities [resulting in] the redlining of 98% of america and similar phenomena in those countries as well' so that 'focus on (...) international competition [promoted] interregional inequality in countries'. rather the opposite: it looks like winning the international competition and heavy state investment has resulted in increasingly widespread computer-making startups throughout china, mostly in the prc, while computer makers founded elsewhere in the last few decades mostly failed. i mean, yes, none of these startups are in xinjiang or mississippi. but they're not all in one province of china, either; they're distributed across hebei province (sort of!), jiangsu, anhui, hubei, zhejiang, fujian, israel, taiwan, california, england, shanghai, and guangdong. taiwan and 7 of the 22 provinces† of the prc are represented in this list. they tend to be the more populous provinces, though there are some provinces conspicuously missing from the list, like shandong and henan

guangdong: 126 million

jiangsu: 85 million

hebei: 75 million

zhejiang: 65 million

anhui: 61 million

hubei: 58 million

fujian: 41 million

shanghai (city): 25 million

beijing (city): 22 million

tianjin (city): 14 million

the total population of the above regions is 570 million

taiwan is another 24 million people, bringing the total to some 600 million. so this is not quite half the population of china. so while there is surely some 'redlining' going on in china, i don't think it's against 98% or even 60% of the population

that is, probably nobody is going to start a company in inner mongolia making photolithography equipment for new chip fabs, but inner mongolia only has a population of 24 million people despite being 12% of china's land area, bigger than france and spain together. it has no ocean ports because it's nowhere near an ocean. it's a long airplane flight away from where people are building the chip fabs, and you can't fit a vacuum ultraviolet photolithography machine under your airline seat, or possibly on an airliner at all. wuhai didn't get its first airport until 02003. so, though i obviously know very little, i don't think even massive state investment to promote an electronics industry in chifeng would be successful

especially if we add california, israel, england, and ontario to the list, i think it's clear that the computer industry today is far more geographically inclusive than it was 50 years ago

but it's possible i'm just not understanding what you mean. what metrics of geographic inclusivity would you suggest?

______

† the prc also contains 5 'autonomous regions', two 'special administrative regions' (hong kong and macau), and four direct-administered municipalities, of which three are on the list


https://bitsavers.org/pdf/perq/PERQ_Brochure.pdf - this has a price list for the computer: $19,200 in 1980s dollars.

One option stands out: "Memory Parity Option - $500". Ahh... how times don't change, with ECC RAM still being a premium feature.


Probably a more obvious choice at the time: we'd just gone through a period where the early releases of the latest memory generation (64K chips) had come with mysterious memory failures - eventually tracked down to natural alpha radiation from the ceramics used for packaging. We'd not long switched from core, and semiconductor memory was still new.


I remember seeing the PERQ at trade shows. The best thing about the PERQ was its monitor, which was unusually sharp for that era. It used a yellow-white long persistence phosphor. A CMU grad student friend told me that the monitor designer was “a close personal friend of the electron”, implying that the analog circuitry of the PERQ monitor was especially high quality.


Oh, the PERQ. I never saw one, although I came across most of the weird machines of that era. Lucasfilm bought a number of them. They were so unhappy with them that they ran a large display ad offering them for sale to get rid of them. There must be a story there, but I don't know it. Anyone remember PERQ at Lucasfilm?


screen capture of the present-day PERQ emulator running a demo originally shown at SIGGRAPH 1982:

https://imgur.com/gallery/3-rivers-computer-corporation-perq...


The predecessor to the "Blit" at Bell Labs was originally named the "Jerq", a rude play on "PERQ" borrowed by permission from Lucasfilm, and the slogan was "A Jerq at Every Desk".

Blit (computer terminal):

https://en.wikipedia.org/wiki/Blit_(computer_terminal)

>The folk etymology for the Blit name is that it stands for Bell Labs Intelligent Terminal, and its creators have also joked that it actually stood for Bacon, Lettuce, and Interactive Tomato. However, Rob Pike's paper on the Blit explains that it was named after the second syllable of bit blit, a common name for the bit-block transfer operation that is fundamental to the terminal's graphics.[2] Its original nickname was Jerq, inspired by a joke used during a demo of a Three Rivers' PERQ graphic workstation and used with permission.

https://inbox.vuxu.org/tuhs/CAKzdPgz37wwYfmHJ_7kZx_T=-zwNJ50...

  From: Rob Pike <robpike@gmail.com>
  To: Norman Wilson <norman@oclsc.org>
  Cc: The Eunuchs Hysterical Society <tuhs@tuhs.org>
  Subject: Re: [TUHS] Blit source
  Date: Thu, 19 Dec 2019 11:26:47 +1100 [thread overview]
  Message-ID: <CAKzdPgz37wwYfmHJ_7kZx_T=-zwNJ50PhS7r0kCpuf_F1mDkww@mail.gmail.com> (raw)
  In-Reply-To: <1576714621.27293.for-standards-violators@oclsc.org>

  [-- Attachment #1: Type: text/plain, Size: 890 bytes --]

  Your naming isn't right, although the story otherwise is accurate.

  The Jerq was the original name for the 68K machines hand-made by Bart. The
  name, originally coined for a fun demo of the Three Rivers Perq by folks at
  Lucasfilm, was borrowed with permission by us but was considered unsuitable
  by Sam Morgan as we reached out to make some industrially, by a company
  (something Atlantic) on Long Island. So "Blit" was coined. The Blit name
  later stuck unofficially to the DMD-5620, which was made by Teletype and,
  after some upheavals, had a Western Electric BellMac 32000 CPU.

  If 5620s were called Jerqs, it was an accident. All the software with that
  name would be for the original, Locanthi-built and -designed 68K machines.

  The sequence is thus Jerq, Blit, DMD-5620. DMD stood for dot-mapped rather
  than bit-mapped, but I never understood why. It seemed a category error to
  me.

  -rob
https://inbox.vuxu.org/tuhs/CAKzdPgxreqfTy+55qc3-Yx5zZPVVwOW...

  The original name was Jerq, which was first the name given by friends at
  Lucasfilm to the Three Rivers PERQ workstations they had, for which the
  Pascal-written software and operating system were unsatisfactory. Bart
  Locanthi and I (with Greg Chesson and Dave Ditzel?) visited Lucasfilm in
  1981 and we saw all the potential there with none of the realization. My
  personal aha was that, as on the Alto, only one thing could be running at a
  time and that was a profound limitation. When we began to design our answer
  to these problems a few weeks later, we called Lucasfilm to ask if they
  minded us borrowing their excellent rude name, and they readily agreed.

  Our slogan: A jerq at every desk.



