Apollo DN10000: Quad CPU/128Mb RAM workstation from 1988 [pdf] (rees.org)
171 points by stare_spb 3 months ago | 155 comments



Had the dubious joy of managing Domain/OS Unix children inside the Apollo virtualisation method of the day. Along with Nixdorf, these systems were all through the Civil Engineering and NMR facilities at the university campus. And, just like Nixdorf, they came with multiple open CVEs (the term of art didn't exist at the time, but let's ignore the anachronism), such as the games user, well-known passwords, root sendmail, you name it.

Nice enough CPU and memory. Not a very nice Unix experience, though: like HP-UX, it was complex, and a bugger to get C code building on if you came from BSD land (Ultrix, SunOS/Solaris).

Sockets felt like a badly coded bolt-on. They really wanted to pretend the internet protocol stack wasn't there.


> Not a very nice Unix experience, though: like HP-UX, it was complex, and a bugger to get C code building on if you came from BSD land (Ultrix, SunOS/Solaris).

Apollo DomainOS (Aegis) was technically better than Unix.

At some point (before my time), marketing must've decided they needed to be more Unix-compatible, and they did some clever things to also give it multiple Unix personalities (both SysV and BSD), while still keeping their original better personality.

One place you'd feel pain around then, on most any Unix platform, was in downloading "portable" code off the Internet. Even for code already made portable in some sense, you'd probably have to modify at least the Makefile, if not one or more C files, to select what portability you wanted, or to work around flaws in their portability. (SunOS 4 was my favorite for a while, and at least half the reason was that code I'd grab from an FTP site was usually easy to get running, because it was probably developed on SunOS or at least another BSD. Ultrix was also OK.)

As a teen, I managed a porting lab, so I got to sysadmin and own-time develop on all of them. Writing portable C code for the different Unix workstations, I'd have to do it in a subset of K&R C, plus a portability mechanism that I built as I went (for architecture differences, library differences, C compiler differences).
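
To make that portability mechanism concrete, here's a minimal sketch of the kind of shim header I mean -- the PORTAB_BSD name is mine, not from any real project:

    /* portab.h (illustrative): map SysV/ANSI string names onto
       their BSD equivalents so one body of K&R C builds on both. */
    #ifdef PORTAB_BSD                 /* e.g. 4.2BSD, SunOS 3, Ultrix */
    #include <strings.h>              /* BSD kept string routines here */
    #define strchr(s, c)    index((s), (c))
    #define strrchr(s, c)   rindex((s), (c))
    /* careful: bcopy is (src, dst, n); memcpy is (dst, src, n) */
    #define memcpy(d, s, n) bcopy((s), (d), (n))
    #else                             /* System V / ANSI-ish systems */
    #include <string.h>
    #endif
    /* portable code below sticks to the SysV/ANSI names */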

I do recall the Apollos being one of the more annoying Unix workstation platforms for C. I think they preferred safer languages for systems programming (again, they were ahead of their time), and maybe they didn't go all-in on developer ergonomics for C. Also, I guess the compiler probably predated ANSI C.


I've seen the term in passing, but what are "Unix personalities"?

Is it just different optional command line arguments to the common utilities? tar vs GNU tar?


It's a bunch of small things: command line flags, whether a command line tool is even present at all, compiler built-ins / differences in behavior, headers possibly being in different places, compiler support for various language standards and more.

Even to this day, it's not uncommon to find libraries that won't compile with one of GCC, clang, etc. or even the same compiler but Linux vs MacOS.

It was even worse in ye olde times before package managers, I'm assuming.

EDIT: I forgot to mention that System V and BSD are two of the major families.

Both influenced Unix-like OSes far and wide, such as SysV style init scripts in certain Linux distros, MacOS being derived partly from BSD, Solaris being a continuation of SysV IIRC, and more.

Within each family there was a rough standardization of where certain things could be found, command line flags, etc.


From an Apollo Domain OS Design document[0]:

"In order to provide maximum portability of software from Domain/ as to other Berkeley or System V UNIX systems, Apollo provides two complete and separate UNIX environments, rather than a hybrid of the two. Any workstation can have one or both UNIX environments installed, and users can select which environment to use on a per-process basis.

Two key mechanisms support this facility. First, every program can have a stamp applied that says what UNIX environment it should run in. The default value for this stamp is the environment in which it was compiled. When the program is loaded, the system sets an internal run-time switch to either berkeley or att, depending on the value of the stamp. Some of the UNIX system calls use this run-time switch to resolve conflicts when the same system call has different semantics in the two environments.

The other mechanism is a modification of the pathname resolution process, such that pathname text contains environment variable expansions.

[...]

When UNIX software is installed on a node, the standard trees (/bin, /usr) are installed under directories called bsd4.3 and sys5.3. The names /bin and /usr are actually symbolic links dependent on the value of an environment variable named SYSTYPE. That is, /bin is a symbolic link to /$(SYSTYPE)/bin. When the program loader loads a stamped program, it sets the value of SYSTYPE to either bsd4.3 or sys5.3, according to the value of the program stamp. Therefore, a program that refers to a name in one of the standard directories will access the correct version for its environment."

[0] https://bitsavers.org/pdf/apollo/014962-A00_Domain_OS_Design...
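
To illustrate the pathname trick described there, a rough sketch (not Apollo's actual code) of expanding $(SYSTYPE) in a symlink target the way the quote describes:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Expand the first "$(SYSTYPE)" in `target` from the environment,
       so "/$(SYSTYPE)/bin" becomes "/bsd4.3/bin" or "/sys5.3/bin". */
    static void expand_systype(const char *target, char *out, size_t n)
    {
        const char *var = "$(SYSTYPE)";
        const char *hit = strstr(target, var);
        const char *val = getenv("SYSTYPE");

        if (hit == NULL || val == NULL) {
            snprintf(out, n, "%s", target);      /* nothing to expand */
            return;
        }
        snprintf(out, n, "%.*s%s%s",
                 (int)(hit - target), target,    /* text before the var */
                 val,                            /* e.g. "bsd4.3" */
                 hit + strlen(var));             /* text after the var */
    }

    int main(void)
    {
        char buf[256];
        setenv("SYSTYPE", "bsd4.3", 1);          /* as the loader would */
        expand_systype("/$(SYSTYPE)/bin", buf, sizeof buf);
        printf("/bin -> %s\n", buf);             /* /bsd4.3/bin */
        return 0;
    }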


Even 10-15 years later, multiple CPUs and 128Mb RAM was quite uncommon.

How much did this beast cost back in the 80s?

Edit: Max configs were up to $250,000 ($663,937.02 today) back then according to this price list: http://www.bitsavers.org/pdf/apollo/Apollo_Price_List_Jul88....


I'm triggered every time I see things like "$663,937.02 today". I just can't help it. There should be a word for [mindlessly] reporting the results of a calculation to arbitrary precision when only one significant digit of precision is warranted. There probably is such a word in German.


You and me both! When I see something like "a blue whale can be nearly 100 feet (30.48m) long", I want to throw a math book at the author. "Nearly 100 feet" is "nearly 30 meters". "About $250K" is "about $660K today" at best.
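
If you want to be mechanical about it, here's a throwaway sketch (the function name is mine) of rounding a converted figure to the significant digits the input actually carries:

    #include <math.h>
    #include <stdio.h>

    /* Round x to `sig` significant digits. */
    static double round_sig(double x, int sig)
    {
        if (x == 0.0) return 0.0;
        double mag = pow(10.0, sig - 1 - (int)floor(log10(fabs(x))));
        return round(x * mag) / mag;
    }

    int main(void)
    {
        printf("$%.0f\n", round_sig(663937.02, 2));  /* $660000 */
        printf("%.0f m\n", round_sig(30.48, 2));     /* 30 m */
        return 0;
    }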


Not to mention the number of times you see geographical locations specified to nanometer precision.



My favourite is "average July temperatures have increased by 2°C (35.6°F)".


My favorite is 98.6F - your whole perspective changes when you realize that it's actually 37C.


Yes, my comment implied "about $700K" since I said one significant digit, but I have no problem at all with "about $660K", especially when you add "at best" as you did :-). In general I think these inflation estimators are inherently very imprecise across multi-decade timespans, because the societal changes that occur over long periods of time mean you inevitably end up comparing apples to oranges.


It's almost like everyone should just adopt the system of units and measurements that makes the most sense and is used by the very, VERY vast majority of the world.


Also true but not relevant here. That original example of overly precise dollar amounts would still be an issue. The root cause isn’t units, but innumeracy.


Sure, that example would still exist, but the example of C to F and Meters to Feet would be gone forever. Add in Miles to Kilometers, pounds to kilograms and all the rest and you've reduced the problem space dramatically.


Should be 1/4 M$, 2/3 M$.


Another issue is that computer prices are really hard to compare across time, especially for systems targeted at businesses. Do you just use the CPI? Why is that meaningful? Do you use the one for IT? (Then it would be like, "$250,000 in 1989 ($1,400 today)") Something in-between? How do you capture the notion that workstations of this era were basically the first generation that allowed businesses to computer-aid stuff they used to do on paper, bringing a massive productivity boost?
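
For what it's worth, the headline figure is just a ratio of index levels. Back-of-envelope, with approximate CPI-U values from memory (treat them as illustrative):

    #include <stdio.h>

    int main(void)
    {
        double price_1988 = 250000.0;
        double cpi_1988   = 118.0;   /* approx. annual CPI-U, 1988 */
        double cpi_2024   = 314.0;   /* approx. annual CPI-U, 2024 */

        /* scale by the ratio of index levels */
        printf("~$%.0f in 2024 dollars\n",
               price_1988 * (cpi_2024 / cpi_1988));  /* ~$665000 */
        return 0;
    }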


Überpräzisionsberichterstattung


As a German, I won't say "doesn't exist", because of course it's possible. At best, people would think it's some term from legalese.

The opposite thing though, "really roughly approximately (and couldn't care less how precise)" (you get what I mean) is often colloquially used in German, under the term "Pi times Thumb" (Pi mal Daumen).

So yes, you can say "250tsd anno 1988 sind pi mal daumen 600tsd heutzutage" ("250k in 1988 is, pi times thumb, 600k nowadays").


Would be possible but seems overcomplicated. Why not just "Überpräzisierung"? This is actually in use.


Maybe, haven't heard it IRL though. As opposed to the more vague 'Übergründlich'.


That’s when someone takes ages to e.g. clean something up because they are using way more effort than necessary.


Sometimes you just gotta love the German language. We joyfully smash words together to make new words, like a linguistic particle accelerator. :-)


Google Translate accepts it, so I guess it's a word. Only one Google result though (this page).


Google Translate just understands how German allows "on the fly" compound words. E.g. it knows "Überpräzisionsberichterstattunghundhund" is "over-precision reporting dog dog", not because it's any more a word than other nonsense, but just because that's what it would mean if you made it up.


To provide figures in this manner is to report with "false precision", but we could workshop that phrase into something pithier.


Extremzufallsgenauigkeitsangabe?


I really love it when this has clearly happened in US children's books. Things like, "ate nearly 551 pounds of food per day!" or similar.


Schadenfreude


True, but there were wonderful exceptions. For instance, the Mac SE/30 (1989) could take up to 128 megs, and the Amiga 3000 (1990) was designed to allow for even more (although only 16 megs on the motherboard, but 256 meg Zorro III cards were available in 1993).

Later, some m68k Macs like the Quadra 605 allowed for 128 meg SIMMs, and the PCI Power Macs really took things to a new level. The Power Mac 9500 could take up to 1.5 gigs of memory in 1995.

It's a shame that no computers really distinguish themselves like that any more, aside from high end server systems.


> The Power Mac 9500 could take up to 1.5 gigs of memory in 1995.

> It's a shame that no computers really distinguish themselves like that any more, aside from high end server systems.

Well, the Intel Mac Pro from 2019 could take up to 1.5TB of RAM, while I'd guess the standard for PCs back then was 16-32GB at most. So it should be comparable (1.5GB vs 16-32MB of RAM in 1995)?


In the bracket where the Intel Mac Pro from 2019 competed, 1.5TB capability would be about normal, with some systems supporting more.


Eh, to be fair, I think the main reason Zorro III RAM cards didn't appear until way after the first Zorro III machine is that Zorro III wasn't coupled to CPU speed, so sticking RAM there wasn't great from a performance perspective. You really needed to use the CPU slot to extend beyond 18MB (remember the 2MB of Chip in addition to the 16MB of Fast) in a useful way.


The studio I worked at during that time actually did have a CPU slot based memory extension that could handle, I believe, up to 256 Mb. This would have been around '92 or '93, but I'm not sure how early it was offered on the market; that's just when I started there, and it was already in the shop.

I don't recall its name, but I remember that it was a peculiar PCB design, with part of it resembling a breadboard. I think the name started with Ex, like Excelsior or Excalibur or something like that.


You didn't mention what machine this was on. Since you mention a breadboard, are you sure it wasn't an addon to an existing CPU card? Perhaps the X-Calibur, that plugged into the Commodore A3640? That's a bit later, though.


Dual (50% overclocked) Celerons on a BP6 was a relatively common enthusiast setup in the early 2000s. 128M of RAM was also not a huge amount then.


I bought one in 1998 or 1999 with dual 366s OCd to 700 each; anything over 700 was too unstable for my setup.

I kept that thing running until 2008 or 2009. Great machine.


Dual 450s running at 900 was the price/power sweet spot for a while :)


Tell me more about this 450MHz socket 370 PPGA CPU that could be overclocked to 900MHz.


They were bare (no plastic) Slot1 not socket 370 iirc?

I don’t remember whether it was the base or the multiplier or both that you tweaked, but there was a bunch of Celeron 450s that would run at 900 typically with standard voltage and decent cooling.

IIRC they may have been pIII-900s or 800s that got binned down due to bad cache.

Or, maybe im remembering wrong and they were 266s clocked to 450, and people were slapping them into dual slot boards to get “900”? That kinda feels like maybe it was that? It was a long time ago, man, idk.

At any rate, every one we bought from MA labs in COI CA for a couple of months clocked up with no problems. We must have shipped a couple hundred of those golden celerons before they dried up.


From 1988 to the 2000’s is a LONG time.


The comment I was replying to was talking about "10-15 years later" (i.e. 1998-2003).


Quite a few Unix commands and concepts are exactly the same!


In 1988 you'd get b/w or 4 colours on a really expensive office PC; by Y2K you could play DivX videos and watch TV at home. And by 2000-2001 you could do it under Unix at home too (something unthinkable in 1988) with some nice GNU/Linux distro :D.

The technology change from 1988 to 1998 was really huge, much bigger than the change from 2010 until today.


You should look up “Commodore Amiga”; it was way more powerful than a PC.


Although the Amiga was quite a neat piece of kit when it first came out in 1985, I disagree with your comment, unfortunately. The custom chipset sorta "froze" the Amiga design at a particular level, and it was very hard to make meaningful upgrades, unlike PCs, where you could slap a new CPU into the CPU socket, or a new video accelerator, sound card, etc. into the ISA or PCI buses.

Yes I realize there were accelerator cards for Amigas later on, like the BlizzardPPC, but they weren't an officially supported option by Commodore (most accelerators came out after Commodore was pronounced dead in 1994) and were more like kludges.


I was an Amiga fan until the mid '90s, and you're spot on. The Amiga was too difficult to iterate on. Chipset upgrades like AGA were far too little, too late. (And forget about ECS, which barely added anything a normal user would care about.) By the time AGA was out, 386 systems with SVGA, Soundblaster, etc were cheap and common.


As a collector, this is why I prefer Atari ST family systems over Amigas. Because they are simpler from the jump, they are easier to work on and less finicky - floppies being a prime example of this... as Commodore liked using aggressive GCR encoding instead of IBM PC standards for formatting diskettes. For everything else, PC based hardware is just more familiar and flexible for me, since that's what I grew up with.


The Amiga was definitely the more exotic platform! I learned a ton from Amiga OS though. It was my first exposure to a "real" operating system with libraries, multitasking, IPC, etc. Once I got a Linux box though, I never looked back.


Very fair! Amiga did bring a good bit to the table, and on paper was certainly better than the ST. I kinda wish the Amiga came with an MMU.


The 486 and later the Pentium killed it.


The PC being effectively an open platform killed it.

(if not Commodore's incompetence. E.g. Apple did manage to hang on)


Yes, I remember getting my Tualatin Celeron in 2003 with 256mb RAM. It was a beast compared to my friends' machines at the time. :P


I had that as well, and also grabbed a Sempron 3100+ a few years later. Both those chips are in the class of punching way beyond their intended weight.

Tualatin for its overclockable headroom. The Sempron for its oddly large L1 cache for the spec.


It was a good time to be buying Intel when they were naming their chips after Oregon cities.


Yeah, "Toilets" were the way to go.


My friends and I all built BP6 machines. Such a wonderful workstation machine. I donated mine to the PC recycler when I moved in 2005. It still is my favorite beige box unix workstation.


I was too late to the party for the BP6...

I was however lucky enough to be able to get a Tyan Tiger MPX alongside a couple of 'Morgan' core Durons... Can't remember if they were 1.0 or 1.2GHz... But wow, that thing was -nice- on Windows 2000 and held up better than most other systems I built in that era (although, in retrospect, that may have been because most other builds I did suffered from the capacitor plague...)


This does look like an interesting board https://www.tyan.com/Motherboards_S2466_Tiger%20MPX


It was pretty fun.

The -intended- purpose was Dual Athlon MPs.

That said, AFAIK some early AXPs could be used, many later ones could have the bridges 'repaired' to allow dual operation.

Early Morgan core (Model 7) Durons however could also run MP.

One big pain point however, the built in USB was broken on early revs (but hey, they bundled a PCI USB2.0 card to compensate) but also the chipset was... -meh-.

But wow it was a solid board!


The Pentium II was released in 1997 and eventually reached a maximum speed of 450 MHz, which could be overclocked much higher.


In '98 I ordered a P3 500MHz with 128mb and planned to upgrade to 384mb. Before it even shipped out to me, Intel announced the 550MHz IIRC, or 10% faster. Two years later it was at least 50% faster with the 700MHz Coppermine.

Things moved rapidly back then.


The 500 was released in February '99, the 550 four months later, and the 700 five more months after that. $700 worth of CPU performance from February 1999 could be had in a low-end March 2000 system with a $130 CPU. In 2001 a 500MHz system was already obsolete.


Yeah. My first PC build was in 98: PII 300MHz / 128MB RAM. No way a P3 500 was available at the time.


No Pentium 2 could be overclocked much above ~500MHz due to the way they were all manufactured. 10% is not really much higher.


I vividly remember pushing 233 to 300.


The FAA command center had rooms full of them.


And those 10 years were the '90s; advancement was crazy back then.


So true. A year or two after this machine, you could have bought an AT&T StarServer "E" with 4x i486 CPUs and 2GiB total memory, for dramatically less money.


I assume a 486 was dramatically slower than the Apollo PRISM (which looks reasonably well-designed by today's standards).


It was probably not dramatic, if it was slower at all; there's a reason so many of those workstation processors died out in the '90s.


I at first thought a 486 was probably faster, but that Apollo CPU apparently has a 64-bit data path. It might have been a beast. I didn't even know Apollo used anything but Motorola 68k CPUs.


I wonder if it might have been a beast at some tasks, but not others?

I imagine these chips might have been optimized for throughput on well-suited and carefully tuned programs, where the compilers could be convinced to put out good code (since it was apparently something like a VLIW).


> I wonder if it might have been a beast at some tasks, but not others?

Domain OS was a multiuser, multitasking OS running on a Motorola 68020 with 4 megs of RAM, with a distributed filesystem and diskless clients that swapped over the network. Ah, and with a versioning filesystem. And SPICE simulations were reasonably fast compared to a 386/40 running PSpice on MS-DOS.


Well, with a performance of ~22-25 MIPS, PRISM at 18MHz was about 2.5 times faster than the original 25MHz 486DX and roughly on par with the 1992 486DX2/66MHz.


2 MFLOPS for the 33MHz 486 and 4 MFLOPS for the DX2/66MHz.


I used to have an Alpha server from that vintage with I think 512MB of RAM. Came on two daughterboards full of 32K (or maybe 16K) SIMMs.


I had an Alpha 21164PC[1] on an 164SX motherboard with 512MB RAM back around that time. That thing absolutely flew. I installed Debian on it and I remember compiling and running Gnome 1.x from scratch on it. Fun times.

[1] https://en.wikipedia.org/wiki/Alpha_21164#Alpha_21164PC_(PCA...


Alpha first shipped in 1993:

http://www.cpu-collection.de/?l0=co&l1=DEC&l2=Alpha+AXP

Half a decade later, which was a long time back then.

Comparison: 80286, 1982; 80386, 1985; 80486, 1989.


Yeah, to put it in perspective, in 1988 I was still dreaming about an Amiga 500 or 2000 with less than 1% of that RAM.


We were still a year away from the 486. And that wouldn't become common in the home for another 3-4 years.


Yup! A while back I made a little timeline for putting things like this in perspective. https://gist.github.com/cellularmitosis/9a1b96ed3109690a2840...


Today a top of the line Epyc Genoa system can have a total of 256 cores and 12 TB of memory, spread out over two sockets. I wonder how that'll look 10-15 years from now.


Same Apollo company where the original version of UUID came from: https://en.m.wikipedia.org/wiki/Universally_unique_identifie...


Also the http:// syntax in URLs.



Just a quick heads up: MAME has supported Apollo workstations and servers since 0.240.

https://wiki.mamedev.org/index.php/Driver:Apollo


Thank you.


The UK government ran an initiative in the mid 1980s to get electronics CAD into every university and polytechnic. The institution I worked for got a lab of DN300 and DN500 workstations running various digital and SPICE-based tools. We used them for undergrad classes and also for some consultancy projects. They felt like a huge leap to us. To put it in perspective, our other labs at the time were either CP/M or mainframe terminals.


In retrospect, do you think this initiative worked?


ARM, Argonaut Software (SuperFX), DIP Research Ltd. (Atari Portfolio, Sharp PC3000 ASICs), INMOS (which I wouldn't call a success), and the Technophone PC105, the first real pocket-sized cellphone (https://www.youtube.com/watch?v=gI6Uf-Of4fs).

There was a ton of high-integration semiconductor innovation coming out of the UK in the eighties, on par with the Japanese.


I managed to get an Apollo at auction while working at NCD in the early-to-mid '90s. Mine had a very high-res black and white monitor and a very weird Unix implementation that appeared to let you choose between BSD and AT&T environments at boot.

Mostly I remember it for the amazing monitor and the laser mouse and the mirrored mouse pad with the grid - truly amazing to me at the time.


Speaking of mice: the DN3000 Apollo mice never got clogged with dirt like the PC mice of the 1995 era.


We had at least a few Apollo DN10k pedestals as servers at HQ, for purposes like SCM and what today would be called CI.

Even though by the time I arrived we were mostly buying Sun, HP-PA, and Windows NT, the company started as an Apollo shop. And so SCM at HQ was based on Apollo DSEE ("dizzy"), which they liked; hence the DN10k servers scattered around.

https://bitsavers.org/pdf/apollo/008788-A01_Getting_Started_...


Oh the memories of that machine. UIdaho used one as a file server for the rest of the labs (home dirs were //snake/home/$user iirc, with snake = dn10k) and it was also available for use as a speedy workstation. Spaceball + 40bpp graphics made for some fun early forays into graphics programming.

Was also where I happened to hit ctrl-enter after the admin had neglected to flip it back into normal mode. During the middle of a class day. Chaos.


I think every engineer had these in my 80186 clone group when I co-oped at AMD, but I still had to submit SPICE timing simulation runs to the VAX mainframe. The CRT monitors were nice but huge.


What does co-oping at AMD mean? Is it a contract job? And what is an 80186 clone group (a division at AMD designing 80186 clone CPUs, or a hobby group of 80186 PC enthusiasts)?

But these machines are from 1988; wouldn't one be designing 386 or even 486 clones at that time?


Co-op is a term used by some universities in the USA for alternating a work semester at a company with a school semester in class, for 2nd/3rd/4th year students. We were paid something like $10-15 an hour, which was much better than most non-professional jobs. The group I was in was making money off refinements and continued production of AMD 80186 chips, sold as drop-in replacements for the Intel 80186 at a lower cost. The group adjacent to mine was working on reverse-engineering a 386 clone design from a black-box approach (using only public info like the Intel handbook on the 386, which I remember they were all reading like it was the bible) and was at a pretty early stage. Everyone on that team was very senior, and IIRC most had PhDs in Electrical Engineering, so I was of course very underqualified for that.


Why tho? AMD had a full license, and even the Am386 shipped with a 1:1 copy of Intel microcode ripped directly from reverse-engineered die shots.


Off the top of my head, without looking at reference materials: there were a lot of legal issues going on at the time involving AMD/Intel disputes, and AMD was on the ropes back then. The 386 project was highly contentious with Intel. I think they started the project as a worst-case scenario in the licensing deal: if Intel forbade the 386 from the initial contract, AMD could show proof of work on an entirely ground-up project to replicate the 386. The guys on the team were between a rock and a hard place, determined but not always optimistic. Also, there was some grumbling when I started, because shortly before I started they had had some layoffs. AMD was on the brink. This is from my memory of 25 years ago, and I was a pretty lowly co-op who only heard this over meals and in the shared open workspace with the other project.

edit: I looked it up, and there was a dispute over whether the license applied to the 386 until 1991.

https://en.wikipedia.org/wiki/Am386


A co-op is more or less a fancier/better/longer term internship.


Thanks. In BrE it means something radically different: a workers' cooperative.


It can mean that too here in the USA, though I only have experience with the University co-op work thing.


Never ever heard that usage in over half a century. I do not think we ever use it for that.


Here's Quest: A Long Ray's Journey Into Light -- Apollo's contribution to that quintessential genre of 80s HPC marketing: early CG animation, presumably rendered on the company's own computers.

https://www.youtube.com/watch?v=_-W0ktaNsLg


Our science/electronics lab was given a bunch of Apollo workstations around 1990. Used for schematic capture and SPICE simulation. They were very powerful but one thing I most clearly remember was the absolutely weird graphical user interface they used, quite unlike anything before or since (and not in a good way). Does anyone remember what that was called?

Edit: Domain/OS! https://en.wikipedia.org/wiki/Domain/OS


I worked with developers who really knew their way around the Apollo Display Manager (DM), and apparently it was a power-user platform if you knew how to use it.

(Maybe a bit like how Emacs is a power user platform, but if you stuck it in front of someone who only knew NOTEPAD.EXE mouse editing and File->Save, they'd be unaware of the Emacs power, and fixate on why the most basic things don't quite work how they thought everything works.)

There were some hints at the sophistication, like you could see that the windows with the command line shells in them were designed to support this. (Rather than the windows being emulators of character-cell terminals, like the command line windows we're still using almost everywhere else 40 years later.) And you could see the special function keys on the keyboard, and the strange bar at the bottom of the screen, and the indicator icons that appear in the titles of windows, though it'd take a while to learn what they all do or mean. And you might be aware of how smoothly integrated the networking was, compared to everything else you'd seen.


I worked at LTX in my first job out of college, when they were re-architecting their test systems around the Apollo 68K-based DN systems. Domain/OS was weird, and even those expensive boxes still had trace wires soldered on the motherboards.

A few years later, I ended up at the DEC GEM compiler project but by then the Alpha code generator / optimizer were fairly developed already.

Then I went to HP and worked on the PA-WW --> Itanium code generator.

Good times.


I love the watercolor and hand-drawn diagrams.

Makes this look like a truly artisan crafted machine -- lovingly crafted, all the way down to the documentation.


I saved a copy of this as soon as it popped up. These brochures are something special in themselves.

A big part of it is merely nostalgia, but another part is that they put a sizable effort behind these things. They were more than happy to highlight the unique qualities and capabilities of the computer and put more than a few days effort into the promotional material.

We still have a little of that nowadays, but it's usually on a website with 'that' layout we've seen a thousand times, trying to copy the Apple website design language.


Their logo is really pretty too!


Brown University's CS department had an entire auditorium full of Apollo workstations when I was a freshman in the early 1980's. The intro programming course (CS11) was taught by Andy van Dam, who might (or might not) be the namesake of the boy in Pixar's Toy Story. It wasn't a difficult decision to choose computer science as my concentration.


Wow, never heard of that ambitious CPU arch before. Looks like an early loser in the RISC wars that was recycled into later PA-RISC and Itanium.

https://en.m.wikipedia.org/wiki/Apollo_PRISM


> In some respects, the VLIW design can be thought of as "super-RISCy", as it offloads the instruction selection process to the compiler as well. In the VLIW design, the compiler examines the code and selects instructions that are known to be "safe", and then packages them into longer instruction words. For instance, for a CPU with two functional units, like the PRISM, the compiler would find pairs of safe instructions and stuff them into a single larger word. Inside the CPU, the instructions are simply split apart again, and fed into the selected units.

Wait a second.

So this is all fancy and gives better performance (with a mature compiler... chuckles in Itanium), but it's totally incompatible with virtualisation (as we know it on x86 systems)?

Cache thrashing is surely always a way to throw away performance, but this design would make it that much worse, AFAIU?


I don't think VLIW is incompatible with virtualisation. The amount of CPU state you need to load and save on context switches is a little higher than other architectures so it has a minor effect on the efficiency of virtualisation. But other than that there's no reason VLIW can't be virtualised.

Not that the Apollos actually had hardware support for virtualisation features such as nested page tables. These machines were from the days before virtualisation was popular. Only IBM mainframes had it back then AFAIK.


I've built VLIW hardware; virtualisation is not an issue, and cache usage is no different than on other CPUs. In fact, modern superscalar, out-of-order CPUs effectively do dynamically, on the fly, what the compiler does for a VLIW.

The big issue with VLIW hardware is that it tends to address a particular performance point: it usually lets you squeeze out the last bit of performance without going to the complexity of dynamic instruction scheduling (and doubling the CPU size). The problem comes when you want to do your next chip and the trade-off ends up being different: you can't keep the same instruction set, and so making an ARCHITECTURE is hard.
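
To make the packing idea concrete, a toy sketch (not PRISM's real encoding, which I don't have) of a two-slot bundle: the compiler pairs two independent ops into one long word, and the hardware just splits it again:

    #include <stdint.h>
    #include <stdio.h>

    typedef uint32_t op_t;          /* one functional unit's instruction */

    typedef struct {
        op_t integer_op;            /* slot 0: integer unit */
        op_t float_op;              /* slot 1: FP unit */
    } bundle_t;                     /* one "long instruction word" */

    /* "compiler": pack two ops already known to be independent */
    static bundle_t pack(op_t iop, op_t fop)
    {
        bundle_t b = { iop, fop };
        return b;
    }

    /* "hardware": split the word again -- no scheduling logic needed */
    static void issue(bundle_t b)
    {
        printf("int unit <- %08x\n", (unsigned)b.integer_op);
        printf("fp  unit <- %08x\n", (unsigned)b.float_op);
    }

    int main(void)
    {
        issue(pack(0x12345678u, 0x9abcdef0u));
        return 0;
    }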


Nobody cared about virtualization when those systems were designed, other than IBM (and compatible) mainframe designers. None of that generation of workstation/server architecture - MIPS, SPARC, HP-PA, Alpha or for that matter 68k - had that as a consideration.


This was an interesting box at the time. We had a few where I worked and they weren't too bad, but the UI setup was...unique. I'm not saying it was bad, but it was a bit of a traumatic adjustment after using any of the other options in the lab.


Used these for several years. The Apollo Domain OS is still a thing of wonder to learn from.


Do Apollos and Domain have any real presence in the vintage community? Casually, I have never seen it mentioned.

My very brief Domain experience was that it had a very similar shell-style UX, with cryptic, Unix-style commands (e.g. ld vs ls), making it very “tongue-tying” for someone comfortable with Unix.

We had a very early Apollo with some CASE software that nobody was particularly interested in, so it pretty well just sat there. And being all alone, it couldn't really show off Domain's networking.


As far as I remember, their hard drives were prone to failure (the motor would not spin up).


As someone who lives in a place where shipping one would be more expensive than the machine itself, are there any good emulation options?

Even if we need to deal with the lack of Domain/OS specific keys.


Yes, MAME emulates the DN3500 era hardware mostly acceptably. Setup instructions are available online but reference MESS since they were written before MESS was merged back into MAME.


I need to find those. And get a more powerful workstation.


If you get or do a full build of MAME they’re just in it. Try the `dn3500` system. ROMs will be wherever you get MAME ROMs and Domain/OS install tape images are on Bitsavers.


Previous discussion about the File System and how it influenced SMB: https://news.ycombinator.com/item?id=23519826


Some of the first computer-based PCB design work took place on Apollo/Mentor Graphics systems. I'm really curious what the user experience was like but can't find any information.


The user experience was great. The same Mentor Graphics product survived as Boardstation on HP-UX and Windows NT (with Exceed).

The GUI was the best marriage between a TUI and a GUI. You had on the keyboard a holder with all the function keys for your program (DM, SPICE, etc.). I really miss such a thing today.


I would love a video walkthrough of the OS and one of CAD packages by someone proficient with the system, even on an emulator.


I worked on one of these, this was a nice machine. Unix was a little weird, but the graphics subsystem was very impressive for the time, almost on par with SGI's offering.


Yes, from the available sources, the 3D hardware on the DN10000VS was also state-of-the-art, capable of texture mapping, antialiasing and depth buffering. Some additional info for those who are interested: https://dl.acm.org/doi/pdf/10.1145/97879.97912

Fascinating to see real users in the comments; you're very welcome to share any experience you had with the machine :-)


A smart detail was that the DN10000 had both VME and ISA slots.


I love reading this stuff!

Love seeing the fiber optics stuff at the end, every era had their own buzzwords with cool pictures!

The only thing I don't like is the inside of the case. It reminds me of some Dell or HP systems: densely packed stuff with lots of metal brackets that's hard to work with, and probably "vacuum cleaner" style fans in it too (I don't see the fans in the photos, but I just feel it from the looks of it).


They seem to have put some effort into thermal design and claim a noise level (at 30°C) of 55dB, which fits into an office.


I had a 9000 machine for a bit, and I have rarely seen a denser interior.


I worked summers in a lab in the 1980s that had Silicon Graphics, Apollo and Sun workstations. The Sun was the easiest to program by far so it got the most use.


Anybody else remember the Apollo/Domain clock bug that made us reset the time and date:

https://news.ycombinator.com/item?id=34370897

https://news.ycombinator.com/item?id=34029638

Just to clarify, the DN10000 PRISM processor was apparently not subject to it.


Yes. It took me some days to discover (around 1997, in the university's lab, with some donated DN3000s) that setting the date in the past would fix it, at the expense of a salvol, which could take as much as half an hour.


I thought for a moment that Apollo had used DEC's PRISM processor for some reason, but no, Apollo had its own PRISM.

Also, there are a bunch more Apollo brochures here:

https://www.1000bit.it/ad/bro/brochures.asp?id=39


Please update title to MiB or MB.

Mb looks more like megabits.


Mb is unambiguously megabits but it's a lost battle at this point. This thread includes so far MiB, MB, Mb, mb all to mean megabyte.

It's often easy to find what's meant from context but sometimes we're discussing network stuff and it becomes hard to figure out what people mean by "gb" or "mb" or other frankenunits because the context refers both to data in transit and at rest. It's almost certainly not gram bit and milli bit which helps narrow it down...


Had access to this beautiful machine, during my Computer Graphics course in the late 80s. Brings back memories.


What I really loved about the graphics is that, with two superimposed frame buffers with alpha and z-buffering, it'd have been a great gaming platform.

A $700K (in 2024 dollars) gaming machine, that is.


Innovation around the turn of the century was amazing.

Today it seems all innovation follows in step with whatever the current trend is, as opposed to setting trends of its own.

Anyone else getting exhausted about hearing LLMs and AI already?


There's always a fad of greater or lesser magnitude going around. Before AI it was blockchain. And before that we had virtual reality, 3D television, netbooks, the information superhighway, Juicero, Segways, the Internet of Things and Rust. I just tune most of it out.


Is Rust a fad?


You can see that the image of the four CPU boards on page six is actually a quad exposure of a single CPU board being moved around. Seems kinda wonky to me.

The hand-drawn watercolor diagrams on the next few pages are neat though


The diagram of connected Apollo computers reminds me of Nvidia's presentations of their AI CPU farms. Will we feel the same about Nvidia presentations in 30 years?


Ah, 128MiB RAM _supported_. Much like the IBM PC AT _supported_ 16MiB. Not that such a configuration was actually delivered/installed.


The difference is that there were 128MB quad-CPU Apollo DN10000 systems purchased, delivered, and installed at customer sites.

They were used for things like VLSI design and simulation, so a US$250,000 system was actually worthwhile, especially since it and other Apollo systems could share their resources with each other on the local network.


Seems the anything-is-editable UX is similar to Plan 9?


I really miss long form marketing.


What would the equivalent specs look like today?

I mean, is it even possible to get a personal supercomputer today?


I ran a stack of Apollos in the 90s at the U of Iowa. Student hobby thing. We bought them from university surplus. Had a DN5500, a DN4500, and a DN3500. Ran email, a telnet BBS, and web services on them.



