Had the dubious joy of managing Domain/OS Unix children inside the Apollo virtualisation method of the day. Along with Nixdorf, these systems were all through the Civil Engineering and NMR facilities at the university campus. And, just like the Nixdorf machines, they came with multiple open CVEs (the term didn't exist at the time, but let's ignore the anachronism): the games user, well-known passwords, root sendmail, you name it.
Nice enough CPU and memory. Not a very nice Unix experience: like HP-UX it was complex, and a bugger to build C code on if you came from BSD land (Ultrix, SunOS/Solaris).
Sockets felt like a badly coded bolt-on. They really wanted to pretend the internet protocol stack wasn't there.
> Not a very nice Unix experience, analogous to HP-UX it was complex, and a bugger to configure C code on if you came from BSD land (Ultrix, SunOS/Solaris)
Apollo DomainOS (Aegis) was technically better than Unix.
At some point (before my time), marketing must've decided they needed to be more Unix-compatible, and they did some clever things to also give it multiple Unix personalities (both SysV and BSD), while still keeping their original better personality.
One place you'd feel pain around then, on most any Unix platform, was in downloading "portable" code off the Internet. Even for code already made portable in some sense, you'd probably have to at least modify the Makefile, if not at least one C file, to select what portability you wanted, or to work around flaws in its portability. (SunOS 4 was my favorite for a while, and at least half the reason was that code I'd grab from an FTP site was usually easy to get running, because it was probably developed on either SunOS or at least another BSD. Ultrix was also OK.)
As a teen, I managed a porting lab, so I got to sysadmin and own-time develop on all of them.
Writing portable C code for the different Unix workstations, I'd have to do it in a subset of K&R C, plus a portability mechanism that I built as I went (for architecture differences, library differences, C compiler differences).
I do recall the Apollos being one of the more annoying Unix workstation platforms for C. I think they preferred safer languages for systems programming (again, they were ahead of their time), and maybe they didn't go all-in on developer ergonomics for C. Also, I guess the compiler probably predated ANSI C.
It's a bunch of small things: command line flags, whether a command line tool is even present at all, compiler built-ins / differences in behavior, headers possibly being in different places, compiler support for various language standards and more.
Even to this day, it's not uncommon to find libraries that won't compile with one of GCC, clang, etc. or even the same compiler but Linux vs MacOS.
It was even worse in ye olde times before package managers, I'm assuming.
EDIT: I forgot to mention that System V and BSD are two of the major families.
Both influenced Unix-like OSes far and wide, such as SysV style init scripts in certain Linux distros, MacOS being derived partly from BSD, Solaris being a continuation of SysV IIRC, and more.
There was a rough standardization in where certain things could be found, command line flags, etc.
"In order to provide maximum portability of software from Domain/OS to other Berkeley or System V UNIX systems, Apollo provides two complete and separate UNIX environments, rather than a hybrid of the two. Any workstation can have one or both UNIX environments installed, and users can select which environment to use on a per-process basis.

Two key mechanisms support this facility. First, every program can have a stamp applied that says what UNIX environment it should run in. The default value for this stamp is the environment in which it was compiled. When the program is loaded, the system sets an internal run-time switch to either berkeley or att, depending on the value of the stamp. Some of the UNIX system calls use this run-time switch to resolve conflicts when the same system call has different semantics in the two environments.

The other mechanism is a modification of the pathname resolution process, such that pathname text contains environment variable expansions.

[...]

When UNIX software is installed on a node, the standard trees (/bin, /usr) are installed under directories called bsd4.3 and sys5.3. The names /bin and /usr are actually symbolic links dependent on the value of an environment variable named SYSTYPE. That is, /bin is a symbolic link to /$(SYSTYPE)/bin. When the program loader loads a stamped program, it sets the value of SYSTYPE to either bsd4.3 or sys5.3, according to the value of the program stamp. Therefore, a program that refers to a name in one of the standard directories will access the correct version for its environment."
I'm triggered every time I see things like "$663,937.02 today". I just can't help it. There should be a word for [mindlessly] reporting the results of a calculation to arbitrary precision when only one significant digit of precision is warranted. There probably is such a word in German.
You and me both! When I see something like "a blue whale can be nearly 100 feet (30.48m) long", I want to throw a math book at the author. "Nearly 100 feet" is "nearly 30 meters". "About $250K" is "about $660K today" at best.
Yes, my comment implied "about $700K" since I said one significant digit, but I have no problem at all with "about $660K", especially when you add "at best" as you did :-). In general, I think these inflation estimators are inherently very imprecise across multi-decade timespans, because the societal changes that occur over long periods of time mean you are inevitably comparing apples to oranges.
It's almost like everyone should just adopt the system of units and measurements that makes the most sense and is used by the very, VERY vast majority of the world.
Also true but not relevant here. That original example of overly precise dollar amounts would still be an issue. The root cause isn’t units, but innumeracy.
Sure, that example would still exist, but the example of C to F and Meters to Feet would be gone forever. Add in Miles to Kilometers, pounds to kilograms and all the rest and you've reduced the problem space dramatically.
Another issue is that computer prices are really hard to compare across time, especially for systems targeted at businesses. Do you just use the CPI? Why is that meaningful? Do you use the one for IT? (Then it would be like, "$250,000 in 1989 ($1,400 today)") Something in-between? How do you capture the notion that workstations of this era were basically the first generation that allowed businesses to computer-aid stuff they used to do on paper, bringing a massive productivity boost?
As a German, I won't say such a word "doesn't exist", because of course one could be coined. At best, people would think it's some term from legalese.
The opposite thing, though, "really roughly approximately (and couldn't care less how precise)" (you get what I mean), is often used colloquially in German, under the term "Pi times thumb" (Pi mal Daumen).
So yes, you can say "250tsd anno 1988 sind pi mal daumen 600tsd heutzutage" ("250K in 1988 is, pi times thumb, 600K today").
Google Translate just understands how German allows "on the fly" compound words. E.g. it knows "Überpräzisionsberichterstattunghundhund" is "over-precision reporting dog dog", not because it's any more a word than other nonsense, but because that's what it would mean if you made it up.
True, but there were wonderful exceptions. For instance, the Mac SE/30 (1989) could take up to 128 megs, and the Amiga 3000 (1990) was designed to allow for even more (although only 16 megs on the motherboard, but 256 meg Zorro III cards were available in 1993).
Later, some m68k Macs like the Quadra 605 allowed for 128 meg SIMMs, and the PCI Power Macs really took things to a new level. The Power Mac 9500 could take up to 1.5 gigs of memory in 1995.
It's a shame that no computers really distinguish themselves like that any more, aside from high end server systems.
> The Power Mac 9500 could take up to 1.5 gigs of memory in 1995.
> It's a shame that no computers really distinguish themselves like that any more, aside from high end server systems.
Well, the Intel Mac Pro from 2019 could take up to 1.5TB of RAM, while I would guess the standard for PCs back then was 16-32GB at most.
So it should be comparable (like 1.5GB vs 16-32MB of RAM in 1995)?
Eh, to be fair, I think the main reason Zorro III RAM cards didn't appear until way after the first Zorro III machine is that Zorro III wasn't coupled to CPU speed, so sticking RAM there wasn't great from a performance perspective. You really needed to use the CPU slot to extend beyond 18MB (remember the 2MB of Chip RAM in addition to the 16MB of Fast) in a useful way.
The studio I worked at during that time actually did have a CPU-slot-based memory extension that could handle, I believe, up to 256MB. This would have been around '92 or '93, but I'm not sure how early it was offered on the market; that's just when I started there, and it was already in the shop.
I don't recall its name, though I remember it was a peculiar PCB design, with part of it resembling a breadboard. I think the name started with Ex, like Excelsior or Excalibur or something like that.
You didn't mention what machine this was on. Since you mention a breadboard, are you sure it wasn't an addon to an existing CPU card? Perhaps the X-Calibur, that plugged into the Commodore A3640? That's a bit later, though.
They were bare (no plastic) Slot 1, not Socket 370, IIRC?
I don’t remember whether it was the base or the multiplier or both that you tweaked, but there was a bunch of Celeron 450s that would run at 900 typically with standard voltage and decent cooling.
IIRC they may have been pIII-900s or 800s that got binned down due to bad cache.
Or maybe I'm remembering wrong and they were 266s clocked to 450, and people were slapping them into dual-slot boards to get "900"? That kinda feels like maybe it was that? It was a long time ago, man, idk.
At any rate, every one we bought from MA Labs in COI CA for a couple of months clocked up with no problems. We must have shipped a couple hundred of those golden Celerons before they dried up.
In 1988 you would get black-and-white or 4 colours on a really expensive PC made for offices, and by Y2K you could play DivX videos and watch TV at home. And from 2000-2001, even under Unix at home too (something unthinkable in 1988), with some nice GNU/Linux distro :D.
The technology change from 1988 to 1998 was really huge, much bigger than what you have seen from 2010 until today.
Although the Amiga was quite a neat piece of kit when it first came out in 1985, I disagree with your comment unfortunately. The custom chipset sorta "froze" the Amiga design to a particular level and it was very hard to make meaningful upgrades, unlike PCs, where you could slap a new CPU into the CPU socket, or a new video accelerator, sound card, etc, into the ISA or PCI busses.
Yes I realize there were accelerator cards for Amigas later on, like the BlizzardPPC, but they weren't an officially supported option by Commodore (most accelerators came out after Commodore was pronounced dead in 1994) and were more like kludges.
I was an Amiga fan until the mid '90s, and you're spot on. The Amiga was too difficult to iterate on. Chipset upgrades like AGA were far too little, too late. (And forget about ECS, which barely added anything a normal user would care about.) By the time AGA was out, 386 systems with SVGA, Sound Blaster, etc. were cheap and common.
As a collector, this is why I prefer Atari ST family systems over Amigas. Because they are simpler from the jump, they are easier to work on and less finicky, floppies being a prime example of this, as Commodore liked using an aggressive custom track encoding instead of IBM PC standards for formatting diskettes. For everything else, PC-based hardware is just more familiar and flexible for me, since that's what I grew up with.
The Amiga was definitely the more exotic platform! I learned a ton from Amiga OS though. It was my first exposure to a "real" operating system with libraries, multitasking, IPC, etc. Once I got a Linux box though, I never looked back.
I had that as well, and also grabbed a Sempron 3100+ a few years later. Both those chips are in the class of punching way beyond their intended weight.
The Tualatin for its overclockable headroom, the Sempron for its oddly large L1 cache for the spec.
My friends and I all built BP6 machines. Such a wonderful workstation machine. I donated mine to the PC recycler when I moved in 2005. It still is my favorite beige box unix workstation.
I was, however, lucky enough to get a Tyan Tiger MPX alongside a couple of 'Morgan' core Durons... Can't remember if they were 1.0 or 1.2GHz... But wow, that thing was -nice- on Windows 2000 and held up better than most other systems I built in that era (although, in retrospect, that may have been because most other builds I did suffered from the capacitor plague...).
That said, AFAIK some early AXPs could be used, many later ones could have the bridges 'repaired' to allow dual operation.
Early Morgan core (Model 7) Durons however could also run MP.
One big pain point, however: the built-in USB was broken on early revs (but hey, they bundled a PCI USB 2.0 card to compensate), and the chipset was... -meh-.
In '98 I ordered a P3 500MHz with 128MB and planned to upgrade to 384MB. Before it even shipped out to me, Intel announced the 550MHz, IIRC, or 10% faster. Two years later, at least 50% faster with the 700MHz Coppermine.
The 500 was released in February '99, the 550 four months later, the 700 five more months after that. $700 worth of CPU performance from February 1999 could be had in a low-end March 2000 system with a $130 CPU. In 2001, a 500MHz system was already obsolete.
So true. A year or two after this machine, you could have bought an AT&T StarServer "E" with 4x i486 CPUs and 2GiB total memory, for dramatically less money.
I at first thought a 486 was probably faster, but that Apollo CPU apparently has a 64-bit data path. It might have been a beast. I didn't even know Apollo used anything but Motorola 68k CPUs.
I wonder if it might have been a beast at some tasks, but not others?
I imagine these chips might have been optimized for throughput on well-suited and carefully tuned programs, where the compilers could be convinced to put out good code (since it was apparently something like a VLIW).
> I wonder if it might have been a beast at some tasks, but not others?
Domain OS was a multiuser, multitasking OS running on a Motorola 68020 with 4 megs of RAM, with a distributed filesystem and diskless clients that swapped over the network. Ah, and with a versioning filesystem. And SPICE simulations were reasonably fast compared to a 386/40 running PSpice on MS-DOS.
I had an Alpha 21164PC on a 164SX motherboard with 512MB RAM back around that time. That thing absolutely flew. I installed Debian on it, and I remember compiling and running GNOME 1.x from scratch on it. Fun times.
Today a top of the line Epyc Genoa system can have a total of 256 cores and 12 TB of memory, spread out over two sockets. I wonder how that'll look 10-15 years from now.
The UK government ran an initiative in the mid 1980s to get electronics CAD into every university and polytechnic. The institution I worked for got a lab of DN300 and DN500 workstations running various digital and SPICE-based tools. We used them for undergrad classes and also for some consultancy projects. They felt like a huge leap to us. To put it in perspective, our other labs at the time were either CP/M or mainframe terminals.
I managed to get an Apollo at auction while working at NCD in the early-to-mid '90s. Mine had a very high-res black-and-white monitor and a very weird Unix implementation that appeared to let you choose between BSD and AT&T environments at boot.
Mostly I remember it for the amazing monitor and the laser mouse and the mirrored mouse pad with the grid - truly amazing to me at the time.
We had at least a few Apollo DN10k pedestals as servers at HQ, for purposes like SCM and what today would be called CI.
Even though by the time I arrived we were mostly buying Sun, HP-PA, and Windows NT, the company started as an Apollo shop. And so SCM at HQ was based on Apollo DSEE ("dizzy"), which they liked, hence, DN10k servers scattered around.
Oh the memories of that machine. UIdaho used one as a file server for the rest of the labs (home dirs were //snake/home/$user iirc, with snake = dn10k) and it was also available for use as a speedy workstation. Spaceball + 40bpp graphics made for some fun early forays into graphics programming.
Was also where I happened to hit ctrl-enter after the admin had neglected to flip it back into normal mode. During the middle of a class day. Chaos.
I think every engineer had these in my 80186 clone group when I co-oped at AMD, but I still had to submit SPICE timing simulation runs to the VAX mainframe. The CRT monitors were nice but huge.
What does co-oping at AMD mean? Does it mean a contract job? And what is an 80186 clone group (a division at AMD designing 80186 clone CPUs, or a hobby group of 80186 PC enthusiasts)?
But these machines are from 1988, wouldn't one be designing 386 or even 486 clones at that time?
Co-op is a term used by some universities in the USA for alternating a work semester at a company with a school semester in class, for 2nd/3rd/4th-year students. We were paid something like $10-15 an hour, which was much better than most non-professional jobs. The group I was in was making money off refinements and continued production of AMD 80186 chips, drop-in replacements for the Intel 80186 at a lower cost. The group adjacent to mine was working on reverse-engineering a 386 clone design from a black-box approach (using only public info like the Intel handbook on the 386, which I remember they were all reading like it was the Bible), and it was at a pretty early stage. Everyone on that team was very senior, and IIRC most had PhDs in Electrical Engineering, so I was of course very underqualified for that.
Off the top of my head, without looking at reference materials: there were a lot of legal issues going on at the time involving AMD/Intel disputes, and AMD was on the ropes back then. The 386 project was highly contentious with Intel. I think they started the project as a worst-case scenario in the licensing deal: if Intel excluded the 386 from the initial contract, AMD could show proof of work on an entirely ground-up project to replicate the 386. The guys on the team were between a rock and a hard place, determined but not always optimistic. Also, there was some grumbling when I started, because shortly before that they had some layoffs. AMD was on the brink. This is from my memory of 25 years ago, and I was a pretty lowly co-op who only heard this over meals and shared open workspace with the other project.
edit: I looked it up, and there was a dispute over whether the license applied to the 386, which lasted until 1991.
Here's Quest: A Long Ray's Journey Into Light -- Apollo's contribution to that quintessential genre of 80s HPC marketing: early CG animation, presumably rendered on the company's own computers.
Our science/electronics lab was given a bunch of Apollo workstations around 1990. Used for schematic capture and SPICE simulation. They were very powerful but one thing I most clearly remember was the absolutely weird graphical user interface they used, quite unlike anything before or since (and not in a good way). Does anyone remember what that was called?
I worked with developers who really knew their way around the Apollo Display Manager (DM), and apparently it was a power-user platform if you knew how to use it.
(Maybe a bit like how Emacs is a power user platform, but if you stuck it in front of someone who only knew NOTEPAD.EXE mouse editing and File->Save, they'd be unaware of the Emacs power, and fixate on why the most basic things don't quite work how they thought everything works.)
There were some hints at the sophistication. You could see that the windows with the command-line shells in them were designed to support this (rather than the windows being emulators of character-cell terminals, like the command-line windows we're still using almost everywhere else 40 years later). And you could see the special function keys on the keyboard, the strange bar at the bottom of the screen, and the indicator icons that appeared in the titles of windows, though it'd take a while to learn what they all could do or meant. And you might be aware of how smoothly integrated the networking was, compared to everything else you'd seen.
I worked at LTX as my first job out of college, and they were re-architecting their test systems based on the Apollo 68K based DN systems. Domain/OS was weird and even those expensive boxes still had trace wires soldered on the motherboards.
A few years later, I ended up at the DEC GEM compiler project but by then the Alpha code generator / optimizer were fairly developed already.
Then I went to HP and worked on the PA-WW --> Itanium code generator.
I saved a copy of this as soon as it popped up. These brochures are something special in themselves.
A big part of it is merely nostalgia, but another part is that they put a sizable effort behind these things. They were more than happy to highlight the unique qualities and capabilities of the computer and put more than a few days effort into the promotional material.
We still have that a little nowadays but it is usually on a website that has 'that' layout we have seen a thousand times trying to copy the Apple website design language.
Brown University's CS department had an entire auditorium full of Apollo workstations when I was a freshman in the early 1980's. The intro programming course (CS11) was taught by Andy van Dam, who might (or might not) be the namesake of the boy in Pixar's Toy Story. It wasn't a difficult decision to choose computer science as my concentration.
> In some respects, the VLIW design can be thought of as "super-RISCy", as it offloads the instruction selection process to the compiler as well. In the VLIW design, the compiler examines the code and selects instructions that are known to be "safe", and then packages them into longer instruction words. For instance, for a CPU with two functional units, like the PRISM, the compiler would find pairs of safe instructions and stuff them into a single larger word. Inside the CPU, the instructions are simply split apart again, and fed into the selected units.
Wait a second.
So while this is all fancy and gives better performance (with a mature compiler... chuckles in Itanium), isn't it totally incompatible with virtualisation (as we know it on x86 systems)?
Cache thrashing is always a way to throw away performance, but this design would make it many times worse, AFAIU?
I don't think VLIW is incompatible with virtualisation. The amount of CPU state you need to load and save on context switches is a little higher than other architectures so it has a minor effect on the efficiency of virtualisation. But other than that there's no reason VLIW can't be virtualised.
Not that the Apollos actually had hardware support for virtualisation features such as nested page tables. These machines were from the days before virtualisation was popular. Only IBM mainframes had it back then AFAIK.
I've built VLIW hardware, virtualisation is not an issue, cache usage is not different than other CPUs - in fact modern superscalar, out-of-order CPUs effectively do what the compiler for VLIWs do dynamically on the fly.
The big issue with VLIW hardware is that it tends to address a particular performance point: it usually lets you squeeze out the last bit of performance without having to go to the complexity of dynamic instruction scheduling (and doubling the CPU size). The problem comes when you want to do your next chip and the trade-off ends up being different: you can't keep the same instruction set, and so making an ARCHITECTURE is hard.
Nobody cared about virtualization when those systems were designed, other than IBM (and compatible) mainframe designers. None of that generation of workstation/server architecture - MIPS, SPARC, HP-PA, Alpha or for that matter 68k - had that as a consideration.
This was an interesting box at the time. We had a few where I worked and they weren't too bad, but the UI setup was...unique. I'm not saying it was bad, but it was a bit of a traumatic adjustment after using any of the other options in the lab.
Do Apollos and Domain have any real presence in the vintage community? Casually, I have never seen it mentioned.
My very brief Domain experience was that it had a superficially similar shell-style UX, with cryptic, Unix-style commands (e.g. ld vs ls), making it very "tongue-tying" for someone comfortable with Unix.
We had a very early Apollo with some CASE software that nobody was particularly interested in, so it pretty well just sat there. And being all alone, it couldn't really show off Domain's networking.
Yes, MAME emulates the DN3500 era hardware mostly acceptably. Setup instructions are available online but reference MESS since they were written before MESS was merged back into MAME.
If you get or do a full build of MAME they’re just in it. Try the `dn3500` system. ROMs will be wherever you get MAME ROMs and Domain/OS install tape images are on Bitsavers.
Some of the first computer-based PCB design work took place on Apollo/Mentor Graphics systems. I'm really curious what the user experience was like, but can't find any information.
The user experience was great. The same Mentor Graphics product survived as Boardstation on HP-UX and Windows NT (with Exceed).
The GUI was the best marriage between a TUI and a GUI.
You had on the keyboard a holder with all the function keys for your program (DM, SPICE, etc.). I really miss such a thing today.
I worked on one of these, this was a nice machine. Unix was a little weird, but the graphics subsystem was very impressive for the time, almost on par with SGI's offering.
Yes, from available sources, the 3D hardware on the DN10000VS was also state of the art, capable of texture mapping, antialiasing and depth buffering. Some additional info for those who are interested:
https://dl.acm.org/doi/pdf/10.1145/97879.97912
Fascinating to see real users in the comments; you're very welcome to share any experiences you had with these machines :-)
Love seeing the fiber optics stuff at the end, every era had their own buzzwords with cool pictures!
The only thing I don't like is the inside of the case. It reminds me of some Dell or HP systems: densely packed, with lots of metal brackets that are hard to work with, and probably "vacuum cleaner" style fans in it too (I don't see the fans in the photos, but I just feel it from the looks of it).
I worked summers in a lab in the 1980s that had Silicon Graphics, Apollo and Sun workstations. The Sun was the easiest to program by far so it got the most use.
Yes. It took me some days to discover (around 1997, in a university lab with some donated DN3000s) that setting the date in the past would fix it, at the expense of a salvol, which could take even half an hour.
Mb is unambiguously megabits but it's a lost battle at this point. This thread includes so far MiB, MB, Mb, mb all to mean megabyte.
It's often easy to find what's meant from context but sometimes we're discussing network stuff and it becomes hard to figure out what people mean by "gb" or "mb" or other frankenunits because the context refers both to data in transit and at rest. It's almost certainly not gram bit and milli bit which helps narrow it down...
There's always a fad of greater or lesser magnitude going around. Before AI it was blockchain. And before that we had virtual reality, 3D television, netbooks, the information super highway, juicero, segways, internet of things and Rust. I just tune most of it out.
You can see that the image of the four CPU boards on page six is actually a quad exposure of a single CPU board being moved around. Seems kinda wonky to me.
The hand-drawn watercolor diagrams on the next few pages are neat, though.
The diagram of connected Apollo computers reminds me of Nvidia's presentations of their AI compute farms. Will we feel the same about Nvidia presentations in 30 years?
The difference is that there were 128MB quad-CPU Apollo DN10000 systems purchased, delivered, and installed at customer sites.
They were used for things like VLSI design and simulation, so a US$250,000 system was actually worthwhile, especially since it and other Apollo systems could share their resources with each other on the local network.
I ran a stack of Apollos in the 90s at the U of Iowa. Student hobby thing. We bought them from university surplus. Had a DN5500, a DN4500, and a DN3500. Ran email, a telnet BBS, and web services on them.