The cult of Amiga and SGI, or why workstations matter (czanik.hu)
151 points by kgerzson on April 5, 2022 | 274 comments



I don't want to come across as disrespectful to my elders, but in many ways I feel that certain kinds of nostalgia like this are holding open source back. One of my favorite pieces of software is GNU Make. Having read the codebase, I get the impression that its maintainer might well be a kindred spirit to the OP. The kind of guy who was there during the days when computers were a lot more diverse. The kind of guy who still boots up his old Amiga every once in a while, so he can make sure GNU Make still works on the thing, even though the rest of us couldn't realistically buy one for ourselves even if we wanted to.

It's a pleasure I respect, but it's not something I'll ever be able to understand because they're longing for platforms that got pruned from the chain of direct causality that led to our current consensus (which I'd define more as EDVAC -> CTSS -> MULTICS/CPM -> SysV/DOS/x86 -> Windows/Mac/Linux/BSD/Android/x86/ARM).

My point is that open source projects still maintain all these #ifdefs to support these unobtainable platforms. Because open source is driven by hobbyism and passion. And people are really passionate about the computers they're not allowed to use at their jobs anymore. But all those ifdefs scare and discourage the rest of us.

For example, here's a change I recently wrote to delete all the VAX/OS2/DOS/Amiga code from GNU Make and it ended up being 201,049 lines of deletions. https://github.com/jart/cosmopolitan/commit/10a766ebd07b7340... A lot of what I do with Cosmopolitan Libc is because it breaks my heart how in every single program's codebase we see this same pattern, and I feel like it really ought to be abstracted by the C library, since the root problem is all these projects are depending on 12 different C libraries instead of 1.


> I recently wrote to delete all the VAX/OS2/DOS/Amiga code from GNU Make and it ended up being 201,049 lines of deletions.

That commit 10a766eb you linked to appears to contain significantly more than the removal of those architectures -- you also appear to have deleted all the tests, documentation, and support for multiple (human) languages. Of the portions of the diff which aren't deleted files, many of the deletions seem to be around code reformatting (i.e. they add a line for each line they remove), and removal of includes which your libc doesn't need / support. I do see that you have removed several #ifdef VMS (and similar) stanzas, but the vast bulk of the changes are either removal of, or modifications to, unrelated files.

Although I agree instinctively that we shouldn't expect to run the latest version of Make on the VAX in our basement, this diff doesn't make that argument very well and IMO borders on disingenuous.

Or, to put it another way, after looking at your diff, I feel much happier about your hypothetical make-on-Amiga aficionado, because I know that they also care about i18n, documentation, and testing!


Identifying the moment we need to stop supporting a platform is frequently non-obvious. Unisys still supports MCP (as ClearPath OS), VMS is supported and was ported to x86, Atos supports GCOS, and some people are making CP/M fit inside dedicated word processors. A couple of months back there was a report of ncurses failing on Tandem NonStop OS (still supported, IIRC, by HPE). As long as something works, we'll never hear about all those wonderful exotic platforms people still use for various reasons. There must be a lot of PCs out there controlling machinery, doing GPIO through parallel ports while emulating PDP-8s, with some poor intern having to figure out how to make changes to that code.


Here's a simple criterion I propose: is the platform disbanded?

For example, in GNU Make's dir.c file, there's a lot of stuff like this:

    #ifndef _AMIGA
        return dir_file_exists_p (".", name);
    #else /* !AMIGA */
        return dir_file_exists_p ("", name);
    #endif /* AMIGA */
There should be a foreseeable date when we can say, "OK the Amiga maintainers have added a feature that lets us use '.' as a directory so we can now delete that #ifdef". But that day is guaranteed to never come, because the Amiga project is disbanded. So should we keep that until the heat death of the universe?

I would propose that we instead say, if you use Amiga, there's great support for it in versions of GNU Make up until x.y.z. So if you love old Amigas, you'll be well served using an older version of GNU Make. I think it's straightforward.


This kind of thing shows why ifdefs are usually the wrong tool for multi-target projects.

Instead, there should be a platform adapter API: a shared header declaring functions for these actions, multiple platform-specific implementation files, and a build that compiles only one of them.

That way you could simply ignore the existence of "filesys_amiga.c", and then maybe delete it 50 years from now.
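Roughly, and purely as a hypothetical sketch (the filesys.h / fs_file_exists_in_cwd names are invented here; dir_file_exists_p() is the existing helper from Make's dir.c):

    /* filesys.h -- the shared interface every caller includes */
    int fs_file_exists_in_cwd (const char *name);

    /* filesys_posix.c -- the only implementation compiled on POSIX-ish builds */
    #include "filesys.h"
    extern int dir_file_exists_p (const char *dirname, const char *filename);
    int fs_file_exists_in_cwd (const char *name)
    {
      return dir_file_exists_p (".", name);
    }

    /* filesys_amiga.c -- the only implementation compiled for AmigaOS builds */
    #include "filesys.h"
    extern int dir_file_exists_p (const char *dirname, const char *filename);
    int fs_file_exists_in_cwd (const char *name)
    {
      return dir_file_exists_p ("", name);
    }

The build then picks exactly one implementation file per target, so the Amiga variant can bit-rot, or be deleted, without anyone on another platform ever having to read it.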

(I realize it's probably not realistic to do such major internal surgery on Make at this point.)


This was one of the brilliant aspects of Knuth's WEB system: you could have change files that would be applied to the immutable source to manage ports to individual platforms.¹ I really wish that this sort of patching had spread to other programming paradigms.

1. It works even on somewhat actively developed code, since the parts of the program likely to require porting could be clumped together in a WEB source file and, once isolated, generally saw few if any changes. I remember maintaining the public domain VMS change files for the TeX 2–3/MF 1–2 upgrades and, as I recall, even with these significant changes to the programs, no updates to the change files were necessary.²

2. Most of the work that I did back then in maintenance was centered around enhancements to the VMS-specific features. For example, rather than having iniTeX be a special executable, iniTeX features could be enabled/disabled at runtime from a single executable.³ Similarly with debug code.

3. This feature appeared soon after in the web2c port of TeX and friends, but I think that Tom Rokicki might have got the idea from me (or else it was a case of great minds thinking alike in the late 80s).


> This was one of the brilliant aspects of Knuth's WEB system

As they say, those who don't know the past are bound to repeat it.


I respectfully disagree with this very common argument against ifdefs, and will also point out that developers who hold this anti-ifdef belief hold it quite strongly! I generally prefer the ifdef style.


> is the platform disbanded?

AmigaOS had a major new release less than a year ago, so I guess the Amiga ifdefs should stay.


All three users rejoice! ;-)


> But that day is guaranteed to never come, because the Amiga project is disbanded. So should we keep that until the heat death of the universe?

Surely there are more options than "keep that until the heat death of the universe" and "remove it the moment its platform is out of production". A more practical metric for free/open software, I think, would be "is there still someone maintaining this software on this platform": every major release (e.g., 5.x -> 6.0) could be a time to do a sanity check, documenting ports that don't have an active maintainer as "deprecated". At the next major release, if nobody's stepped up to maintain it, then it gets removed. (One could argue even that's too draconian, because there may be people still using the port even if it's not maintained.)


Surely this is a problem of source code organisation, not multi-platform support? A better model would be having clearly articulated platform/arch-specific files doing all the per-platform/arch defines, which then get included for the target-specific build. However, reorganising a code base this way when it wasn't like that at the outset can be a fraught (and hard to justify) exercise.

A good example of source code organisation that does a decent job of separating per-platform concerns is the NetHack code base.


What do you mean by 'disbanded'? Should FOSS stop support the moment a manufacturer discontinues a device/platform?


That's what Homebrew does


Why on earth do you need to hack on GNU Make in the first place?

It’s old software which has far better and faster replacements like Ninja.

The whole design of Make itself is outdated and inefficient which is why tools like Ninja are much faster.


Maybe because tons and tons of software builds with GNU Make, not with Ninja, and if you want to be able to build that software, you need GNU Make?

Also, Ninja by itself is not really a replacement for GNU make. Rather it's a tool one can build such a replacement on, so the comparison is a bit off to start with...


The Cosmopolitan Libc repo uses GNU Make. It builds 672 executables, 82 static archives, and 17,637 object files and runs all unit tests in under 60 seconds on a $1000 PC. How fast would Ninja do it?


make has a -j switch.


While I agree that having one library could be a good solution, I don't think all those #ifdefs are wasted. There are a lot of legacy tech programs that use systems way older than I ever imagined would still be in use. There was a minor crisis at an org I was working at one time where they were going to need to flip a multimillion-dollar system because the only source of replacement parts was a hobbyist in his garage, and for new gov compliance purposes that guy was going to need to become a cleared contractor supplier... which can be problematic if the person in question is an open source advocate whose main purpose in running this business in retirement is supplying enthusiasts rather than government departments or contractors.

I'm sure some of those systems and ones like them make plenty of use out of those #ifdefs though, and it's not just a handful of old fogey enthusiasts cramping everyone else's style. Established systems can't always evolve as fast as the general market.


> Because open source is driven by hobbyism and passion. And people are really passionate about the computers they're not allowed to use at their jobs anymore. But all those ifdefs scare and discourage the rest of us.

Isn't this the same process you yourself referenced? There's nothing stopping people from forking and building leaner versions of these programs, but it turns out that projects with those passionate, nostalgic developers are more successful even with the support burden than that same project without them. That backwards-support might be a cost rather than a waste.


Scaring new talent away from spending their precious time on a solved problem like GNU Make is a feature, not a bug. Work on something more relevant to today's challenges.

There are plenty of things "holding open source back"; this isn't a significant one of them IMNSHO.


Saying make is a solved problem is a real failure of imagination. I used to do a lot of work on Blaze and Bazel. I intend to add support for a lot of the things it does to GNU Make. Such as using ptrace() to make sure a build rule isn't touching any files that aren't declared as dependencies. I can't do that if our imagination is stuck in the 80's with all this DOS and Amiga code.
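Roughly, the idea looks like this; a minimal sketch only, not the real implementation (Linux/x86-64, single process, no error handling; a real version would follow forked children and compare the logged paths against the rule's declared prerequisites):

    /* Sketch: run a build recipe under ptrace() and log every path it
       passes to openat(), so the caller can flag accesses to files that
       were never declared as dependencies. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/ptrace.h>
    #include <sys/syscall.h>
    #include <sys/user.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static void read_path (pid_t pid, unsigned long addr, char *buf, size_t len)
    {
      for (size_t i = 0; i + sizeof (long) <= len; i += sizeof (long))
        {
          long word = ptrace (PTRACE_PEEKDATA, pid, addr + i, 0);
          memcpy (buf + i, &word, sizeof (long));
          if (memchr (&word, 0, sizeof (long)))
            return;
        }
      buf[len - 1] = '\0';
    }

    int main (int argc, char **argv)
    {
      if (argc < 2)
        return 2;
      pid_t pid = fork ();
      if (pid == 0)
        {
          ptrace (PTRACE_TRACEME, 0, 0, 0);
          execvp (argv[1], argv + 1);   /* the recipe's command line */
          return 127;
        }
      int status, entering = 1;
      waitpid (pid, &status, 0);        /* initial stop after exec */
      for (;;)
        {
          ptrace (PTRACE_SYSCALL, pid, 0, 0);
          waitpid (pid, &status, 0);
          if (WIFEXITED (status))
            break;
          struct user_regs_struct regs;
          ptrace (PTRACE_GETREGS, pid, 0, &regs);
          if (entering && regs.orig_rax == SYS_openat)
            {
              char path[4096];
              read_path (pid, regs.rsi, path, sizeof (path));
              fprintf (stderr, "opened: %s\n", path);   /* compare against declared deps here */
            }
          entering = !entering;
        }
      return 0;
    }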


> Saying builds are a solved problem is a real failure of imagination.

Don't put words in my mouth, I said GNU make is a solved problem.


That sentence doesn't even make sense.


People can just use other tools like Bazel or Ninja.

Make works the way it’s intended to work. Leave it as is.


I wrote Bazel's system for downloading files. https://github.com/bazelbuild/bazel/commit/ed7ced0018dc5c5eb... So I'm sympathetic to your point of view. However some of us feel like people should stop reinventing Make and instead make Make better. That's what I'm doing. I'm adding ptrace() support. That's something I asked the Bazel folks to do for years but they felt it was a more important priority to have Bazel be a system for running other build systems like Make, embedded inside Bazel. So I asked myself, why don't we just use Make? It's what Google used to use for its mono repo for like ten years.


> this isn't a significant one of them IMNSHO.

"You can't have systemd in Debian, what about kFreeBSD" "You can't use Rust until it supports DEC Alpha"

...there is no shortage of examples where open and free software is held back by hyper-niche interests, where our pet twenty- and thirty-year-old, long-dead projects and processor architectures create absurd barriers to improving anything.


Hey, NetBSD and MacPPC are still used, and people still backport Nethack/Slashem/Frotz to those archs.

Old hardware is always useful.


All Actually Portable Executables run on NetBSD. I love NetBSD. I helped fix a bug in their /bin/sh. I even put the little orange flag on my blog. https://justine.lol/lambda/#binaries See also https://github.com/jart/cosmopolitan#support-vector


On non amd64 NetBSD?


It is supported very well on AMD64 NetBSD. Perhaps that will be expanded in the future.


NetBSD is so cool, and I have so many machines sitting around that I need to get it running on (SGI, Alpha, Dreamcast, etc.)

Sadly I've heard it can still be rough on older architectures. I've been told that VAX, for example, is not in the best of states because of userland dependencies on Python. From what I was told, Python currently doesn't have a VAX port due to the architecture's floating-point design.


The VAX backend in GCC was recently modernized and improved so it could survive the cc0 removal.

There was a fundraiser which I created for that purpose.


The weird thing I keep seeing is that many C libraries still define their own integer types for some reason instead of just using the ones from stdint.h. Even new ones, that certainly didn't ever need to support ancient platforms and ancient compilers, like libopus.


> instead of just using the ones from stdint.h. Even new ones, that certainly didn't ever need to support ancient platforms and ancient compilers, like libopus.

But stdint.h is from C99, and AFAIK there are non-ancient compilers for non-ancient platforms that still don't fully support C99.


stdint.h is usually in the part they do support though (I think, in my experience, I haven't done a survey.)


> many C libraries still define their own integer types for some reason instead of just using the ones from stdint.h

Many C developers in my experience (as recent as 5 years ago) haven't really adapted to C99. This is partly because some platforms (Microsoft, I'm looking at you) resisted adopting it for quite some time, and partly because a lot of C development is carried out by older developers who have long given up on keeping up with new developments. And I say that as a dev in my 40s.

I think stdint etc are great, but for some people the start of any 'serious' C codebase still requires a whole load of int and pointer type redefinitions.
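The pattern looks something like this (the my_* names here are invented for illustration; libopus spells its equivalents opus_int32 and friends, IIRC):

    /* The pre-C99 habit: a homegrown "portability" header. */
    typedef signed char    my_int8;
    typedef short          my_int16;
    typedef int            my_int32;   /* and hope int is 32 bits on this target */
    typedef unsigned int   my_uint32;

    /* The C99 way: the toolchain's own fixed-width types. */
    #include <stdint.h>
    int8_t   a;
    int16_t  b;
    int32_t  c;
    uint32_t d;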


> some platforms (Microsoft, I'm looking at you)

Ah, the wonderful world where you pass the dwLength of your lpszName, which is an LPCSTR, as a DWORD.


The ROM on my Amiga is from March 28th this year.


An example of holding on to old stuff is still making use, in 2022, of a systems programming language designed in 1972.


I agree. That's why the Cosmopolitan Libc repository includes support for C++ as well as a JavaScript interpreter. It has Python 3. You can build Python 3 as a 5MB single-file Actually Portable Executable that includes all its standard libraries! Then you put your Python script inside the executable using a zip editing tool and it'll run on Mac, Windows, Linux, you name it. You can also build Actually Portable Lua too. More are coming soon.


Sounds like an advertisement.


Make is a very old codebase that you shouldn't change in dramatic ways anyway. It's in itself an outdated piece of software which has far better and more modern replacements.

No need to break it for older systems.


> I feel that certain kinds of nostalgia like this are holding open source back

I'm missing what the post has to do with promoting open source.


[flagged]


Everything I do, I do for you. I don't expect you to use it or thank me or pay me. All I'm saying is I could have more impact serving the community with fewer ifdefs.


And if you cut down every #ifdef in the code base, would you be able to stand in the winds that would blow then?

It's not just GNU make. Lots of GNU software is as you describe, because the GNU project took on the burden of abstracting the mess that was interoperating across many very different platforms -- many of which have far less capability than a modern or even decade-old x86-64 box -- and so became an internal reflection of that mess. It's not pleasant, I wish I could chuck autotools into the fucking sun, but it gets GNU going on a variety of exotic platforms that still run and are made more pleasant by the presence of GNU there. This effort is not helped by you going in and trying to yeet all the code you personally have decided is obsolete and gets in your way. GNU Make doesn't need such a "service", it's not "held back" by refusing it, and if you want a build tool that runs on and helps you build your little inner-platform effect without considering anything you personally deem irrelevant, write your own! Maybe start with Plan 9's mk as a base, it's tiny and comes from a somewhat similar philosophy.

Sheesh. Terry Davis believed he was important enough to be gangstalked by the CIA, and even he took a hobbyhorse, take-it-or-leave-it approach to TempleOS.


I don't have any authority over the GNU project. The things I do in the Cosmopolitan Libc repo have no bearing on them. They're free to keep doing what they're doing. However I'm willing to bet that once I add ptrace() support, it'll be attractive enough that folks will be willing to consider the equally libre copy of the GNU Make software that I intend to distribute instead. Just as they'll be free to copy what I did back into their own den of #ifdefs if they want to compete. Competition is good. Open source is good. It's all done with the best intentions. We're going to end up with a better Make once I've contributed this feature.


I apologize sincerely, it wasn't clear you were going the fork-and-hack route from your initial post.

Still, do consider starting from Plan 9's mk... GNU make is... a hairball, even without the #ifdefs :)


No worries friend. Not the first time I've gotten the Terry Davis treatment. Didn't Gandhi or someone say first they ignore, then laugh, then fight, and you win? Changing the world one line of code at a time!



Oh, it's a lot longer story than that. I worked at SGI from just around its peak to its downfall, seeing the company shrink to a tenth of its size while cutting products.

At the time, I was a fairly junior employee doing research in AGD, the advanced graphics division. I saw funny things, which should have led me to resign, but I didn't know better at the time. Starting in the late 90's, SGI was feeling competitive pressure from 3dfx, NVIDIA, 3DLabs, and Evans & Sutherland (dying, but big), and they hadn't released a new graphics architecture in years. They were selling Infinite Reality 2's (which were just a clock increase over IR1), and some tired Impact graphics on Octanes. The O2 was long in the tooth. Internally, engineering was working on next-gen graphics for both, and both projects were dying of creeping featureitis. Nothing ever made a deadline; everything kept slipping by months. The high-end graphics pipes to replace Infinite Reality never shipped because of this, and the "VPro" graphics for Octane were fatally broken on a fundamental level, where fixing them would mean going back to the algorithmic drawing board, not just some Verilog tweak; basically, taping out a new chip. Why was it so broken? Because some engineers decided to implement a cool theory and were allowed to do it (no clipping, recursive rasterization, Hilbert space memory organization).

At the same time, NVIDIA was shipping the GeForce, 3dfx was dying, and these consumer cards processed many times more triangles than SGI's flagship Infinite Reality 2, which was the size of a refrigerator and pulled kilowatts. SGI kept saying that anti-aliasing was the killer feature of SGI and that this was why we continued to sell into the visual simulation and oil and gas sectors. The line rendering quality on SGI hardware was far better as well. However, given that SGI hadn't been able to ship a new graphics system in perhaps 6 years at that point, and NVIDIA was launching a new architecture every two years, the reason for big-money customers to use SGI quickly disappeared.

As for Rick Belluzzo, man, he was a buffoon. My first week at SGI was the week he became CEO, and in my very first allhands ever, someone asked something along the lines of, "We are hemorrhaging a lot of money, what are you going to do about it?" He replied with, "Yeah, we are, but HP, E&S, etc., are hemorrhaging a lot more and they have less in the bank, so we'll pick up their business". I should have quit my first week.


Trying to be a seller of very high-end computer products while also doing your own chips and graphics at the same time is quite the lift. And at the same time their market was massively attacked from the low end.

The era where companies could do all that and do it successfully kind of ended in the late 90s. IBM survived, but nothing can kill them; I assume they suffered too.

What do you think could have been done, going back to your first day, if you were CEO?

I always thought that for Sun, open-sourcing Solaris, embracing x86, becoming Red Hat, and eventually the cloud could have been the winning combination.


> What do you think could have been done, going back to your first day, if you were CEO?

Not quite sure. You correctly pointed out SGI (HP, Sun, everyone else in the workstation segment) was suffering with Windows NT eating it from below. To counter that, SGI would need something to compete in price. IRIX always had excellent multiprocessor support and, with transistors getting smaller, adding more CPUs could give it some breathing room without doing any microarchitectural changes. For visualization hardware the same also applies - more dumb hardware with wider buses on a smaller node cost about the same while delivering better performance. To survive, they needed to offer something that's different enough from Windows NT boxes (on x86, MIPS and Alpha back then) while maintaining a better cost/benefit (and compatibility with software already created). I'd focus on low-end entry-level systems that could compete with the puny x86's by way of superior hardware-software integration. The kind of thing Apple does, where you open the M1-based Air and it's out of hibernation before the lid is fully opened.

> I always thought that for Sun, open-sourcing Solaris, embracing x86, becoming Red Hat, and eventually the cloud could have been the winning combination.

I think embracing x86 was a huge mistake by Sun - it helped legitimize it as a server platform. OpenSolaris was a step in the right direction, but their entry-level systems were all x86 and, if you are building on x86, why would you want to deploy on much more expensive SPARC hardware?

Sun never even tried to make a workstation based on Niagara (first gen would suck, second gen not so much), and OpenSolaris was too little, too late - by then the ship had sailed and technical workstations were all x86 boxes running Linux.


> IRIX always had excellent multiprocessor support and, with transistors getting smaller, adding more CPUs could give it some breathing room without doing any microarchitectural changes.

That's kind of exactly what Sun did, and it likely gave them legs. They might not have made it out of the 90s otherwise.

> I think embracing x86 was a huge mistake by Sun - it helped legitimize it as a server platform.

x86 was simply better on performance. I think it would have happened anyway.

> OpenSolaris was a step in the right direction, but their entry-level systems were all x86 and, if you are building on x86, why would you want to deploy on much more expensive SPARC hardware?

That's why I am saying they should have dropped SPARC already in the very early 2000s. They wasted so much money on machines that were casually owned by x86.


> That's why I am saying they should have dropped SPARC already in the very early 2000s

They had two SPARC architectures - the big-core, high-performance one and Niagara, the many-wimpy-cores one - and Sun never thought about combining both on the same machine, which is more or less what x86 is doing now because they are being forced to do it by Apple and its M1. Sun was there in the early 2000's.

There's no fundamental reason why x86 has to be faster than SPARC, in fact, SPARC machines trounced x86 ones.

Another thing that killed Sun was that it could never decide whether they were Apple or Microsoft - they never decided whether they wanted to make integrated hardware or become a plain software company.


> There's no fundamental reason why x86 has to be faster than SPARC, in fact, SPARC machines trounced x86 ones.

Other than Intel having 100x more money and more architects with better nodes ...

SPARC was already worse by 1998.

It really only continued to make money because companies couldn't figure out how to scale horizontally and instead paid absolutely absurd amounts for these multi-core machines.

Some of the supercomputer people showed how they could totally destroy Cray/Sun and so on with simple clusters of x86.

Sure, had they perfectly executed SPARC and hit on every investment, they might have done a lot better, but that just wasn't the reality. Intel just executed much better. SPARC had all kinds of development failures in the 90s, and in the late 90s Intel had better nodes in addition to a better architecture.

> Another thing that killed Sun was that it could never decide whether they were Apple or Microsoft - they never decided whether they wanted to make integrated hardware or become a plain software company.

I think they did want to be Apple but they simply were not that good at making products. They were actually better at making software, but then didn't do very well at making products based on that.

They did some good stuff like Fishworks, the AMD x86 servers and so on.

They should really have turned into Open Apple, Red Hat and AWS. With OpenSolaris and Zones on custom x86 machines they could have dominated the cloud space (they even had products going in that direction early in the 2000s).


SGI also offered x86 based machines, of all things running NT or WIN 2K. That was when the writing really was on the wall.


Precisely. When you start offering x86 boxes with Windows, it's obvious your own architecture and OS are dead.

But I don't remember those. I remember Intergraph did (and they were quite good, but died quickly)



Oh wow... In my brain I was confusing it with the Intergraph one.

I also was not surprised to see Rick Belluzzo's name in the press release... The devastation that man caused has no parallel in the history of personal computing. It's comparable to what Stephen Elop did in the mobile space.

I used to joke that Microsoft had perfected the use of executive outplacement as an offensive weapon.


I think some kind of discipline around releasing products in a timely way by cutting features would have done wonders. However, the kinds of computers SGI built were on the way out, so they couldn't have survived without moving in the direction that people wanted. Maybe it was a company whose time had come. SGI wasn't set up to compete with the likes of NVIDIA and Intel.


Why couldn't they compete with NVIDIA? Were they not just as big?


The PC market grew bottom up to be 10x the size of the workstation market during the 90s. Even with thinner margins, eventually workstation makers couldn't compete any longer on R&D spend.

The book The Innovator's Dilemma describes the process.


^meant thinner margins of PC industry.


Engineering culture. SGI was not pragmatic in building hardware, more of an outlet for brilliant engineers to ship experiments.


I can see how that was your view if you came in on the tail end, but it definitely wasn't always so. I've owned quite a few of them, and if you had the workload they delivered - at a price. But for what they could do they were 3 to 4 years ahead of the curve for a long time, and then in the space of a few short years it all went to pieces. Between NVIDIA and the incredible clock speed improvements on x86, SGI was pretty much a walking zombie that did not manage to change course fast enough. But CPU, graphics pipeline, machine and software to go with it is an expensive game to play if your number of units is smaller than any of your competitors that have specialized.

I'm grateful they had their day, fondly remember IRIX and have gotten many productive years out of SGI hardware. My career would definitely not have taken off the way it did without them; in fact, the whole 'webcam/camarades.com/ww.com' saga would never have happened if the SGI Indy had not shipped with a camera out of the box.


I wasn't familiar, so I searched and found your great account of the history! https://jacquesmattheij.com/story-behind-wwcom-camaradescom/


Fun times! Also frustrating but an excellent school and all is well that ends well.


Do you know anything about the rumor that an O2 successor was prototyped that used NVIDIA graphics? (I think I read that on Nekochan long ago).

The slow pace and poor execution of CPU and graphics architectures after ~1997 is crazy to think about. The R10000 kept getting warmed over, same for IR, and VPro, and the O2.

The Onyx4 just being an Origin 350 with ATI FireGL graphics (and running XFree86 on IRIX) was the final sign that they were just milking existing customers rather than delivering anything innovative.


>NVIDIA was launching a new architecture every two years

At the peak of the NVIDIA/3dfx war, new chips were coming out every 6-9 months.

Riva 128 (April 1997) to TNT (June 15, 1998) took 14 months, TNT2 (March 15, 1999) 8 months, GF256 (October 11, 1999) 7 months, GF2 (April 26, 2000) 6 months, | 3dfx dies here |, GF3 (February 27, 2001) 9 months, GF4 (February 6, 2002) 12 months, FX (March 2003) 13 months, etc ...


In many cases, an executive's behavior makes sense after you figure out what job he wants next.


Thank you so much for your inside story. Hilbert space memory organization sounds great :)


Texture memory is still stored like that in modern chips (presuming they meant Hilbert curve organization). It's so that you can access 2D areas of memory but still have them close by in 1D layout to make it work with caching.


Is it really Hilbert?

A project I worked on a couple of decades ago had interleaved the bits of the x and y indexes to get that effect "for free"; I imagine a Hilbert curve decode would take quite a bit of silicon.


Ah, you're right, it's probably the interleaved bits (aka Morton code) in actuality. Or more likely, something tuned to the specific cache sizes used in the GPU.
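For the curious, the bit interleaving is cheap enough to sketch in a few lines of C (Morton/Z-order; real hardware presumably bakes this into the addressing logic rather than running code like this):

    #include <stdint.h>

    /* Spread the low 16 bits of v so they land in the even bit positions:
       ...dcba -> ...0d0c0b0a. */
    static uint32_t spread_bits (uint32_t v)
    {
      v &= 0x0000ffff;
      v = (v | (v << 8)) & 0x00ff00ff;
      v = (v | (v << 4)) & 0x0f0f0f0f;
      v = (v | (v << 2)) & 0x33333333;
      v = (v | (v << 1)) & 0x55555555;
      return v;
    }

    /* Interleave x and y so texels that are close in 2D stay close in the
       1D address space, which is what keeps texture caches effective. */
    static uint32_t morton_address (uint32_t x, uint32_t y)
    {
      return spread_bits (x) | (spread_bits (y) << 1);
    }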


I have no clue what hilbert space memory organization could possibly be - arbitrarily deep hardware support for indirect addressing? - but it sounds simultaneously very cool and like an absolutely terrible idea.


The framebuffer had a recursive rasterizer which followed a Hilbert curve through memory, the thinking being that you bottom out the recursion instead of performing triangle clipping, which was really expensive for the hardware at the time.

The problem was that when you take some polygons which come close to W=0 after perspective correction, their unclipped coordinates get humongous and you run out of interpolator precision. So, imagine you draw one polygon for the sky, another for the ground, and the damn things Z-fight each other!

SGI even came out with an extension to "hint" to the driver whether you want fast or accurate clipping on Octane. When set to fast, it was fast and wrong. When set to accurate, we did it on the CPU [1]

1 - https://www.khronos.org/registry/OpenGL/extensions/SGIX/SGIX...


Nowadays all GPUs implement something similar (not necessarily Hilbert, but maybe Morton order or similar) to achieve a high rate of cache hits when spatially close pixels are accessed.

3D graphics would have terrible performance without that technique.


Got it. I was imagining something else entirely.


That reads like a tabloid, the way it attacks individuals and t-shirts. I heard the fall of SGI summed up in one sentence once. It went something like, "SGI had a culture that prevented them from creating a computer that cost less than $50,000." That's probably all we need to know.


--> The Innovator's Dilemma


I always find the story of DEC interesting as well.


It was the pinnacle of tech tragedy to see them being acquired by Compaq.

At least until Oracle, of all companies, acquired Sun...


No one else stepped in to cover their offer; it was either Oracle, or being broken into pieces to pay their debts.


Honestly that would have been much better for the world.


I guess people would enjoy being stuck with Java 6, or going back to C++98, or never having experienced a UNIX where C has actually been prevented from its typical misbehaviours for a decade now.


Why? Just because a company gets massively smaller or sells off divisions doesn't mean it's bad.

Almost everybody other than Oracle would likely not have closed Solaris development, for example.

Whoever would have ownership of Java would not just close it. Java was way too popular and would have continued development no matter where it landed.


Solaris development is still going on, even if very slowly.

It was Oracle that introduced SPARC ADI, the very first successful use of hardware memory tagging in commercial UNIXes, where others are still playing catch-up.

Yeah, that is why Oracle still does something like 80% of the work on OpenJDK. Where are those contributors doing language innovation instead of repackaging OpenJDK distributions?


Oracle did far more bad than good and they can go fuck themselves.


They rescued the assets they cared about from a dying company no one else cared about, other than to offer cheap talk and badmouth Oracle.


They continued development on many of the open source projects but made them proprietary again, which required forks of lots of projects. And those projects actually did continue internal development; it would be different if they had just stopped all development.

And even without that, they suck as a company and it sucks to be their customer for the most part.

Even if they release something open source, it's often under a license that prevents anybody else from doing much with it.

They are pretty much one of the worst tech companies in the world and pretty much most people agree with this.


> being stuck with Java 6

OpenJDK would have remained an option and other organizations would be more than able to steer the language.


You mean the same organizations that currently contribute about 10%, and mostly create their own distributions out of Oracle work?


If Oracle wants to control it, they'd better put the effort in.


Someone needs to step in when no one else does.


Also Itanium.


Podcast seems to be gone.


I love all the nostalgia, but the post doesn't really answer the most interesting part of the title: why do workstations matter? I was really hoping there was some revelation in there!


Alan Kay attributes a big part of the advances at PARC to the custom workstations they built for themselves. They cost $20k(?) but ran much faster than off-the-shelf high-end machines at a time when Moore's Law was accelerating CPU speed dramatically. He says it let them work on machines from the future, so they had plenty of time to make currently-impossible software targeting where common machines would be when they finished it.


I am currently working with a hardware start-up that happens to have "some monies" in the bank to deliver what we need. And if I was asked to describe how the culture inside the company feels, I would say "like the early days of NeXT." There's money here to do what we want, there are technically smart guys in the room, nothing is off the table in terms of what we're willing to try, we have a vision of what we want to build, nobody is being an architecture astronaut, and all of us have shipped product before and know what it takes.

Where I am going with all this is that the consumer-grade hardware to run what we're trying to build won't exist for two more years, so we're having to use really beefy workstations in our day-to-day work. Not quite the PARC level of built-from-scratch customization, but not exactly cheap consumer-grade desktops either.


A long time ago I suggested developing on Xeon Phi-based workstations because, in order to run well on future computers, you need to be able to run on lots of slow cores. The idea kind of still holds. These days the cores are quite fast and running on one or two of them gives acceptable performance, but if you can manage to run on all cores, your software will be lightning fast.


Yes, we're very much taking a distributed, multi-threaded approach, but at the same time, the distributed parts are still local to the user.


Are you creating an OS and/or software as well there?


We are not creating a custom OS at this time. We have to be aware of the limits of what we can achieve given the size of our team and the desire to actually get to market in a timely fashion. That said, there's heavy customization of the OS we are using, along with some bare metal "OS? Where we're going we don't need no steenking OS" work. We're more focused on the h/w, the UI and UX that interfaces between the h/w and the user, and the graphics pipeline.


Sounds interesting, thanks!


It also helps if you are Alan Kay or the other talents that were at PARC back then. What future would you create if you had a custom $100K (2022 dollars) workstation?


The NVIDIA DGX Station A100 has a list price of $149k, I believe. It's a workstation that's advertised as an "AI data center in a box":

https://www.nvidia.com/en-us/data-center/dgx-station-a100/


That looks like it would be an absolute hoot to experiment with, but I don't know what I could possibly do with one that would generate a return on $150K. What would you do?


In some circumstances making a hedge fund model 0.001% better would return 10x that.

John Carmack tweeted about buying one a while back. I'm not sure if a DGX on your desk does anything for working with ML at the bleeding edge, though, since those all run on megaclusters of A100s or TPUs.


What's he doing with it?



I think we tend to overestimate how "good" those people were. Yes they were definitely good professionals, but they happened to be in a very special place at a special time with very few constraints compared to how we work now. It was a lot easier for them to innovate than for any of us now.


Folks at PARC designed and built their own PDP-10 clone to get around internal politics. It's hard to overestimate the amount of talent concentrated there at the time.

It always looks like all the low-hanging fruit has already been plucked. So, stop looking for low-hanging fruit.


And they did it as a warm-up before tackling something difficult.


They were working in a total vacuum. Computers were classically giant things that simply tabulated or ran physics simulations. To create an entire well-articulated vision of HCI is extremely difficult and requires both creativity and technical competence. I would not claim that it was easier to innovate then. In fact, I'd say it's way easier to innovate now that so much exists to play with and mix and match, not to mention the ability to have perspective on the negatives of previously made assumptions that can now be corrected.


Well... not a TOTAL vacuum. A number of the PARC people (thinking of Tesler & Kay) were intimately familiar with Engelbart's work at SRI. When the ARC (Engelbart's Augmentation Research Center at SRI) was winding down and PARC was staffing up, the people who left first were supposedly those who rejected the "brittleness" of the expert-focused software ARC developed.

It's definitely true that the cost of implementing a frame buffer fell well into the affordable range as they were moving to PARC. And politics at PARC made it easy to say you were developing a system for "inexpert document managers." They were definitely exploring new ideas about HCI as PC hardware was emerging. But Larry Tesler was pretty clear that the Lisa learned what not to do from looking at the Alto & Star. And the Alto & Star learned what not to do by looking at various bits of ARC software. And Engelbart was adamant his team not repeat the UI/UX mistakes of ITS.

So sure... they were trailblazers, but they had a good idea of where they wanted to go.


> So sure... they were trailblazers, but they had a good idea of where they wanted to go.

And where would that be? Not so obvious. Stick in a random person and they’d have no idea. You could easily say the same thing today. Are you not wanting to make the mistakes of all previous computing and know the direction it should take? If you do, and you’re able to execute and change the direction of computing, you’d be a very rare talent indeed.


Ah, the old "they didn't earn that but lucked into it" argument.


So you're saying that if those SPECIFIC people didn't exist we wouldn't have modern computers. I know Hacker News suffers from an extreme case of hero worship / cult of personality, but I can't agree that technology, and more generally progress, works that way. Every new development is a child of its time; talented people HAVE to meet with opportunity to generate extraordinary results, but I don't think Alan Kay is any more extraordinary as a human than anyone I work with. He just had an above-average opportunity available to him and he seized it.

Same for Page/Brin, find someone as smart as they were and try starting Google now, I bet it's not gonna go the same way.


They allow you to spend much less time thinking about resource constraints and/or performance optimization and just focus on what you're trying to get done and/or do more than would be possible with conventional systems. Workstations let you buy your way past many limitations.

The closest example today would be people like developers, AI researchers, 3D designers and video editors buying high-end video cards (quite possibly multiple) running in Threadripper systems. They're paying up for GPU power and huge amounts of cores/RAM/IO bandwidth/whatever to either do something that isn't feasible on a lower end system or to complete their work much more quickly.


This is correct. I do video and 3D with a Threadripper 3990X with 128GB RAM and a 3090 because I don't want to even think about computational constraints. It is overkill for 95% of my work, but for that other 5% where I am rendering something arduous, it pays off.


I think an analogy to supercars is pretty relevant. They are a minuscule percentage of cars developed/sold but have a disproportionate influence on the car market overall.

I'm sure there are analogies for a lot of other industries as well.

Also - there is no cloud, just someone else's computer. Which is why I will never rely on something like a Chromebook, the web or other modern day equivalents of dumb terminals :)


Let's be clear, those workstations were hella expensive. (The Amiga was not in the true workstation range, but rather more of a glorified home computer. Their workstation equivalents would probably be the stuff from NeXT.) Their closest modern equivalent would probably be midrange systems like whatever Oxide Computer is working on these days. A workstation was simply a "midrange" level system that happened to be equipped for use by a single person, as opposed to a shared server resource. The descendant of the old minicomputer, in many ways.


I'd say when you get into a fully kitted 2k Video Toaster you get into 'workstation' territory, for my potentially personal definition of 'workstation'. For me a 'workstation' is a machine built and optimized for a task that primarily runs that task and that task only. It is sometimes the 'core hardware' that is interesting, but often many of the peripherals are more interesting. Things I consider workstations include Avid and other video editing systems, machines built for CAD, and yes, many of the 'desktop' SGI machines which generally did nothing but run software like Softimage all day every day.

The 'workstation' largely died because general off-the-shelf machines became fast enough to perform those tasks almost as well. You now see a more open market for the peripherals that help 'specialize' a general purpose computer: Wacom tablets, video capture devices, customized video editing controllers, MIDI controllers, GPUs, etc.


Yep, the closest I ever got to an SGI was drooling over their product brochures as a kid. The cost of a modest Indy was about the same as a mid-range car. It's hard to grasp as a modern PC user that these workstations could handle classes of problems that contemporary PCs could not, no matter what upgrades you did. Today, it would be like comparing a PC to a TPU-based (or similar ASIC) platform for computing.

From what I've read, Oxide is making racks of servers and has no interest in workstations that an individual would use.


When a game company I worked at went out of business and couldn't unload their aging Indigo Elans and Indys, I picked up one of each for about a hundred bucks. I now have some regrets, simply because their monitors have strange connectors, so I keep them around and they are heavy and annoying to store. That said, I could probably pay off my initial purchase and then some by unloading one of their 'granite' keyboards (Alps boards, collectors love them).


That 13W3 connector is the worst. I also had an Indy many years ago and getting an adapter together for it was a real challenge. These days I expect it to be somewhat simpler though.


Anecdotally, a friend had a computer store that sold Amigas and had his entire inventory bought out by the CIA, who never paid him, so they must have been powerful for something. This was in the late 90's. No idea what they were using them for. I used one to help a friend run a BBS. I could play games with incredible graphics whilst the BBS was running in the background.


If it was late 90's, as much as I love Amiga, it would have been for niche stuff like replacing a bunch of information screens or something like that where they could have replaced it with PCs but would then need to change their software setup. In terms of "power" the Amiga was over by the early 90's, even if you stuffed it full of expensive third party expansions. It still felt like an amazing system for a few years, but by the late 90's you needed to seriously love the system and AmigaOS to hold onto it, and even for many of us who did (and do) love it, it became hard to justify.


Could have been a case of designing a platform around the hardware in the early 90s, then being desperate for parts to keep the platform going while designing the upgrade.


Maybe, in which case it'd likely still be something video-oriented. The Amiga was never particularly fast in terms of raw number-crunching. As a desktop computer it felt fast because of the pre-emptive multitasking and the custom chips and the use of SCSI instead of IDE. Even the keyboard had its own CPU (a 6502-compatible SOC) on some of the models - "everything" was offloaded from the main CPU, and so until PCs started getting GPUs etc. it didn't matter that much that Motorola from pretty early on was struggling to keep up with the x86 advances.

But for video it had two major things going for it: Genlock, allowing cheap passthrough of a video signal and overlaying Amiga graphics on top of the video, and products like the Video Toaster that was initially built around the Amiga.

So you could see Amigas pop up in video contexts many years after they were otherwise becoming obsolete because of that.


Amigas were used for a lot of weird video things like touch-screen video kiosks. Genlock a serial-controlled laserdisc player to the Amiga and put it in a cabinet with a serial-port touch screen.

A PC could certainly replace it by 2000 but if you developed your content in the mid-1980's then Amiga was probably your solution and you needed to keep it going for a while.


It seems entirely plausible to me that three letter agencies could also have done render farm type things like this: http://www.generationamiga.com/2020/08/30/how-24-commodore-a...

(I think this is a subset of your comment rather than an extension but Babylon 5 reference ;)


That'd be Video Toasters. But NewTek ditched Amiga support in '95, and by the late 90's PCs or DEC Alphas would cream the Amigas for the render farms.

Even Babylon 5 switched the render farms for seasons 4 and 5.

Not impossible someone would still want to replace individual systems in a render farm rather than upgrading, but given the potential speed gains it'd seem like a poor choice.


Yeah, I had a friend in the late 90s that used an Amiga with Genlock to fansub anime. I wouldn't be surprised if the CIA had some generic media rack kit or whatever that did something similar.

People also kept Amigas going past their prime for apps like Deluxe Paint.


Hell even in the early aughts you would still see video toasters in use at small local television stations until they were finally killed by HD.


More likely they needed to develop exploits for Amiga driven SCADA systems.


No kidding:

https://daringfireball.net/linked/2019/12/17/sgi-workstation...

> The Octane line’s entry-level product, which comes with a 225-MHz R10000 MIPS processor, 128MB of memory, a 4GB hard drive, and a 20-inch monitor, will fall to $17,995 from $19,995.

Really makes the M1 Ultra look affordable.


The Indy, which predated the Octane, started much lower ($5k according to Wikipedia, presumably in mid-nineties dollars), but yeah your point very much stands.


The Indy, though, was notoriously underpowered. Very much the “glorified home computer” the GP described, albeit running MIPS.

Still, sure did stand out in the MIT computer labs!


Indys weren't truly that slow. The problem was the base models were memory constrained to the point where IRIX could barely boot. 16MB was not enough, and IRIX 5.x had memory leaks that made it even worse. An Indy with 96MB+ will run IRIX 6.5 pretty well.


That sounds right. I believe most or all developers at the place I worked had either 32 or 64 MB in their machines. At first (~1995) most were probably using IRIX 5.3, but by 96 or 97 I think most if not all had moved to 6.5.

Whatever I had, I don't recall lack of memory ever being a problem. And the GUI was quite snappy.


The GUI was fantastic: minimal, got out of your way as much as possible, and used the hardware acceleration to great effect. IRIX 6.5 was rock solid; I used it as my main driver for years before switching to Linux. We also had some Windows boxes floating around because we supported a Windows binary, but that and admin were the only things done on those; everything else was either SGI or Linux. I was still using my SGI keyboard two years ago but it finally died.


I seem to remember a very classic rant about how bad Irix was back then but I can't seem to find it.


In some places SGI did a great job of giving good deals to computer labs. When I was at university in Oslo, there were rows and rows of Indys on one side of the biggest undergrad computer lab, and then a bunch of Suns with multiple monochrome Tandberg terminals hooked up on the other.

No big surprise that the Indy side always filled up first, and that "everyone" soon had XEarth and similar running as backgrounds on the Indys... Of course "everyone" loved SGI and were thoroughly unimpressed with Sun after a semester in those labs.


There's a running joke about the Indy that it's the Indigo (its much-more-expensive brother) without the "go".


> Really makes the M1 Ultra look affordable.

The amount of power you can buy today for under $1000 let alone under $10000 is insane compared to back then. The M1 Ultra is not that expensive compared to mid-range workstations or even high-end PCs of previous eras.


Yeah, I don't quite get the way people sometimes reminisce about the hardware costs of the past. We used to have consumer CPUs topping $1000 back when that was some serious money, and big-boy graphics workstations could easily run into the tens or hundreds of thousands.


Non-toy computers were only available to the relatively wealthy for nearly a decade. The original Apple II was the equivalent of around $5000, which certainly wasn't a casual purchase for most people.

If you look in back-issues of Byte the prices of early PCs with a usable spec are eye-watering, even before correcting for inflation.

Prices didn't start dropping to more accessible levels until the 90s.


Buying an IRIX workstation is like buying a Countach. It's a thing you have because it's the thing that you always wanted.


I used to run an e-mail service with ~2m user accounts '99-'01. Our storage was an IBM ESS "Shark" stocked with 1.5TB of drives and two RS/6000 servers as the storage controllers.

Add on web frontends and mail exchangers, and the entire system was slower and had less aggregate RAM and processing power, less (and slower) disk (well SSD in my laptop) than my current $1500 laptop.


That's just over $31,000 in 2022 dollars. I don't think I can even imagine what kind of modern desktop you could build for that much money.


Could easily get there with a Mac Pro: https://www.apple.com/shop/buy-mac/mac-pro/tower


I'm not even sure you could build a $31,000 desktop computer even if you wanted to without resorting to some ridiculous "expensive for the sake of being expensive" parts. Even quad RTX 3090 Ti's would only set you back $8,000 if you got them at MSRP.

EDIT: Just saw the other comment and I stand corrected.


You can run up costs pretty much arbitrarily with big memory and big storage. 2TB of RAM in a workstation will run you at least $30k if not more (it was $40k last time I checked), and you can go as high as 4TB in current systems. And big storage and NVMe arrays, it's almost a matter of "how much you got?", you can really scale capacity arbitrarily large if you've got the cash (although performance won't increase past a certain point).

This was always the dumb bit with the "Apple wants HOW MUCH for a Mac Pro!?!?" articles about the "$50k Mac"... it had $40k of memory in it alone, and the "comparable" systems he was building maxed out at 256GB theoretical and 128GB actual. That's great if it works, and using a lower spec will push costs down on both sides, but it's not comparable.


Top-of-the-line 512GB LRDIMM DDR4 will run you about $2,500 before tax if you buy name brand Samsung. I know this because that is what is in both of my dual Xeon workstations. It gets pricey when you go through Dell or HP of course.


> 2TB of RAM in a workstation will run you at least $30k

The trick here is to use a board with 32 dimm sockets -- which requires an oddball formfactor-- but it radically lowers the cost of reaching 2TB.

But your point remains, change your target to 4TB ram (which really isn't an absurd amount of ram) and the astronomical costs come back (unless you go to 96 dimm socket systems, which have their own astronomical costs).


>I'm not even sure you could build a $31,000 desktop

A decked-out Mac Pro can reach over $50,000 and it's not even as powerful as your 2x 3090 Ti example, but that's the Apple tax for you.


Have you tried to price out a comparative PC? If you even can? Because there aren't many that will take as much RAM as that $50K Mac Pro, and when you do find a PC that will, all of a sudden you realize there isn't much of an Apple tax at all for the equivalent hardware.

Want to argue that Apple should have more variation in their offerings and price points? Sure - I heartily agree. But blithely tossing out a contextless $50K price tag as being some sort of "tax" is just silly.


>Have you tried to price out a comparative PC? Because there aren't many that will take as much RAM as that $50K Mac Pro, and when you do find a PC that will, all of a sudden you realize there isn't much of an Apple tax at all for the equivalent hardware

You're deluding yourself with your claim. If you had done 30 seconds of googling, you would have found out that Dell, HP, Lenovo and Puget Systems all sell workstations that can beat the top-spec Mac Pro, or you can build one yourself for way cheaper than them, but as a business you want the support and no headaches, so it makes sense to buy prebuilts.

So yeah, there's an Apple tax on it.

https://www.imore.com/53000-pc-competition-apples-53000-mac-...


It's reasonably easy to build a Threadripper workstation with 1TB of RAM (upgradeable to 2TB) and two 3090s for $20,000.

LambdaLabs will even sell you a prebuilt around that price with the same specs.

That said, I wouldn’t personally get into a pissing match over x86 Apples. I think their ARM offerings are far more interesting.


Quad RTX A6000s would be $24k and that’s what would go in a “workstation”


Venturing off-topic a bit here, but what exactly makes a "workstation" GPU? What's the difference between an RTX A6000 and an RTX 3090?


ECC RAM, a different cooling setup (blower vs. side fans), very different thermal characteristics, 24GB or 48GB of VRAM, usually more bus width, optimized paths for data load & unload, GPU interconnects for direct GPU-to-GPU communication, shareable vGPUs between VMs, GPU store & halt, hardware support for desktop state, and GPU state hand-off to another machine. It isn't just a "more memory" kind of thing.


> GPU interconnects for direct GPU to GPU communication

Note that NVLink is also available on the 3090, which also has the same bus width as the A6000.


So, interestingly, to get multiple RTX 3090 cards using NVLink you really need the blower style, which Nvidia recently stopped making in the consumer line; otherwise you are fighting thermal properties and getting into exotic cooling solutions. You also get into placement in slots on the motherboard with three-slot cards, and then into the number of PCIe PSU connectors on most supplies.

You also get into weird corner cases: four-way NVLink on 3090s acts as a "single card", but four-way NVLink on the RTX A5000 is still independent cards as far as the OS goes. Regarding NVLink, 3090 cards only support NVLink (NVLink 3.0 IIRC), rather than NVSwitch. With NVLink/NVSwitch, with two RTX A5000 cards I get a total of 48GB of usable RAM for storing the machine learning models, and so on for multiple cards in the RTX A5K & A6K & AX00 line. But with the consumer cards, even though each card has, e.g., 12GB of RAM on it, your deep learning model is also limited to 12GB of RAM, no matter how many 3090 cards you put in.

You're also limited to a maximum of four 3090 cards in total, whereas with NVLink & NVSwitch on the professional-grade cards you are limited only by how many cards you can stuff in your data center - technically it is 16 GPUs on a single switch IIRC; things get funky with the fabric once you progress beyond 16 GPUs on a single switch, but that's outside the scope of this conversation. I may also have some details wrong, my brain is a little fuzzy after dinner.


The A6000 has ECC memory and the 3090 does not. I think that's the chief differentiation between workstations and any other kind of desktop computer. Like a server, they will have ECC everywhere.


In addition to the other stuff people posted, you also get to use the certified GPU drivers, which means they actually tested that the card works 100% with AutoCAD or whatever.


The main difference is you get twice the memory. If you don't need that, there is very little reason to get an A6000.


The RTX is really slow for FP64.


https://zworkstations.com/configurations/3010617/

24x 4.5GHz cores, 96GB memory, 48TB NVMe storage, 2 giant GPUs, etc.


That's wild, although it seems to be server parts in a workstation case. I guess none of Intel's "desktop" chips support a dual/quad CPU configuration though, so that's your only choice. Quad 8 TB NVMe drives are definitely one way to get to $30K of parts pretty quickly.


Neither Intel nor AMD supports SMP with consumer chips. To go dual-processor with AMD you have to buy EPYC SKUs, which are several times more expensive than their Threadripper core-count equivalents.


FWIW, EPYCs sell on eBay with $/core prices much closer to Threadripper prices; presumably that's closer to what AMD is selling them for to large companies after discounts.

The MSRP on them is ... quite staggering though!


Well, dual Xeon SP2 CPUs, multiple RTX A5000 GPUs, 30TB of SSD storage, 512GB of RAM and dual Blackmagic quad-input 4K capture cards can get you pretty darn close when it comes to your computer vision work.


You can easily get well past $40k once you start adding some Quadro GPUs, 192GB of RAM and a few TBs of PCIe storage into any of the mainstream manufacturers' workstation products.


A loaded up Amiga (i.e. add a CPU accelerator board, more RAM than most PCs could handle, specialized video processing cards etc) could get into the low end of workstation territory. But you are right that architecturally, they had more in common with high end PCs than workstations of their day. The Amiga's main claim to fame from a hardware standpoint was their specialized chipset.


My late dad was a huge Amiga fan back in the day. I was just a little kid at the time and didn't see what the big deal was.

Looking back at what it was capable of though...they were doing 256 colors and sampled audio at a time when x86 was still pushing 16 colors and could only produce generated tones through the speaker built into the case.

There was some really good music on the Amiga, too. Some of my favorites:

Hybris theme: https://youtu.be/Siwd7b0iXOc

Pioneer Plague theme: https://youtu.be/JSLcN6GBzO0?t=17

Treasure Trap theme: https://youtu.be/n5h_Wu7QRpM

And of course, you can't mention Amiga music without also mentioning Space Debris: https://youtu.be/thnXzUFJnfQ


> they were doing 256 colors and sampled audio at a time when x86 was still pushing 16 colors

4,096 colors.

https://en.wikipedia.org/wiki/Hold-And-Modify


I knew it could do 4,096 colors and I played around with it in Deluxe Paint, but I don't recall any games that used it. Being a kid at the time, the games were all I cared about.


Games span a bit of a range. With the copper you could change the palette while the screen updated, so you could do more than 256 colours on AGA or more than 64 on ECS Amigas, but even on AGA Amigas (A1200, A4000, CD32) it was rare for games to even reach 256, because of memory bandwidth. 32-64 was a more common range, and even that often relied on copper tricks: using fewer bitplanes meant less memory bandwidth used, so it was often worthwhile making up the difference with copper palette changes.

A handful of games did use HAM for up to 4096 colours, but mostly for static screens (there's somewhere in the region of half a dozen exceptions total).
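For anyone curious what a "copper trick" actually is: the copper is a tiny co-processor that executes a list of WAIT/MOVE pairs in lockstep with the video beam, so you can rewrite the palette registers partway down the frame. Here's a rough, untested sketch in C of what such a list looks like (the COLOR00 offset and the WAIT/MOVE encodings are per the Hardware Reference Manual; real code would also have to allocate it in chip RAM, point COP1LC at it and restart it every frame):

    #include <stdint.h>

    /* Sketch only: a minimal Copper list that rewrites the background
     * colour register (COLOR00, offset 0x180 in the custom-chip space)
     * partway down the frame.  Each Copper instruction is two 16-bit
     * words: MOVE = (register offset, value), WAIT = (beam position with
     * bit 0 set, compare mask). */
    static uint16_t copperlist[] = {
        0x0180, 0x0000, /* MOVE #$000 -> COLOR00: black at the top        */
        0x6407, 0xFFFE, /* WAIT for the beam to reach raster line $64     */
        0x0180, 0x0F00, /* MOVE #$F00 -> COLOR00: background becomes red  */
        0x9C07, 0xFFFE, /* WAIT for raster line $9C                       */
        0x0180, 0x000F, /* MOVE #$00F -> COLOR00: and then blue           */
        0xFFFF, 0xFFFE, /* WAIT for an impossible position = end of list  */
    };

Games did the same thing to the playfield colour registers (COLOR00-COLOR31), swapping in a fresh palette every few scanlines, which is how a screen with only 4 or 5 bitplanes could show far more distinct colours than its nominal 16 or 32.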


Yeah, Amiga was light years ahead of PC and others in the music scene. It was truly incredible at the time. ProTracker blew my mind -- you could just.. make music, by hand. So many games had rich soundtracks. Not to mention the demo scene, arguably an art form, blending audio and visual effects in a constant race of one-upmanship while squeezing every last bit of performance from these ancient chipsets. Or the whole genre of cracktros (short intros put in bootloaders before games by the crackers / pirates distributing them). I still have this as a ring tone lol: https://www.youtube.com/watch?v=JAmTHCgvtnQ

Edit: I'm just going to throw this in here for good measure: the Lotus Turbo Challenge II soundtrack https://www.youtube.com/watch?v=vETonlaTZ4c

I mean, somebody made an instrumental cover of it years and years later, that's how it stuck in some people's minds! https://www.youtube.com/watch?v=b6Ijk17osUc


The Hybris music is kind of peculiar for the Amiga, since it uses a ton of fast arpeggiated chords, like you had to do on the 8 bit computers to play chords at all (but on the Amiga, you could just sample the chord and still have three hardware channels left for the rest of the music). It was essentially retro even when it was made!

Hybris was a gorgeous game too. It had top-notch, arcade-quality graphics back when it came out in 1987.


I still boot up my maximum-spec Octane2 (2x600MHz R14k, 8GB RAM, VPro V12, PCI shoebox) every so often to bask in the good ol’ days.

After Nekochan went offline, there isn't really a central gathering place for SGI fans anymore, but we are out there.


That's a beast of a machine!

In my home office I have a little mini museum that consists of display of esoteric 90's workstations:

* Apple Quadra 610 running A/UX (Apple's first UNIX)

* NeXT NeXTstation Turbo

* SGI Indigo2 IMPACT10000

* Sun Ultra 2 Elite3D

* UMAX J700 (dual 604e) running BeOS

* HP Visualize C240

All working and all fun to fire up and play around with from time to time. Tracking down software to play with is a challenge at times, since most of what I want to fiddle around with is proprietary and long since abandoned (Maya, CATIA, NX, etc.). If by some chance we were to end up on a conference call, you'd see them displayed in the background. :-)


neat, here is my museum:

* Apollo Domain 4500

* HP 9000

* m68k/25 NeXTstation

* NeXTstation Turbo

* NeXT Cube with NeXTDimension card

* SPARCstation 5

* SGI OEM machine from Control Data with MIPS R2000A/R3000

* SGI Indy

* IBM RS6000/320H

* IBM RS6000/250

* Cobalt Qube 2700D

* Sun JavaStation1

* Sun Ray1

* SPARC IPC

* Alpha-Entry-Workstation 533

They are in storage at my grandmother's now, and I don't know if any of them still run. Some of these I was using actively as my workstation at home; some were just to explore. As I got more and more into free software, dealing with the nonfree stuff on those machines got less and less appealing, though I was also running Linux on the machines that were supported.


My museum is quite small in comparison:

* HP 9000 712/60

* Sun Ultra 1 Creator (with SunPC DX2)

* Mac Quadra 610

These are still working, albeit waiting for recapping. The Gecko is running NeXTSTEP for the original id Software DOOM map editor, and the Ultra and Quadra are on the original OS they came with. Would love to get more Sun and SGI hardware but the prices are getting quite out of hand...



My 900MHz V12 DCD Fuel says hi. I miss Nekochan.



Forgot to add the DCD. :)


That's like driving a classic in modern-day traffic: it's a bit slower, but it does the job with elegance. Nice rig!


Sounds like it will run circles around my Indigo2 R10K in the workshop. What do you do with all that power?


Sometimes I miss 3dwm, though Apple stole a lot of its best ideas and put them into the original OS X.


There were some attempts at getting 3dwm to be ported to Linux, but I'm not sure what came of them.


It was thanks to SGI's hosting of the C++ STL documentation (the pre-ISO/ANSI version) that I learned my way around it.

Being a graphics geek, I also spent quite some time around the graphics documentation.

For me, one of the biggest mistakes was only making IrisGL available while keeping Inventor for themselves.

To the subject at hand, this is one difference I find with most modern computers: the lack of soul that comes from a vertically integrated experience blending hardware and software.


I have fond memories of the SGI machines - workstation and larger - I worked on in the 1990s and early 2000s. Octanes, O2s, Origins, Indigos, and so on.

They were best-in-class for visualization, and when used with StereoGraphics CrystalEyes hardware/glasses, 3D was awesome. We also rendered high-quality POV-Ray animations on an O2 in 1996, when that software was barely 5 years old!

My last big computing efforts were on an SGI Origin 2000 (R12000) in 2002, and the allure of that machine was being able to get 32 GB of shared RAM all to myself.


> 3D was awesome

Around 1999 or 2000, I was able to see how some of the big energy companies in Houston were using SGI machines' 3D capabilities.

They'd have rooms about 10 feet square with projectors hanging from the ceiling that would take seismic data and render it on the walls as colorful images of oil deposits and different strata of rocks and gas and water and such. Using a hand controller, the employees could "walk" through the earth to see where the deposits were and plot the best/most efficient route for the drilling pipes to follow.

Pretty much today's VR gaming headset world. Except, without a headset. And this was almost a quarter of a century ago.

I can't imagine what the energy companies are doing now, with their supercomputers and seemingly limitless budgets.


I saw a driving simulator built with an actual car and a couple of RealityEngines driving projectors that were projecting on screens all around the car. It was a pretty impressive setup.

Now you can probably build that out of Forza, a decent gaming PC, and some hobbyist electronics.


Most of my memories of SGI machines from the 1990s are not so good. As I remember it, SGI seemed to value looks and on-paper performance over "it works".

There was the professor who bought an SGI machine but didn't put enough RAM in it, plugged it into the AC power and Ethernet, couldn't do anything with it, and left it plugged in for a few years with no root password.

There was the demo I attended up at Syracuse where a pair of identical twins from Eastern Europe were supposed to show off something but couldn't get it to work; Geoffrey Fox's conclusion was "never buy a gigabyte of cheap RAM", back when a gigabyte was a lot.

When SGI came out with a filesystem for Linux I could never shake the perception that it would be a machine for incinerating your files.


<shrug> I had an SGI Indigo back in the mid-90s, and it functioned fine as a Unix workstation, as well as being very useful for the weather satellite imagery work I was doing. It ran circles around the Sun machines I had access to at the time.


My first job was at a company that made visual simulation software, and literally everyone had an SGI on their desk (the Indy), and there were also a few refrigerator-sized machines in the server room.

I did development work on the Indy and a much larger 8(!) CPU machine daily for several years. I remember there were some complications related to shared libraries (which, IIRC, were a relatively new feature at the time), but overall I remember those machines working quite well. The Indy was a great daily driver for the time.


In the mid-1990s I was at a military research lab doing VR research. SGIs all around, working quite well. The Indy on my desk worked well as a workstation, and the larger machines (High/Max Impacts, Indigo2s I think) blew me away with their hardware-accelerated GL (IrisGL, still, I think) apps. They worked very well for what we bought them for, and I was sad to see the company get eaten away by competition that had much, much, much cheaper solutions to the same problems, but none of the pizazz or desktop UI. Intergraph on NT, mostly.


I saw some of that happen too, but it wasn't appreciably different from other expensive Unix workstations in that respect. That is, while there were people getting actual value from them, there were also people buying them that didn't need them.


I remember the SGI machines well. IRIX was easily my favorite graphical UNIX.

We used SGI Indy systems for network administration at the university, and SGI Octanes for niche graphical applications and databases, but they were always considered an expensive luxury for both of those use cases. Nearly every other UNIX system at the university back then was Sun Microsystems.


I spent many years working on high-end TV weather graphics on SGI Indigo, Onyx, and the O2 (toaster-oven-shaped) boxes; they were remarkable for their time, and the hours and hours it took to render graphics made for really nice downtime at work, letting me say things like "sorry, can't do anything, graphics are rendering".

The best source for hardware ended up being a local university surplus shop where we could get the big SGI monitors for pennies on the dollar.


In the mid-nineties I worked for Parallax Graphics in London, doing support for their SGI based paint and compositing software. One application, Matador, was heavily used in the video and film industry - also for TV and things like weather forecast graphics. Feels like ancient history. I loved working on my SGI and learning a bit of Irix.


What is old is new again. I am running a query against a very large datalake since yesterday. Only thing is, with the elasticity of resources in the cloud, I have no plausible reason to not be doing other work :-/


I have a hard time taking anyone seriously when they drop something like this: "MacOS felt a kind of dumb, and does so ever since" ... I mean... macOS is just *nix these days and has been for 20+ years. I jump back and forth between it and Linux pretty much all day long and I see nothing that indicates that macOS is any dumber than Linux.


I used to be in that camp until I actually used a MacBook. For some reason I was convinced that it wasn't "real" Unix, unlike Linux.

It was a naive perspective.


It's definitely more locked down, less open than something like Linux or BSD and less developer-friendly (signed software, etc.), which takes away from the Linux/Unix hacker ethos IMO.

I respect that Apple makes good quality hardware, but I wish there was an equivalent that was more developer-friendly. System76 is almost that but not quite.


On the command line it's a decent Unix, sure, but no proprietary Unix can measure up to Linux nowadays.

They all suffered from a lack of package management and old versions of command-line tools; you almost always had to manually install better tools like the GNU ones.

I did use a MacBook for some time, but the only reason I managed was that most of my work is on remote servers, so most of the terminals on my Mac were running Linux anyway. Yet when I switched back to a Linux machine as my main workstation I just immediately felt better, and didn't miss the Mac at all. And now when I use the Mac I really just want to go back to Linux.


I ran Linux full time for a decade, and I feel like the latest versions of GNOME are actually very good, but sadly the lack of software support is what keeps me on macOS, particularly for media. Final Cut Pro is, in my opinion, a much better video editing suite than Lightworks (the best editor I'm aware of on Linux), and there really isn't anything even comparable to Toon Boom on Linux [1].

The media scene on Linux is definitely improving (Blender and Lightworks and Krita have gotten good) but I think it still has a while to go before I'm fully able to abandon my Mac setup. Honestly I just wish Darling would improve [2] enough that I could run everything I care about within Linux.

[1] I actually did google and apparently there is an OpenToonz Snap package, so I could be wrong on this. I'll need to play with it.

[2] No judgement to the Darling team, I realize it's a difficult project.


It never came with a standardized package manager, and many user tools are ancient. Newer versions won't let you turn off telemetry services because they are started in a read-only boot volume. It's pretty but pretty dumb at times.


I never saw an SGI in "personal computer" mode until the sun was setting - they were always being banged on by departments, or rendering 24-7. I'm jealous of anyone who had one to themselves when they were still a power.

The Amigas were something else, though, but every time I ran into one it was getting its lunch eaten by the nearby Mac. Only for a year or two, in TV production environments, did I see an advantage with Amiga.

Now, with both among my collection, the SGI is the one I turn on most frequently (when the power grid can handle it).


You were probably seeing the Amiga too late in its life - it was only really impressive in the '80s, but it was really impressive in the '80s (especially when it was competing with the 286, EGA & PC speaker).


Actually, SGI released the first affordable, decent flatscreen monitors around 2001 or so (the 1600SW TN monitors that Apple rebranded with translucent cases at the time).


Ha, the 1600SW! As a minor around that time I bought out the remainder of the 1600SW inventory that SGI was clearing out for ~$550 a piece ... and proceeded to make more money reselling them than I had made in my entire life before that, internships included.

To this day I love the on-brand SGI industrial design aesthetic.


Monitors that required a special video card to use with a PC. At least the 1600SW did. IIRC it came with a Number Nine Revolution IV card or some such.


You could order an external DVI-to-LVDS converter (or whatever the native connector/signalling was) later when it became clear DVI was set to become the standard.


It was several years after the 1600sw came out that I had a machine with DVI. I've still never seen one in person. The industrial design looked really nice but I have no idea if the panel quality was decent.


The TN panel, especially the black level, was crap compared to modern IPS LCDs, let alone OLEDs, but was decent enough that SGI guaranteed a reference color gamut, with compensation algorithms over the panel's lifetime. I bought mine towards the end of the product's lifecycle at a good price with the converter included, and ran it with high-end (at the time) Nvidia cards on Linux without problems.


I'm glad to hear the panels were nice or at least not the same crap as early VGA panels. My first desktop LCD panel was a G5 iMac. I had a big Trinitron monitor for a long time and lusted after a 1600sw.


The UI of SGI's IRIX was better than the current macOS in some respects, e.g., sound effects. I wish there were more competition in computer UI.


Have you seen how many desktops Linux has?


I still power up my Amigas and Indigo2 10k Max Impact occasionally, just to think of how much that purple computer cost back then (I worked on them, VFX).

Anyway, here's a desktop for Linux that mimics IRIX 4Dwm/Motif: https://docs.maxxinteractive.com/ if you're into that sort of thing.


Talking about "the cult of SGI", and then using the new logo instead of the old cube logo, that's blasphemy! :-D


Had an Amiga (500/A4k40), always wanted an SGI. We were at several CeBITs asking SGI to sell us a machine for the 3D graphics, but it didn't happen (we bought lots of monitors at CeBITs though). Later I worked with SGIs in the '90s at my first developer job <3


You can emulate a machine with QEMU and run IRIX there to enjoy the GUI.

The killer apps on the platform were the various proprietary high-end graphical and 3D suites. These are much better on modern computers now anyway.


My favourite thing about my Indigo2 (bought second hand for relatively cheap in the early '00s) was that unlike all the x86 kit I had running, it survived power brownouts with a line logged to console mentioning it had happened.

When the area I was living at the time was having periodic power issues I'd check the console any time I got home, and if there was a new log message I knew I'd need to bring all the x86 kit back up once I'd had a coffee or three.


From reading about it, I was under the impression that the emulation was painfully slow, so I did not even try it. Is it possible to get close to a real machine under emulation with a current computer?


Real experience for what workloads? Clicking around in the colourful purple GUI and typing commands in the terminal? Yes.

Running Hollywood-level graphical software, perhaps no. But I never tried.


As a side note, back in the '90s I dabbled in the hobby of FPS level design. Some tasks, for example calculating light for Quake I maps, could take a lot of computational time to complete. I fondly remember that people back then discussed a lot about purchasing powerful rigs, or even workstations, to build very large maps for games such as Quake, Unreal and such. A typical machine at the time IIRC only had around 32-128GB (sorry, MB) of RAM, which was good enough for gaming but fell short for level design tasks. Even opening large levels for Half-Life required a good machine.


32MB-128MB, not GB, presumably.

Our desktop workstations in '95-'96 had 16MB. Our servers had 128MB and it was extravagant (and cost way too much - we should have made do with half that or less).


Yeah you are absolutely right. I'll correct.


I want to say that id was cooking BSPs on SPARCs or SGIs at one point.


They did have the big bucks :D. I'm wondering what the setup is for designers of modern games such as Skyrim.


Most of what I remember about '90s RISC workstations is how unbelievably slow they were. An SGI Octane in 1998 was giving integer performance about the same as 1996-released Pentium Pros. And that's why RISC died: not because x86 was cheaper, but because it was both cheaper and faster. Sometimes dramatically faster. The idea that RISC was somehow elegant turned out to be a myth. The complexity of x86 mattered less as cores grew in size and sophistication, but the code compression properties of CISC continued to benefit x86.


It depends. Before peecee memory architectures got gud in the early oughts, the Octanes were sufficiently better for big finite element codes than Intel based machines that you'd still see them used for that. But it didn't last, and you're right that the writing was on the wall already in the mid nineties. I got a Pentium 90 running Slackware running some less demanding scientific code faster than a pretty loaded Indigo2 in that era.


>I got a Pentium 90 running Slackware running some less demanding scientific code faster than a pretty loaded Indigo2 in that era.

Well, FVWM and urxvt were lighter than MWM and IRIX's winterm setup.


RISC is better if you have finite money and want to build a chip.

The simple reality is that Intel had the Wintel monopoly, and they had gigantic volume and absurd amounts of money to invest. If you compare the size of the teams working on SPARC to what Intel invested, it's totally clear why they ended up winning.

> The idea that RISC was somehow elegant turned out to be a myth.

No, it didn't. The reality is a bunch of literal students made a processor that outperformed industry cores. Imagine today if a university said 'we made a chip that is faster than an i9'.

The early RISC processors were, with a pretty small amount of work, incredibly competitive.

So yes, it was actually amazing and revolutionary and totally changed computing forever.

That this advantage would magically mean 'RISC will be the best thing ever for the rest of history' is a pretty crazy demand to make for it to be called a revolution.

> code compression properties of CISC continued to benefit x86.

Not actually that much: the code density of x86 for 64-bit systems isn't all that amazing. It's certainly not why they won.


The moment at which it seemed like the RISC people were really on to something important was the moment when the size and complexity of the x86 frontend was really quite large compared to the rest of the core. Now you can't even find the x86 decoder on a die shot, because it's irrelevant. The 512x512b FMA unit is like the size of Alaska and the decoder is the size of Monaco. So the advantages of RISC were overtaken by semiconductor physics for the most part.


>The 512x512b FMA unit is like the size of Alaska and the decoder is the size of Monaco. So the advantages of RISC were overtaken by semiconductor physics for the most part.

There still is a hardware advantage (has nothing to do with sizes, everything to do with complexity), but let's ignore that.

RISC being simpler doesn't just help the hardware. It also helps the software, the whole stack.

Extra complexity needs strong justification. RISC-V takes that idea seriously, and this is why it already has the traction it does, and is going through exponential growth.


RISC-V has many nice properties but it didn't exist 25 years ago so what does it have to do with why those companies and their objectively inferior CPUs disappeared?


It seems to be more about skill and budget of the development teams, and those are both getting bigger. I think a major underlying factor is the massive increase of transistor budgets relative to clock speed and latency to memory. That pulls every architecture down the path of big caches, speculation, specialized functional units, multiprocessors, etc, that add complexity dwarfing anything in the front end. If I was starting fresh the labor to do x86 would be a handicap but on the other hand Intel switching to RISC-V or whatever wouldn't do anything for them.


Again, if you give two teams $50M to develop a new processor, one using x86 and the other using RISC-V, I have no question in my mind which team would come out ahead.


That is exactly the ivory tower attitude that torpedoed all of the RISC workstation companies. Nobody, literally not one single customer cares how easy or hard it was to design and implement the CPU. They only care how much it costs and how fast it goes.


It's the opposite of ivory tower; it's a simple analysis of complexity.

To make the argument that CISC had anything to do with it, you'd have to say that Sun or SGI should have released their own CISC ISA in the mid-'90s. That would have been literally crazy.

> Nobody, literally not one single customer cares how easy or hard it was to design and implement the CPU

The companies who produce them care.

> They only care how much it costs and how fast it goes.

Yes, and I explained why x86 won based on that logic, so I don't know what you're arguing.


>They only care how much it costs and how fast it goes.

Parent is telling you: At any given development cost, you'll end up with a faster CPU if you go RISC.

This is why almost every new ISA to meet success in the last three decades has been RISC.


>The idea that RISC was somehow elegant turned out to be a myth.

Citation needed.

Still, if you're thinking about Intel's success, I can tell you there are two factors behind it: IBM PC clones and Intel's monopoly on advanced fab nodes. These were enough to overcompensate for CISC being trash.

>the code compression properties of CISC continued to benefit x86.

x86 had good code density. AMD64 (x86-64) has really bad code density, dramatically worse than RV64GC.


The expense of fab node transitions, back when every chip manufacturer built their own fabs, seems to have contributed significantly to the demise of a fair few architectures over the years.


If anybody has podcasts or very good video resources about the '80s/'90s computer industry, I would be interested. I couldn't find very much, just lots of bits and pieces.


Look up the Computer Chronicles https://archive.org/details/computerchronicles


I was a 16-year-old kid back then, buying PC Magazine only to see the new models of SGI workstations. I drooled so much, and I remember the crazy prices back then, $30k+ for small workstations. I loved the style of the cases, the colors and the OS, and then I hit 19 and started working with a Mac and my desire to acquire an SGI was gone.


Irrelevant knowledge: "SGI" is also the abbreviation of a top cult organization in Japan (Soka Gakkai International).


SGIs were the first Unix systems I used. We had a lab of them in school. They worked wonderfully for me and I never lost any files on them. There was also a really good Unix Sysadmin that maintained them and was available if you had any questions.


I remember seeing an SGI Octane [0] at Comdex. I think it was 1997 or 98? I was still in high school, but I remember thinking that it was just the absolute coolest thing ever. When my home computer could barely play DOOM, this thing was just blowing up over here with beautiful video animations that didn't stutter at all. Things my PC wouldn't be capable of until a few years after that. Not to mention it just looked cool. In an era of beige boxes, you had this striking blue cube.

[0] https://en.wikipedia.org/wiki/SGI_Octane


What kind of low bar is that, "never losing any files"? What files did you lose on other systems?


The contemporary consumer OSes of the time, Mac OS 8 and Windows 95/98, were pretty crash-prone back then. If you weren't religious about hitting Command-S as you worked, it definitely wasn't unusual to lose your work.


OK, more like never losing unsaved work. I was horrified thinking some old-school OSes ate files from the filesystem left and right.


The movie "Hackers" prophesied that RISC was going to "change everything". And it did, but not in these workstations, but rather in smart phones, raspberry pi's and other projects that have made RISC viable again.


As an aside, I kinda find it funny how Apple flip-flops from CISC (68k) to RISC (PPC) to CISC (x86) and back to RISC (ARM). Let's see whether RISC is here to stay now.


x86 chips from the Pentium Pro and K6 on are basically hardware x86->RISC recompilers. (This is a good bit of why Transmeta failed in the end: the last good bits of Moore's Law ensured that the recompiling became cheaper/more efficient in hardware.)


ARM is very CISCy in its instruction set.


So far as I can tell, people still call ARM RISC because it's load/store and x86 isn't (feel free to correct me on this, I'm not a CPU person), and ARM's instruction proliferation gets glossed over.

I do remember writing asm for the ARM2 (26-bit ARM) chip in my Archimedes, and that was definitely actual RISC, but obviously these days not so much.


Most likely; however, load/store alone doesn't make a RISC.


My favorite “CISCy” instruction on ARM64 is FJCVTZS: Floating-point Javascript Convert to Signed fixed-point, rounding toward Zero.
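For anyone wondering what that actually buys you: it's ECMAScript's ToInt32 conversion baked into a single instruction, which JS engines otherwise have to synthesize from several. A rough sketch of the semantics in portable C (not any engine's actual code; the final cast assumes the usual two's-complement wrap):

    #include <math.h>
    #include <stdint.h>

    /* Sketch of ECMAScript ToInt32, the conversion FJCVTZS performs in
     * one instruction: truncate toward zero, reduce modulo 2^32, then
     * reinterpret as a signed 32-bit value.  NaN and infinities map to 0. */
    int32_t js_to_int32(double d)
    {
        if (!isfinite(d)) return 0;        /* NaN, +Inf, -Inf -> 0        */
        d = trunc(d);                      /* round toward zero           */
        double m = fmod(d, 4294967296.0);  /* reduce modulo 2^32          */
        if (m < 0) m += 4294967296.0;
        return (int32_t)(uint32_t)m;       /* wrap into signed 32 bits    */
    }

So an `x | 0` or an Int32Array store in JavaScript can compile down to one FJCVTZS on ARM64 instead of a little branchy sequence like the above.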


This is going to sound weird... but I really loved my 43P. And now I have a flood of nostalgia about it (and AIX).


I started buying all the workstations and Amigas I could find. To be honest, the Amigas are annoying because each and every one needs some kind of upgrade or repair in order to work. And then there's the illogical AmigaOS and Workbench, where purists are the only ones who truly get it.

I prefer Unix workstations instead, and play retro games on my MiSTer.


It's only illogical in retrospect, now that Unix/Linux 'won'. Back then, every platform had its own quirky OS and hardware. Of the bunch, I found the Amiga running AmigaOS among the least quirky and illogical.


What's so illogical about Workbench/AmigaOS? It seems very intuitive to me, even by modern standards.


Sure, the GUI is okay, you're right. But when you're repairing and tinkering, you need to shim in drivers and make stuff work, and Google-fu is not enough to find solutions, in my experience. It's that aspect of it.


There are a handful of sites to go if you need help repairing:

https://amigaworld.net

https://forum.amiga.org/

https://amigans.net

https://eab.abime.net/ (English Amiga Board)

You'll be able to get help there much more easily than via Google.


Thanks a lot for these resources. I will surely look into them when I get any further on the 1200 machines or the Vampire-accelerated 600 or something.



