
Your analysis, except for your first sentence, is good.

But it is not true that SGI failed to understand there was a point where desktop PCs would be good enough to replace dedicated workstations. They had built a $3,500 PC graphics card (IrisVision) early on, did the Nintendo deal before PC 3D really became a thing, partnered in 1991 with Compaq on an investment and a $50-million joint effort to develop a workstation (https://www.encyclopedia.com/books/politics-and-business-mag...), and were themselves taking advantage of the ability to squeeze more and more 3D into a single chip; the tech trends were obvious.

SGI was a client of a tech research firm I joined at the time, and it was heartbreaking to see them lose and very hard to figure out what they could/should do. It wasn't my explicit role to solve their problem, but I spent a lot of time thinking about it.

You do capture some of the dynamics well but you don't capture the heart of it. The heart of it is point #1. The rest below are also inhibitors.

1) SGI had revenues of $2 billion/year and the 3D market revenue was, say, $50 million/year. (OK, maybe a bit more than that; Matrox, the 2D king, had what, $125 million in revenue?) How do you trade the former for the latter? And on top of that, trade high margins for low margins? When 95% of your stakeholders don't care about the new (PC games) market?

2) Engineering pride/sophistication/control. The company started out focused on graphics but, being in Silicon Valley, had grown/acquired huge engineering strengths in areas besides graphics: CPU design (MIPS), NUMA cache-coherent SMP hardware plus SMP UNIX design, etc., and that's before you get to the Cray acquisition and its pieces. They were the "Apple" of graphics workstation vendors, but there was no "iPhone" integrated vertical play for them downmarket (except maybe consoles, which they half-tried with the N64; even Nvidia, while taking that business, has minimized it due to its low margins and low opportunity for upside). It was hard technically to give up/deprioritize all those layers of engineering sophistication in favor of competing on graphics performance and price-performance when the PCI(/AGP) bus was fixed, the software API layer was fixed, and the CPU layer was fixed, and you had to compete with 80+ fledgling companies to win on both performance and price/performance in low-margin PC 3D games graphics, which is just a much lower value-added play for engineering.

3) Compensation. Employees who knew how to make a 3D graphics chip had a bigger upside outside the company than inside it. They left. The 3dfx founders left. The ArtX guys left. Later, others left for Nvidia.

4) Slow/different engineering cycle times/culture. SGI cycle times for new products were 3-4 years. Sure, they'd do a speed bump at the halfway point. Some volume 2D chip companies had tweaks to their designs coming out every 2 weeks. PC graphics vendors needed a new product every 6-12 months. Nvidia's most critical life-saving measure in their product history was to cut their cycle time radically by using simulation, because there was no other option, and it left them with two-thirds of their target blend modes not working. https://www.acquired.fm/episodes/nvidia-the-gpu-company-1993...

5) Pride around owning/controlling the 3D software layer. Having developed OpenGL, they didn't/wouldn't figure out how to let Microsoft control/change/partner with it at the API level for gaming markets. Yes, they licensed OpenGL, and eventually they caved and cross-licensed patents, but Microsoft was never going to let them own/control the API, nor was Microsoft ever going to fully support an open API. And there was no love lost between Silicon Valley engineers and Microsoft in the late 90s. Hard to partner.

6) Executive leadership (I don't know any of this firsthand). The founder who cared about graphics, Jim Clark, by some accounts saw the workstation/PC threat from a ways away. But when the critical timeframe came to deal with it (92-94), he also saw the web and how big that wave was; PC 3D graphics was a small wave in comparison, so he switched focus and left. Left behind was the CEO, an ex-HP executive who had for a decade grown SGI from $5M in revenue to billions by remaking SGI in the image of HP rather than focusing on graphics, graphics, graphics, and who was not equipped to bet the company on rebirthing/growing it out of a nascent PC 3D games market.

As an industry analyst serving SGI (and its competitors) as clients and seeing the forces facing them in the mid/late-90s, what was known at the time was:

- 3D games were going to fuel consumer usage, on both consoles and PCs, thanks to one-chip texture mapping (plus, over time, geometry processing)

- Wintel controlled the PC space but did allow third-party graphics-card manufacturers and was fine with that

- Linux was good enough, but for a transition period was not nearly as good as conventional UNIX

- There was huge room for improvement in 3D graphics on the PC at first, but at some point you would get good-enough bang for your buck and the market for your graphics processors would stagnate. Screen resolutions grow more slowly than Moore's law, and once you can do enough shading for every pixel on the screen 4x over, and enough polygon matrix operations for one polygon on every pixel of the screen, how much more compute do you really need? (A rough sketch of that arithmetic follows.)
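To make that "good enough" ceiling concrete, here's a back-of-envelope calculation in plain C. The resolution, refresh rate, and overdraw figures are illustrative assumptions of mine, not numbers from the thread:

    /* Back-of-envelope sketch: the fill rate needed to shade every pixel
     * of a typical late-90s screen 4x over at 60 fps. All constants are
     * illustrative assumptions. */
    #include <stdio.h>

    int main(void) {
        const double width = 1024, height = 768;  /* a common late-90s resolution */
        const double fps = 60.0;                  /* target refresh rate */
        const double overdraw = 4.0;              /* "every pixel 4x over" */

        double pixels_per_sec = width * height * fps * overdraw;
        printf("fill rate needed: ~%.0f Mpixels/s\n", pixels_per_sec / 1e6);
        /* ~189 Mpixels/s -- a bar single chips were plausibly clearing by the
         * late 90s, while Moore's law kept doubling transistor budgets.
         * The demand side (resolution) grows far more slowly. */
        return 0;
    }

Once a chip clears that bar, extra fill rate stops buying visible quality, which is the stagnation point the last item above warns about.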

But in 92-95 it was hard to advocate for SGI downsizing/betting-the-company based on this alone.

In mid-1993, Windows NT 3.1 marked "good enough" Windows to compete with UNIX (process isolation), and Windows NT 3.5 in late 1994 solidified that. In Nov 1995, with Pentium Pro SPECint figures coming out, it was clear RISC was going to die, but that was cognitively hard to recognize.

SGI clearly saw the problem and tried to diversify out of it, buying Alias and Wavefront (1995) and going upmarket (buying Cray, whom they were cannibalizing), but neither of those "saved it".

I remember at some point, while charting out with a colleague how the industry consolidation might happen, thinking that they really should join up with Apple (both rode the "halo" effect of high-end sexy products to sell bread-and-butter products effectively, and both were very vertical-integration oriented); I don't know if that was ever considered. Apple was 100x weaker then, and I don't recall if the actual market-cap economics would have worked.

What wasn't fully recognized (at least by me or the others I read voraciously at the time), but is clearer in hindsight, was this: while SMP parallelism was part of the wave of the future (and required work on both the CPU and OS layers of the stack), and GPUs contained a mixture of ASIC-based parallelism for both shading and vertex operations, one could construct a general-purpose coprocessing engine with real market uses beyond graphics (first in HPC, then in AI) and a longer-term value proposition that would outlast a gamer-centric niche market: a general compute engine.

The term GPU that Nvidia used early on in marketing now implies, and delivers on, that vision, but it didn't imply it (to my eyes) at the time; SGI and others had used the term GPU informally even before Nvidia marketing did, to refer to chips with geometry/matrix computations on them, and Nvidia was entering that game (and, of course, talking up their book). But the true vision of a general-purpose parallel-compute coprocessor, with an API to go with it, was really fleshed out in Stanford PhD work by Ian Buck a decade later, in 2004; he then graduated, took it to Nvidia, and it became CUDA. https://graphics.stanford.edu/papers/brookgpu/brookgpu.pdf At least as far as I can tell.
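For readers who haven't seen what that Brook-to-CUDA vision amounts to in practice, here is a minimal CUDA C sketch of my own (not anything from Buck's paper): a SAXPY kernel, with no triangles or pixels anywhere.

    // Minimal illustrative CUDA C sketch: the same SIMD-style hardware that
    // shades pixels, pointed at an arbitrary array computation.
    // Computes y = a*x + y, one element per thread.
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void saxpy(int n, float a, const float *x, float *y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1 << 20;
        float *x, *y;
        cudaMallocManaged(&x, n * sizeof(float));  // unified memory keeps the sketch short
        cudaMallocManaged(&y, n * sizeof(float));
        for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

        saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, x, y);  // ~4096 blocks of 256 threads
        cudaDeviceSynchronize();
        printf("y[0] = %f (expect 5.0)\n", y[0]);

        cudaFree(x); cudaFree(y);
        return 0;
    }

Nothing in that program is graphics-specific, and that generality is exactly what the earlier, informal use of "GPU" did not yet promise.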

It was always possible as a graphics chip company in the 1990s to see the next 1-2 generations of possible technical improvements, but it was very hard to see a multi-billion-dollar business. Tens of millions? Absolutely. Hundreds of millions? Sure. But billions? And this goes back to my point #1.

There was a very real possibility they would enter the 3D PC gaming fray, not be able to compete, and lose, and then all that Silicon Valley graphics marketing halo they benefited from so much would have faded. But it faded anyway.

I suppose they could have licensed their texture-mapping patents to any startup, taking 5-10% equity in return as capital, and then tried to buy them out as they grew. They tried fighting over that issue with Nvidia later. But they would have hemorrhaged engineers under that approach, and I'm less sure it would have worked.

In my view, they should have just bitten the bullet, rolled the dice, and played the PC 3D graphics gaming game, staying true to their name: "Silicon" "Graphics". Not "Silicon Software" (Alias/Wavefront) or even "Silicon Parallelism" (Cray). It can be true that if you stay in a niche (3D graphics for the masses) you end up in a ditch. But they lacked the courage of their potential and went the wrong way. In hindsight, someone probably should have kicked out McCracken in 1992, not 1997, and they could have gotten a more visionary leader less tied to the HP-executive way of looking at problems/opportunities. But I don't know how they could have transitioned to a leader with better vision, or where they could have found one.

I'd be interested to know if there was ever an HBS case study on this based on internal SGI documents, or if others have pointers to internal SGI discussions of this dilemma. It still bothers me 30 years later, as you can see from the length of this post. Too bad I posted this on HN a day late.




Your analysis is amazing. Thank you.


Thank you! It encourages me that someone got to see and appreciate it.

What is hard to convey to people outside the 3D hardware space is that the chief problem, once you have the 3D pipeline down, is really a market-development problem.

How do you sell an ever-increasing amount of coprocessor compute?

Because the moment you hit the inevitable S-curve flattening of your 3D value proposition, your coprocessor gets integrated into the (Intel) CPU die, just like the Intel 387SX/DX floating-point unit or earlier generations of I/O coprocessors. Hence the frantic strategic push, always, into raytracing, HPC, AI, etc.

In hindsight it looks visionary, but the paranoia is at least as much a driving factor.

It’s now, for now, incredibly lucrative, and the mastery of hardware parallelism may last as a moat for Nvidia, but I can sympathize with SGI not wanting to squeeze themselves back into a coprocessor-only business model. It’s a scary prospect; we can see only with hindsight that it could have worked. Both business and technical leadership had had such huge success diversifying their graphics hardware expertise into full-system skills that they couldn’t refocus. They would have had to die in order to live again. Or, put slightly less dramatically: to survive, they would have had to forsake/kill off a huge part of what they had become.




