Mixed feelings. On the one hand, Itanium (as a platform) was batshit insane, impossible to write good compilers for and the pinnacle of Intel overengineering. Good riddance.
On the other hand, Itanium was ugly, but had its charm and uniqueness. Itanium is what EFI was first developed for. Itanium is where the C++ ABI got started.
Itanium being discontinued further reduces mainstream CPUs to the most boring, safe designs possible: IA32/amd64. ARM was kinda quirky (conditional execution, barrel shifter), but those were slowly neutered (by introducing Thumb), and then totally thrown out of the window with aarch64. SPARC is dead. PA-RISC is dead. RISC-V is new and promising, but is also the most pragmatic and safe design of an ISA ever. The Mill CPU is interesting, but is underfunded and I don't think it will ever be taped out.
As with OS research (think: Solaris, Plan9/Inferno), researchy and experimental CPU ISAs seem to be a thing of the past now.
Real mode, Mod R/M and SIB byte encoding weirdness, REX prefixes, 80-bit floats, parity flag, hard-wired registers for shifts/multiplies/divisions, builtin CRC32 over the wrong polynomial, Pascal calling convention support, binary-coded decimal, high halves of 16-bit registers, MMX overlap with x87 floating point, etc. etc.
This is completely off topic, but I've seen things like the "^W^W" a few times before, and I don't know what it means.
Is this a weird encoding mismatch thing? Is it something from an editor/system that people instinctively type? Is it from some other forum which has a strange markup syntax for something?
Emacs/readline bindings for Delete Word. Open up a bash shell, type in `foo bar baz`, then press ctrl-W twice.
'^W' is what would appear instead if you weren't in a readline/emacs-style editor but on a dumb line terminal. Thus, leaving '^W' behind makes it look like you didn't realize that what you just corrected is still visible.
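To make the notation concrete: Ctrl-W sends byte 0x17, and terminals that echo control characters literally render them in caret notation, a caret plus the letter 64 positions up. A minimal C sketch of just that convention (nothing here is specific to readline or any particular terminal):

```c
#include <stdio.h>

int main(void) {
    /* The Ctrl-W keystroke sends the control byte 'W' & 0x1F == 0x17. */
    unsigned char ctrl_w = 'W' & 0x1F;

    /* Caret notation: control bytes 0x00-0x1F are displayed as '^' plus
     * the character 64 positions up, so 0x17 is rendered as "^W". */
    printf("byte 0x%02X is displayed as ^%c\n", (unsigned)ctrl_w, ctrl_w + 64);
    return 0;
}
```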
It's from the DEC terminal emulation, VT100. I think specifically when you connected to an ANSI-type system with DEC mode, or something like that; the memories are fuzzy.
The keystroke certainly does, but I was talking about people inserting ^h and ^w in the middle of conversations, which is a bit harder to nail down, though the practice is older than the parent thought.
Ah okay makes sense! It's amazing the gaps in knowledge I still have with a lot of things.
I've used Linux daily for like 15 years now, and I know about control characters, but since I never really used emacs (or readline beyond the copy+pasted command here or there) I just completely missed the meaning.
Not Emacs (C-w means kill-region there, which deletes the text between the cursor and the mark and saves it in a holding area, fairly close to what Windows does when you hit ^X, except the holding area is fancier).
This readline behavior follows the Unix tty driver's word-erase character. I'm not sure off the top of my head which system introduced it; I don't think it was on Xenix, but I'm pretty sure some 4.x BSD had it.
I thought it was fairly clear 'q3k was referring to the 64 bit ISA introduced in amd64, not all the legacy stuff either kept around in some form for compatibility (x87, encodings) or stuff that isn't part of long mode (BCD, segmentation, real mode).
There's a ton of innovation happening right now for machine-learning optimized ISAs. And RISC-V has excellent support for extensions.
The funniest part of Itanium for me was that it was supposedly helpful to compilers. Yet the compiler people I knew did not like to use the "helpful" parts of the ISA. The loop unrolling stuff got in the way of software pipelining, for example.
Mill is an example of a processor that's actually helpful to compilers, because its design is being led by an actual compiler person.
But it's not just compiler writing, right? With long instruction words, you also have to write all the instruction-scheduling algorithms as well. I thought that was just something compiler authors didn't want to deal with.
I think that's close to a truism of system design: the more things you simultaneously require a developer to be an expert at to use your system, the fewer developers who will choose to use your system.
Somewhere around 3 disciplines? Nobody's using your system.
Such was the paradoxical brilliance of SQL and Excel. By limiting knowledge scope, they greatly enhanced utilization (and thereby utility).
The high performance compilers I'm familiar with do a lot of worrying about instruction scheduling. Just because you've got an out-of-order processor doesn't mean it's effective to not bother with scheduling.
I don't know who they asked, but it's safe to say that compiler gurus are a diverse lot. The particular concern I mentioned (software pipelining) might not be something that the Intel compilers do.
Intel has its own compiler, surely they could ask their compiler gurus to help with ISA design.
Well, that kinda was the concept - but they greatly underestimated the difficulty of producing a "sufficiently smart compiler". If VLIW had worked, it would have greatly simplified processor design: no (or at least much less) need to worry about cleverly handling out-of-order execution, for example. That in turn would have meant e.g. bigger L1 caches, because you would have the die space to play with, or more execution units, or whatever.
It didn't work out and Intel did make some stupid decisions along the way (also underestimating the importance of backwards compatibility with existing code - not only binaries, but you also needed to be able to compile existing source well) but it was worth a punt, and maybe will be again someday - maybe a DL-based compiler could produce good VLIW code? Maybe a new (or old) language will be more amenable to VLIW compilation than C?
It is also problematic to need to recompile code for every different micro architecture. Maybe it could be done if the OS shipped with some sort of JIT/AOT compiler and binaries were all something like LLVM IR.
A workflow I'd like to see: you write your code and ensure that it is functionally correct, all the tests pass. Then you go home and overnight, some ML/DL/NN whatever tool works on your code to find the best way to compile it (fastest binary that passes all tests). Repeat this every day for the duration of the project. At the end your artifacts are the source code, the shippable binary, and a model perfectly trained to produce the latter from the former. It's a shame that Itanic was too soon to take advantage of ML going mainstream.
If you don't have any idea how to solve your hard problem, just claim that magic exists and call this magic Machine Learning/Deep Learning/Neural Network to make it sound more scientific.
OS research is still active, though. I see many interesting microkernels being developed left, right, and center. RedoxOS is trying to build a Unix-like microkernel. Nintendo built Horizon, a capability-based microkernel for the Nintendo Switch. Many of Plan9's features have been incorporated into Linux. We have seL4, a formally verified kernel that is still in active development (they are currently working hard on getting formally verified SMP support).
I don't think it's fair to say Kernel research is a thing of the past.
The argument exists (I'm not saying it's true) that Google hoovers up the smartest people to stop them making another Google. People used to make the same argument about Microsoft.
I don't remember the source, so I might be making this up, but I kind of remember Bill Gates stating that they would rather hire someone and keep them busy than let them work for the competition.
I have a similar vague memory. That being said, that also works as a pretty good post hoc justification for having some guys randomly doing research and not really delivering what you hoped they'd deliver ("Hey, at least they didn't go off and make another Microsoft!").
It limits Google's ability to force bundling of their revenue-generating and platform-positioning components.
Giving Android away to OEMs is in large part a vector to get the Play store, Chrome, Maps, and their monetizable services and data scarfing on the device.
But some OEMs don't bundle everything. Think of the de-Googled Android variant run by the Kindle Fire series, or the Chinese phones that include local stores and service providers that work there instead. Google can try to bludgeon them into playing ball with the trademark and various over-the-top agreements, but they can't stop them from saying "We'll take AOSP and make our own version and call it FireOS."
With full licensing control up and down the stack, they can close that door much tighter.
MS Research also does quite a few cool things; some of them ended up in the Windows USB stack, WSL, SQL Server deployment on Linux, .NET Native, async/await, and driver validation with Z3, for example.
At least it influenced async/await, TPL, .NET Native, Span<>, local refs, ref structs, blittable structs, and a few other C# 7.x and 8.0 features.
Reading between the lines of some Joe Duffy posts, I think Midori was a victim of WinDev's rise to power after that whole Longhorn/Vista history, only to end with UWP slowly getting back all the .NET features that they decided to throw away in the initial version of WinRT.
Apparently post .NET Core 3.0 there will be some changes happening to .NET Native. It was briefly mentioned at Connect 2018, but they didn't want to say precisely what.
I think of the iAPX 432 as "over-designed and under-engineered".
Intel was writing architectural checks (bit-aligned instructions, capability-based permissions, "everything is an object") they couldn't cash (very, very bad performance).
I'm a software guy, so naturally I took a course in VLSI design in college. It was fun. This was maybe a year before the iAPX 432 was canceled; our teacher had some sample chips embedded in clear plastic, gewgaws handed out at some conference he'd been to. He passed them around one day in class, and they were huge and there were like four of them in the processor chipset. And I remember thinking "how the heck is that thing going to be fast with all that off-chip traffic?" And of course it wasn't.
Elliot Organick, of Multics fame, wrote a book about the 432 architecture. I just found it on my bookshelf and leafed through it again; I'd forgotten that all of his code examples were in Ada (another "big bet" at the time).
You're right, it is kind of disappointing. I think it would cost so much to come up with something novel and then optimise it to the point that conventional CPUs are at that it hardly seems worth the expenditure. The next major frontier in CPU design would appear to be in materials and manufacturing rather than architecture - although no doubt whatever the post-silicon world looks like will influence future architecture substantially.
Mind you, if nothing beyond the smallest silicon processes turns out to be practical then we might see attention turning back to re-thinking CPU architecture, but until then I think a lot of money will be spent looking at materials and manufacturing over interesting ISAs.
Although the possibilities are arguably greater for new designs to get traction today.
There are a lot of reasons. Open source, cloud providers, etc. But a big one is that you can't just depend on CMOS process shrinks to make processors faster. Therefore, your new design isn't necessarily going to be eclipsed by whatever new x86 CPU comes out in 12 months.
That's true, but the difficulty is it's not just going to take 12 months to do something like this. A radically new architecture would take years to become competitive with today's processors for all of the use cases that would make it marketable. Meanwhile the x86 guys aren't just going to sit still, there's almost certainly more IPC to squeeze and they can add fixed function hardware (as they've already done for video encoding/decoding) or FPGA area in new product ranges and keep building new markets that way.
An instruction to insert into a queue on another core would be a great thing. The T3E had it, InfiniPath / OmniPath has it between boxes, and I've heard of a research project that did it on a modern cpu.
Actually, it's been a very exciting time for "CPUs" if you include "GPUs". The programming I can do in CUDA on an NVIDIA GPU is incredible. (Our company is using it for large-scale audio processing.) You can get a true Supercomputer on your desk now for $1,000 -- something that would have been the fastest computer in the world 15 years ago.
I doubt OpenPOWER is going anywhere anytime soon, IBM has their Z chips, and OpenSPARC is going to remain open (whether anyone decides to adopt it or not).
MIPS today isn't what it used to be. It mostly occupies the "we're too cheap to use ARM" segment of the market these days, which is an ignominious fate for what used to be a great architecture. But it's well known and it still has its wonderful instruction set, so maybe someone can make it shine again once it goes fully open.
It also seems to be popular for countries playing catch-up on that whole "we can make our own computers from the ground up" metric - both China and Russia make their own MIPS CPUs, mostly for military use at the moment.
> It also seems to be popular for countries playing catch-up on that whole "we can make our own computers from the ground up" metric - both China and Russia make their own MIPS CPUs, mostly for military use at the moment.
I am no expert on this, but my personal impression is that projects of this kind slowly try to migrate towards RISC-V.
The few times I've dealt with the MIPS architecture, it has seemed like software isn't really that well optimised for it. It works. Just... not as fast as it could, e.g. zlib.
There are some forked versions of the main zlib library with MIPS optimisations in them that really fly, but nothing that has ever hit upstream, and I'm not crazy enough to want to rely on unpatched/unsupported code like that.
People have different opinions on EFI, but in the time before the modern C++ ABI, every compiler (and often different versions of the same compiler) generated code with different ABI expectations, which made libraries and software distribution absolute hell.
Itanium was created precisely because HP thought it couldn't keep PA-RISC competitive. So they threw in with Intel to create the next great server architecture.
As an aside, Merced took forever for Intel to tape out. Part of the agreement between HP and Intel was that they needed to ship chips for HP. As a result, Intel had to fab some of the last PA-RISC chips due to the schedule slip and contractual obligations.
I worked on some Itanium system software once upon a time.
Timing was at least part of the problem with Itanium. Arguably, had Merced (or, ideally, McKinley) shipped while the great Internet buildout was going on, Itanium might have made it over birthing pains before the dot-com crash and then been in a position to ward off Opteron and AMD64 when those arrived.
I could argue it was doomed from the beginning as well for various technical reasons given how the industry evolved. But there's at least an argument that better time to market might have been enough to get established as a dominant 64-bit architecture.
Being a couple years late definitely was bad but they also completely blew the performance, especially for ia32 compatibility. Shipping on time would have made the performance gap less bad but I rarely saw it being competitive with what would have been the competition at that time even before you factored cost in. Unless your workload was just Intel’s hand-tuned assembly it was usually cheaper to buy 2 x86 boxes and pocket the difference.
You're absolutely right, hence my hesitancy. IA-32 emulation performance was one particular weak spot. A fellow analyst at the time said something along the lines of "Itanium's IA-32 performance was not only not in the ballpark; it was not in the parking lot outside the ballpark."
Itanium was also, in many ways, designed for a world where ILP, rather than TLP, ruled. That distinction wouldn't become terribly important for a few more years but it would have eventually. Intel made a number of decisions in this vein. See also NetBurst Architecture. (An Intel exec told me, at some point after it had become obvious that multi-core was the future, that Microsoft had really pressured Intel to go down the few/fast cores route.)
I can see the connection. Microsoft was even more vulnerable there than Intel.
If we switch from the "Every year the clock rate goes up 30%" model to "50% more cores every year", you're going to trigger the "recompile the universe" event. This is an existential-level threat for the Microsoft of 15 years ago.
Microsoft's biggest selling point for years was the backwards compatibility. Keep running that old software forever, and the magic of Intel running the clock up means it still gets faster every year.
If that train stops, you may well think "if I have to rebuild anyway to support many-cores, why not do so on Linux?"
Intel initially didn't plan on having fast IA-32 performance. The IA-32 part of the chip was called the "IVE" which was short for the "Intel Value Engine." The thought was that it was enough for the chip to boot in IA-32 mode to be compatible and it was expected that you would set up system state and jump to 64-bit mode as fast as possible. So really it was just meant for compatibility. I imagined it was just a dusty corner of the chip using some old 486 core to take up the least amount of space.
>"Itanium was created precisely because HP thought it couldn't keep PA-RISC competitive."
I'd be curious to hear why they had that outlook. Wasn't HP still doing gangbusters business at that point? Or was it something else besides having the financial resources?
They thought that they couldn't compete in the future because they didn't have the fabs to do it. They thought that by throwing in with Intel they could make headway into the high end server space.
Additionally, everyone was using Sun machines to develop on (this is before the stupid-long delays with UltraSPARC). They also thought that the future was code on Windows workstations connected to HP big server iron. That was Rick Belluzzo's contribution to killing HP-UX workstations. He left after a while to go try to sell the same sauce to SGI. Finally he wound up at MSFT.
Fast-forward and now Linux is dominant and who even uses HP-UX or the other proprietary UNIXes?
I wonder what the future of HP-UX is? I tried to search but couldn't find anything other than plans to run HP-UX as a Linux container, which sounds like marketing, since HP-UX isn't something you run as a Linux container.
Still, I wish HP hadn't given it up and maybe opened it up like SPARC. PA-RISC was posting some impressive numbers into the aughts when they were just multi-coring "old" designs.
Fun story: up until a couple years ago, my employer ran the entire company (manufacturing) with software running on a PA-RISC mainframe. I wish I could have taken that machine home when they took it offline.
It didn't get hung up on RISC purity and introduced a series of instructions that kept code sizes down: single instruction Indexed Loads (normal and scaled), single instruction Address updates, bitfield operations, FMADD, subword operations (making decimal math ops faster), a bunch of branch path instructions and a handful of others.
It had 32 64-bit FP registers that could also be used as 64 32-bit registers or 16 128-bit registers. For the time, this was novel.
Yes, the standard firmware interface. Those working on the Itanium platform (Intel and presumably HP) wanted to have a boot solution that didn't have the technical shortcomings of existing BIOS designs. That drove an initiative to create a formal spec and an implementation. I believe Itanium was the first platform with EFI, but someone can correct me if I'm wrong.
> Itanium being discontinued further reduces mainstream CPUs to the most boring, safe designs possible
IMHO, as someone who cares about security, that's not necessarily a bad thing. (I do lament the passing of tagged architectures, but that ship sailed a long, long time ago.)
> And EFI is a good thing, especially considering the monster dumpster fire that is UEFI?
Well it may be a dumpster fire, but it is the finest, most consistent flame we've gotten from the firmware dumpster; which is why it is fast becoming the only standard firmware interface in actual use. UEFI is fast becoming the standard for ARMv8, it's creeping back to ARMv7, and it's becoming popular with RISC-V.
You will either learn to love it, or you will suffer forever. Mark my words, UEFI will still be booting your machine in 2040, when the President is literally a deep fake controlled by a troika of Jack Dorsey, Jeff Bezos, and Priscilla Chan.
It was FORTH, which most developers really hated. Sometimes the end users even got exposed to bits of FORTH poking out, for example in the syntax for booting.
We really just wanted a nice clean 64-bit BIOS, with all the datatypes 64-bit. The BIOS is pretty decent if you strip out redundant interfaces, segmentation, and never-used functionality. Adding extra functionality to firmware is madness. Firmware needs to initialize key hardware like RAM, load a boot loader, and get out of the way. Firmware doesn't need to be practically an OS.
The old 16-bit BIOS was actually working OK. Sure, it was nasty to program for, but almost nobody had to deal with that.
> The MIPS chip, the DEC Alpha (perhaps the fastest chip of its era), and anything else in the pipeline were all cancelled or deemphasized. Why? Because Itanium was the future for all computing. Why bother wasting money on good ideas that didn't include it?
> The failure of this chip to do anything more than exist as a niche processor sealed the fate of Intel—and perhaps the entire industry, since from 1997 to 2001 everyone waited for the messiah of chips to take us all to the next level.
> It did that all right. It took us to the next level. But we didn't know that the next level was below us, not above. The next level was the basement, in fact. Hopefully Intel won't come up with any more bright ideas like the Itanium. We can't afford to excavate another level down.
I'm not sure what point Dvorak is even making in that article. Yeah, a lot of ultimately wasted effort went into Itanium. But we ended up with x86-64 plus a somewhat diminished set of CPUs from some of the big Unix vendors. It's an interesting question but I'm ultimately not sure that the computer industry would look all that different today had Intel just done 64-bit extensions to x86 or something similarly evolutionary.
AMD might well not exist. But, except for HP, the big Unix vendors mostly hedged their bets anyway. The large Japanese companies who also backed Itanium never were going to make the investments to break out beyond Japan.
Blaming Itanium for killing all of the bespoke RISC processors from the 90s is a stretch IMHO. Low cost high power x86 architecture chips were a much bigger factor. Nobody had the stomach to pay $70k (plus $20k/year for the mandatory support contract) for a "workstation" that was slower than a $4k PC, especially once Linux got good.
Intel's good luck and heavy investment in shrinking node sizes also made it impossible for niche companies to keep up. They were doomed to be slow power hogs in their attempt to keep up with commodity x86 processors.
Intel was independently developing 64-bit extensions under the code name Yamhill. I know there were some legal settlements around that time, so they may have cross-licensed technology. AMD came out first, but Intel had much the same thing in its back pocket.
What my last statement meant was that we ended up with an industry dominated by 64-bit x86 anyway, in spite of all the effort that went into an alternative 64-bit architecture. So we'd probably be in a similar place had Intel just decided Itanium was a bad idea from the start.
What ...? Yamhill was an answer to AMD64. The first rumors appeared in 2002, whereas AMD announced AMD64 in 1999, released the full specs in 2000, and actually shipped the first Opteron CPU in April 2003; Intel shipped Nocona in June 2004. This lag persisted for a while -- LAHF/SAHF in 64-bit mode was shipped in March 2005 by AMD but only in December 2005 by Intel.
Well sure. Intel would have much preferred Itanium to succeed. Absent AMD, it's possible Itanium would have muddled through in the end. (Or something completely different would have played out.)
It's safe to say that Intel had some sort of contingency plan going back quite a while. Some analysts even thought they saw features in the Pentium that suggested 64-bit readiness.
But it wasn’t until Opteron’s success and its adoption by esp. HP and Dell that Intel felt they needed to make their 64 bit extensions plan public.
You are correct. What people don't seem to appreciate are the internal conflicts within large organizations. There were in fact massive internal conflicts at Intel between the Itanic and legacy x86 camps. Companies that large don't "think with a single brain".
Random aside: Itanic was HP's brainchild that was adopted and refined at Intel (and far from all of Intel was excited about that). Having experienced a VLIW that _didn't_ suck (the internal engine of Transmeta's Astro 2/Efficeon), I'm sad that EPIC/Itanic gives VLIW a bad name. However, the future belongs to RISC-V.
They shouldn't have killed an excellent processor (the Alpha) which already had tons of software and history and was already being used in the fastest supercomputers in the world for a product that was never (and still isn't) proven. The Itanic was never best in its class at anything.
> They shouldn't have killed an excellent processor (the Alpha)
Parallel Alpha systems are a pain to deal with, because they lack a form of expected synchronization that every other processor has: automatic data dependency barriers. On every other platform, if you initialize or otherwise write to a value, then make a pointer point to that value, you can expect that anyone reading through that pointer gets the initialized/new value. But on Alpha, another CPU can get the new value of the pointer and then the uninitialized/old value of what it points to.
Alpha is the sole reason why the Linux kernel "smp_read_barrier_depends" barrier exists and code has to use it; on every other platform, that barrier is a no-op.
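As a rough illustration of the pattern being described, here's a userspace sketch using C11 atomics rather than the kernel's smp_wmb()/smp_read_barrier_depends() macros (the struct and function names are made up for illustration):

```c
#include <stdatomic.h>
#include <stddef.h>

struct msg { int payload; };

/* Pointer shared between two CPUs. */
static _Atomic(struct msg *) shared = NULL;

/* Writer: initialize the object, then publish the pointer.  The release
 * store keeps the payload write ordered before the pointer store. */
void publish(struct msg *m) {
    m->payload = 42;
    atomic_store_explicit(&shared, m, memory_order_release);
}

/* Reader: on most CPUs the data dependency (load the pointer, then
 * dereference it) is enough to observe the initialized payload.  Alpha
 * gives no such guarantee, which is why the kernel inserts
 * smp_read_barrier_depends() here -- expressed in C11 as a consume load. */
int consume(void) {
    struct msg *p = atomic_load_explicit(&shared, memory_order_consume);
    return p ? p->payload : -1;
}
```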
Is there any evidence that not handling read-read dependencies in hw is crucial to alpha performance?
I'd guess that back when the Alpha memory model was designed, multiprocessors were quite rare, the designers didn't have as clear a picture of the tradeoffs as we do today (not saying today's understanding is perfect, just that it's better than what we had 30 years ago), and they chose the weakest possible model they could come up with in order to not constrain future designers.
Agreed, the Itanium investment should have gone to the Alpha.
Itanium was really good at raw performance as long as you could write hand tuned math kernels or kept working with the compiler team to optimize code for your kernel. Took me a while, but I got 97% efficiency with single core DGEMM.
Hand-written code for Itanium was always smoking fast. One-clock microkernel message passes and other insanity. But nobody ever figured out how to write a compiler that could generate code like that for that machine.
Most of it depended on the problem: for a subset of problems it worked well but once you had branchy code and less than very consistent memory access it was dismal. I supported a computational science group during that period and Itanium (and Cell) kept being tested but never made sense since you’d be looking at person-years of work hoping you could beat the current systems (or even previous generation) instead of spending that time on improved application functionality.
> for a subset of problems it worked well but once you had branchy code and less than very consistent memory access it was dismal.
So, a lot like coding for the GPU. Makes sense, given that the low-level architecture is so similar... And it might explain why VLIW itself is not so widely used anymore. AIUI, even the Mill proposed architecture (which boils down to VLIW + lots of tricks to cheaply improve performance on typical workloads) has a hardware-dependent, low-level "compilation" step that's quite reminiscent of what a GPU driver has to do.
The GPU comparison is common and I think it hits the main problem: Intel/HP needed to solve two hard problems to succeed. GPU computing had only one because gamers provided a reliable market for the chips in the meantime.
I’m also curious how this could have gone a generation later: Itanium performance was critically dependent on compilers in an era where they were expensive and every vendor made their own, and the open source movement was just taking off. It seems like things could have gone much better if that’d been, say, LLVM backend & tools and higher level libraries where someone could get updates without licensing costs and wouldn’t be in the common 90s situation of needing to choose between the faster compiler and the more correct one.
There were people trying, but there are some real fundamental issues with the approach for general purpose computing. It's extremely hard for a compiler to know if some data is in cache, in memory, or way out in swap. Without this information it's very hard to know how long any memory fetch is going to take. If you're trying to run a lot of computation in parallel that has some interdependencies then this information is paramount.
It's kind of like trying to use a GPU for general purpose computation. Itanium should have been a coprocessor.
> Took me a while, but I got 97% efficiency with single core DGEMM.
In my experience, it's pretty widely accepted that VLIW (and EPIC) can achieve high performance and efficiency on highly regular tasks such as GEMM and FFT. That's why VLIW has been and continues to be popular for DSPs. The struggle for VLIW is general purpose code that doesn't necessarily have that same kind of regularity.
I have to admit that as a DEC alpha user starting in 1993 (using OSF/1), and also one of the main FreeBSD/alpha port authors, the itanium being phased out fills me with joy.
Alpha was excellent. Definitely a missed opportunity. The 164LX here running Tru64 is a great system, proving the chip could really work in all kinds of settings.
Around the time that the Itanium project was announced (and had presumably been being worked on behind the scenes for quite a while) was when Compaq bought DEC. HP wouldn't buy Compaq until about 5 years later. So Itanium was already well underway by the time switching to Alpha would have been a remote possibility.
The people who built the Alpha went on to work at AMD and built the Athlon. Most of the innovations in that platform ended up filtering into the Intel Core 2 and Core i7 architecture as well.
The original RISC/Unix players were most of my career. Pyramid MIPS/OSX, then Sparc/SunOS, AIX, PA-RISC/HPUX, Dec Ultrix followed by Alpha, etc. I remember trying to tell my co-workers that Linux would wipe it all out, shortly after AMD rolled out Opteron. Most of them chuckled.
The first machine I ever had root on was a PA-RISC K250 running HP/UX 10.20. Good times. I particularly miss PA-RISC: it was a pretty clean instruction set with good performance, and HP never got their money's worth out of Itanic. There's a C8000 in my collection with dual PA-8900s in it.
PA-RISC's lack of any meaningful atomics except TAS (one with weird alignment reqs) forced me to add an atomics emulation layer to postgres. So I somewhat hate it now ;)
I liked pretty much anything in the early days because it felt something like a small exclusive club, and a global discussion network like Usenet was just so unique. That experience of helping someone across the globe while sitting at home was just really cool.
The sweet spot technically for me was probably later Solaris with things like zones and dtrace that nobody else had. The popularity of Solaris also made it much easier to find help from other people, Makefiles that just worked, SSL accelerator boards, etc. Ask anyone that worked with zones, for example, and we find all the buzz about docker and linux containers quite funny.
I probably hated AIX the most. It was tantalizingly close to other Unix implementations, but with enough differences that things would sort-of work, but not really. To be fair, there was a "right" way to do things that worked, but I was naturally trying to use knowledge I already had. It was also the platform most likely to bomb out when trying to compile open source software. Lots of tweaking makefiles, environment variables, compiler flags, etc, to get things to work. They had a sysadmin tool called "smitty" that I particularly hated.
My early career involved compiling a lot of software that would have been better run on Linux or some BSD variant for AIX. I can confirm that building stuff like a custom Apache stack on AIX (don’t ask) was like pulling teeth using XLC.
I’m not really old enough to be an actual grey beard, but I got just enough hands-on experience with late-period Big Commercial Unix to feel like I at least swept floors and fetched coffee in the exclusive club. Cool, even if it was frustrating at times.
I guess that is the thing: we've settled into a monoculture that makes me deeply uncomfortable. NeXT won the desktop, but there's only one implementation. And Linux won the server. But in the meantime, I feel like only OpenBSD is credibly providing public infrastructure that everyone uses.
I kind of worry that we're heading toward a world in which nobody really understands the guts of the systems that sit underneath modern cloud / containerized services. Furthermore, because that stuff isn't as exciting as the JavaScript framework du jour, the amount of innovation happening in the lower layers is nonexistent.
The monoculture is a good observation. Aside from the dangers you mention, the death of commercial Unix also killed off a lot of healthy competition. There used to be, for example, lots of incentive for HP, IBM, Sun, and others to tweak hardware and software to keep at the top of TPC-C benchmarks.
Edit: Well, also the security implications of the Linux/x86 monoculture.
Couldn't it be argued that a monoculture oriented around a specific architecture actually makes it easier to learn and understand the system guts, because of the stability of available learning materials and knowledge resources?
Oh, forgot to mention a funny thing about Sparc machines. It was trivially easy to accidentally halt the system by sending a break from the console. That's accidentally pressing one key on a serial console. I remember some company selling inline serial filters for $100 each.
The first day I started working for a company that had Sparc/Solaris machines around I got warned about that, and to never use the "killall" command. On linux that kills just matching processes. On Solaris that literally kills all processes.
Even though I haven't had to deal with Solaris in about a decade, and it's now safe to do killall, it took me ages to break out of the habit of doing:
I've always wondered why they included it. There seems to be literally no use for it, not even a case where it is anything but dangerous. Even when shutting down the system, init needs to run. Can anybody enlighten me?
"killall - kill all active processes killall is used by shutdown (1M) to kill all active processes not directly related to the shutdown procedure "
So, intended for use by shutdown(1M) and nothing else. It's basically a completely different utility to the Linux one. They just happen to, unfortunately, share the same name.
Interesting. I've been an AIX dweeb since 3.2.5 and it's actually my favourite proprietary Unix mostly because it's so ornery. I still have a POWER6 running it that handles most of my web and mail tasks.
In fact, I tend to develop on it since if it works on AIX it will be portable enough to work other places. That said, I like xlc performance but I despise its weird command line options. I mostly just use gcc on AIX also.
I do prefer smit to things like HP-UX sam, but I'm accustomed to IBM overengineering. ;)
I worked on IBM WebSphere 1998-2002 in those days. I supported AIX, HP/UX, Solaris 8, and 4 RPM-based Linux versions. Our job was to do nightly builds and test on each OS with DB2, Oracle, etc. HP/UX and AIX were menu-driven and painful for automation. Solaris was much more fun to work with. Red Hat, SuSE, Yellow Dog and Mandrake were the Linux distros. No yum to handle dependencies. I remember working in bash a lot to handle package dependencies.
One day I got an Itanium from Microsoft. It was fun to play with. Sadly I was forbidden from opening the server. It was so alpha that adjusting the clock would kill the server. It never went very far due to being so alpha.
I chuckled. Which was silly as I missed the boat with Linux until Ubuntu came along. The versions of Linux I first saw were mere toys compared to the SGI awesomeness I knew at the time.
Oh, yeah. I did not see it with early Linux, and like you, thought it was a fun toy. It was specifically the 64 bit x86 thing that caught my attention. The memory ceiling for 32 bit x86 made it easy to ignore Linux. If I remember right, Linux was also still crashing pretty regularly under load just before that time frame too.
Before that time frame, PC hardware vendors had little incentive to make stable hardware. It was easy to blame crashes on Windows. If the hardware itself made a few crashes per week, that wasn't going to be enough to be noticed.
The early 64-bit PC hardware was server grade, intended for NT and Linux. That set a standard, and then gradually people moved away from junk like Windows 98SE and Windows ME. Hardware bugs no longer had such an easy time hiding in a flood of software crashes.
I figured it out right around Slackware becoming a big distro; that would be in the late 90s / early 2000s. It was pretty clear to me, but I was young and a new entrant to the industry.
SGI's Itanic workstations were pretty anaemic, too. They only looked good compared to the phone-it-in R10K derivatives they were pooting out around that time. I'd rather have had a Tezro than a Virtu any day.
So HPUX finally dies. Doesn't leave much of the original commercial RISC/Unix players still on some level of life support. I guess there's still AIX/Power.
We used it for a Clipper based management application.
My first task was bringing back to life our school labs network, so that we could use it for that application, got to love those coaxial cable terminators.
We are buying a new iSeries and the IBM person was very specific that the terminal we have in the server room is NOT supported anymore without some weird interface board. I was kind of sad at the thought. Even sadder knowing I'm going to have to load IBM's Client Access software on some poor PC.
I worked for an egovernment company several years ago. A state (county?) agency built a service with us, and wouldn't expose the actual database to us, just an SNA supporting interface. We were essentially scraping the data. Messy, unsanitised data. I really didn't envy the developer who looked after that particular application.
I'm pretty sure what we are getting is a renamed Client Access on a small PC we need to set in the server room. Client Access is legendary for how bad it is from installing to using. I would rather have the terminal.
I used Summer87, 5.x in school assignments and early contract work, also got a copy of Visual Objects, but ended up trying to do the app in VB instead.
Demand. Porting from PA-RISC to Itanium wasn't pain free for end users. Porting to Linux/x64 isn't likely much more effort than some HPUX/x64 that doesn't exist yet.
If you are going to x86_64 you might as well run Linux these day. The only people who might still need HPUX are people who have binary software packages that only run on HPUX, and they obviously won't run on x86_64.
I wonder what it’s like working on Itanium over the last decade (and perhaps earlier). I can see someone joining early on when the marketing buzz was exciting and reality had not set in.
But now...you’re working on a product that is widely mocked and has no future, yet releases are still being made. How depressing...what causes someone to stay on?
A lot of people have good jobs supporting and incrementally enhancing legacy products and product lines of various kinds in software, computer hardware, and in many other areas. It’s mostly a Silicon Valley concept that if you’re not working on something ground-breaking you’re wasting your life. And how many people at some of those big SV companies are mostly just working on ad tech?
True, my point is that itanium itself was a laughingstock, and clearly with no future, so must have been embarrassing to talk to your friends about what you do for work.
Making spare parts for the B-52, or maintaining security fixes for Solaris (which has its fanatic fans) can be rewarding, no question. But to work on the Itanium any time in the last decade must have been soul-sucking.
I was an HP-UX kernel engineer from 2002 til 2005, had a brief interlude writing IA64 CPU diagnostics, and then was a Linux kernel engineer from 2007 til 2010, all on Itanium systems.
In that time frame, it wasn't clear that horizontal scale out architecture (aka "the cloud") was going to dominate, and that scale up systems were going the way of the mainframe. The thinking was that there would always be a healthy balance of scale out vs scale up, and btw, HP alone did $30B+ revenue yearly on scale up with very slow decline, just like the mainframe market, which is still $10B+, even today.
To put that in today's terms, if you pitched a startup with a $30B TAM, VCs will definitely be returning your emails.
So no, it wasn't embarrassing to talk about working on IPF any moreso than it would be to talk about POWER today. It's just another CPU architecture with some interesting properties but ultimately failed in the market place. Just like Transmeta or Lisp Machines.
What should be embarrassing, but clearly is not, is to slag off entire industries not knowing shit about them.
Edit: I think working on B-52 parts would be an amazingly fun job.
In my experience, the embarrassment of a job doesn't come from the technology you are working with.
It comes from the people you are working with, and the people you are serving as customers, and whether these stakeholders are being treated well and having their needs met.
Probably startup culture is a better descriptor though I do think you often see the same mindset in at least outside looking in attitudes about the newer large SV employers.
Maybe you're a few years from retirement and your core skillset is itanium. Maybe you just need a paycheck. It was developed and maintained in many locales, including ones without a lot of alternative employment choices for an architect or vlsi designer.
I've met lots of engineers who are deeply passionate, but I've also met plenty for whom it's really just a job.
As the parent indicates they're porting it to x86-64. I've been away from following HP proprietary systems for almost 10 years but they put a plan in place quite a while ago when it became obvious that Itanium had no future. Remember that systems in this space don't need to be the latest and greatest. They need a long support roadmap but it's mostly fine if hardware is on the older side.
I got to work with one of the first Itanium machines back in 2000 working as an intern. My job was to port Perl to IA-64. It was an amazingly fast machine - like living a few years into the future.
I can see why it failed to gain mass traction, but that’s a shame. IA-64 was so innovative.
HP paid my employer (Progeny) to help port Debian packages to an early Itanium system. I don’t remember thinking it was fast at all, but maybe my memory is colored by later miseries.
It's interesting that it was actually AMD that kept Intel's x86 architecture alive, by extending it to x86-64.
Intel knew that the x86 architecture was limited in time, and tried to kill it off with the 64-bit Itanium.
AMD had a different plan, and released 64-bit capable x86 processors, obstructing Intel’s plans to dominate with Itanium. I think this is key to why Itanium never caught on, and why writing software for it is so hard.
Itanium represents a classic case of jumping architectures and committing too soon. Any mature system represents not just the sum of a huge amount of design work but also an untold number of hours (years really) of beta testing, bug fixing, and iterative refinement. A new system may be built on a better foundation but more often than not in its immature state it will still have shortcomings until it's been through a long period of "burn-in" and bug fixing.