Hacker News

Curious question on the period.

Assuming Itanium released as actually happened... (timeline, performance, compiler support, etc)

What else would have had to change for it to get market adoption and come out on top? (competitors, x86 clock rate running into ceiling sooner, etc)




Well, what actually killed it historically was AMD64, and AMD64 could easily not have happened; AMD has a very inconsistent track record. Other contemporary CPUs like Alpha were never serious competitors for mainstream computing, and ARM was nowhere near being a contender yet. In that scenario, mainstream PC users would obviously have stuck with x86-32 for much longer than they actually did, but I think in the end they would have had no real choice but to be dragged kicking and screaming to Itanium.


PowerPC is the one I’d have bet on - Apple provided baseline volume, IBM’s fabs were competitive enough to be viable, and Windows NT had support. If you had the same Itanium stumble without the unexpectedly-strong x86 options, it’s not hard to imagine that having gotten traction. One other what-if game is asking what would’ve happened if Rick Belluzzo had either not been swayed by the Itanium/Windows pitch or been less effective advocating for it: he took PA-RISC and MIPS out, and really helped boost the idea that the combination was inevitable.

I also wouldn’t have ruled out Alpha. That’s another what-if scenario but they had 2-3 times Intel’s top performance and a clean 64-bit system a decade earlier. The main barrier was the staggering managerial incompetence at DEC: it was almost impossible to buy one unless you were a large existing customer. If they’d had a single competent executive, they could have been far more competitive.


> PowerPC is the one I’d have bet on

Interesting to note that all state-of-the-art video game consoles of the era (Xbox 360, PS3, and Wii) used PowerPC CPUs (in the preceding generation, the Xbox used a Pentium III, the PS2 used MIPS, and the GameCube was already PPC).


Power.org [1] was a fairly serious initiative to push Power for consoles and the like at one point.

[1] https://en.wikipedia.org/wiki/Power.org


No, it could not *not* have happened.

Address space pressure was immense back in the day, and simply doubling the width of everything while retaining compatibility was the obvious choice.


> Address space pressure was immense back in the day, and plain doubling the width of everything while retaining the compatibility was the obvious choice.

PAE (https://en.wikipedia.org/wiki/Physical_Address_Extension) existed for quite some time to enable x86-32 processors to access more than 4 GiB of RAM. Thus, I would argue that an OS providing some functionality to move allocated pages in and out of the 32-bit address space of a process, enabling it to use more than 4 GiB of memory, would have been a much more obvious choice.
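The windowing scheme described above (comparable in spirit to Windows AWE or the Linux highmem workarounds of the era) can be sketched with ordinary file mapping. This is a toy illustration only, not any real PAE API: a small fixed-size window stands in for the 32-bit address space, and a backing file stands in for physical memory beyond 4 GiB; the sizes and helper names are made up for the example.

```python
import mmap
import tempfile

# Toy scale: a 64 KiB window standing in for the limited address space,
# and a larger backing file standing in for physical memory beyond it.
WINDOW = 64 * 1024
BACKING = 4 * WINDOW

backing = tempfile.TemporaryFile()
backing.truncate(BACKING)

def with_window(offset, fn):
    """Temporarily map one window of the backing store and run fn on it."""
    view = mmap.mmap(backing.fileno(), WINDOW, offset=offset)
    try:
        return fn(view)
    finally:
        view.close()

# Write through a window at one offset, then unmap it...
def write(view):
    view[0:5] = b"hello"
with_window(3 * WINDOW, write)

# ...then remap the same region later: the data persists in the backing
# store even though the whole store was never mapped at once.
data = with_window(3 * WINDOW, lambda v: bytes(v[0:5]))
```

The point of the sketch is the cost the sibling comments argue about: every access outside the current window requires an explicit remap (a syscall and TLB work), which is exactly why a flat 64-bit address space was more attractive than windowed schemes.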


> Thus, I would argue that if the OS provided some functionality to move allocated pages in and out of the 32 bit address space of a process to enable the process to use more than 4 GiB of memory ...

Oh, no. Back then the segmented memory model was still well remembered, and no one wanted a return to that. PAE wasn't seen as anything but a band-aid.

Everyone wanted big flat address space. And we got it. Because it was the obvious choice, and the silicon could support it, Intel or no.


PAE got some use in Darwin and Linux for that "each process gets 4GB" model you mentioned, but it was slower and didn't let individual processes easily use more than 2-3GB in practice.


> AMD has a very inconsistent track record

In what way? Their track record is actually pretty consistent, which is partly what led to them fumbling the Athlon lead (along with Intel's shady business practices).

During the AMD64 days, AMD was pretty reliable with their technical advancements.


Yes, but AMD was only able to push AMD64 as an Itanium alternative for servers because they were having something of a renaissance with Opteron (2003 launch). In 2000/2001, AMD was absolutely not seen as something any serious server maker would choose over Intel.


You're right, there were ebbs and flows in their influence, but they were consistent within those trends. Releasing an extension during a strong period was almost certain to be picked up, especially if Intel wasn't offering an alternative (which Itanium wasn't considered to be, as it was server-only).


Apple was fine on POWER


My uninformed opinion: lots of speculative execution is good for single core performance, but terrible for power efficiency.

Have data centres always been limited by power/cooling costs, or did that only become a major consideration during the move to more commodity hardware?




