
Since cloud servers are a bigger market than users who want to run an old copy of VisiCalc, why doesn't either Intel or AMD produce a processor line that has none of the old 16 and 32 bit architectures (and long-forgotten vector extensions), implemented in silicon? Why not just make a clean (or as clean as possible) 64 bit x86 processor?



You mean like the Intel i860?

https://en.wikipedia.org/wiki/Intel_i860

Or the Intel Itanium?

https://en.wikipedia.org/wiki/Itanium

Or the AMD Am29000?

https://en.wikipedia.org/wiki/AMD_Am29000

Or the AMD K12 which was a 64-bit ARM?

https://www.anandtech.com/show/7990/amd-announces-k12-core-c...

All of these things were either rejected by the market or didn't even make it to the market.

Binary compatibility is one of the major reasons, if not the major reason, that x86 has hung around so long. In the 1980s and '90s x86 was slower than its RISC workstation competitors, but Intel and AMD really took the performance crown around 2000.


I think he's suggesting something more like the 80376, an obscure embedded 386 variant that booted straight into protected mode. So you'd have an x86-64 CPU that boots straight into long mode and could therefore drop things like real mode and virtual 8086 mode. AFAIK with UEFI it's the boot firmware that handles switching into 32/64-bit mode, not the OS loader or kernel, so it would be transparent to the OS and programs.
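
As an aside on how software discovers whether long mode exists at all: CPUID leaf 0x80000001 reports long-mode support in bit 29 of EDX (per the Intel/AMD manuals), which is what firmware or a loader checks before attempting the switch. A toy decoder for that feature word (the sample value is invented for illustration, not read from real hardware):

```python
# Toy decoder for the CPUID extended-feature word (leaf 0x80000001, EDX),
# where an OS or bootloader checks whether long mode exists at all.
# Bit positions are from the Intel/AMD manuals; the sample value is made up.

LM_BIT = 29        # Long Mode (64-bit) supported
NX_BIT = 20        # No-Execute page protection
SYSCALL_BIT = 11   # SYSCALL/SYSRET instructions

def has_long_mode(edx: int) -> bool:
    """True if the CPU reports long-mode support in CPUID.80000001h:EDX."""
    return bool(edx & (1 << LM_BIT))

# Hypothetical EDX value with the long-mode, NX, and SYSCALL bits set:
sample_edx = (1 << LM_BIT) | (1 << NX_BIT) | (1 << SYSCALL_BIT)
print(has_long_mode(sample_edx))   # True
print(has_long_mode(0))            # False
```

On real hardware the EDX value would come from executing the CPUID instruction itself; the decoding step is the same.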

But in order not to break a lot of stuff on desktop Windows (and ancient unmaintained custom software on corporate servers) you'd still have to implement the "32-bit software on a 64-bit OS" support. That probably means you don't actually simplify the CPU much.

Of course some x86 extensions do get dropped occasionally, but only things like AMD's 3DNow! (I guess AMD's market share meant few used it anyway) and Intel's transactional memory extension (TSX), which was just broken.


I think the idea is to go ahead and break lots of stuff on desktop Windows (and ancient unmaintained custom software on corporate servers). Let that software keep running on x86_64 hardware. But offer an additional choice--an x64-ng--that can only run a subset of software, but runs it even better than x86_64 can. You don't fill an entire datacenter with these, just a few aisles. Then you let people choose them for their whizbang modern workloads. Every year you replace an aisle of x86_64 racks with x64-ng racks. Twenty years from now, 25% of your datacenter is still the latest generation of x86_64, and those machines rent for a premium.

Just as if a datacenter today allocated some of its rackspace to zSeries or ARM or what have you. For workloads that gain advantages on those platforms.


A “64-bit clean” CPU would be nice, but practically you’d also want new 32-bit compatible CPUs for markets that need them. Gamers are still going to want to play old games with MORE POWER, businesses will want to throw more CPU at some process that relies on ancient code, and so on. Apple has tried forcing the issue, and it didn’t exactly make everyone happy; Apple’s view on compatibility is rather different to Microsoft’s or Linux/Linus’s.

So now you need to design and verify two CPU cores instead of one. The most efficient approach, from an engineering staffing and resource allocation perspective, would be to have the “x64-ng” core be the normal AMD64 core with the legacy support lasered off. So probably not much in the way of performance gains. If you let the designs actually diverge, you’re going to end up with duplicated work by more people/teams and thus less profit.

With the trend for dedicated low power cores, the companies already have two lines of core design to maintain, they aren’t going to want more.

I’m not saying it’s impossible, and a legacy free x86 core would be nice, but the business case for getting rid of 32 bit support probably isn’t there (yet?).

(You also mention zSeries, which has backwards compatibility in some form going back to System/360 from the '60s - getting rid of this stuff is hard once it’s entrenched.)


You could probably implement dropped extensions at the OS level, similar to how Rosetta handles binary translation.
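
The usual shape of that idea is trap-and-emulate: when a program hits a removed instruction, the CPU faults, and the kernel's fault handler decodes and emulates it in software before resuming. A toy sketch of the dispatch (opcode numbers and semantics here are entirely invented, and a real handler would decode machine code, not walk a list):

```python
# Toy trap-and-emulate loop: "removed" opcodes fall through to a software
# handler that performs the same operation on an emulated register file.
# Opcode numbers and their meanings are invented for illustration.

NATIVE_OPS = {
    0x01: lambda regs: regs.__setitem__("a", regs["a"] + regs["b"]),  # ADD a, b
}

# Emulations for opcodes the hypothetical new core no longer implements:
EMULATED_OPS = {
    0x0F: lambda regs: regs.__setitem__("a", regs["a"] * 2),  # dropped "double" op
}

def run(program, regs):
    for opcode in program:
        if opcode in NATIVE_OPS:
            NATIVE_OPS[opcode](regs)    # would execute directly on hardware
        elif opcode in EMULATED_OPS:
            EMULATED_OPS[opcode](regs)  # would be a fault -> kernel emulation
        else:
            raise RuntimeError(f"truly unknown opcode {opcode:#x}")
    return regs

regs = run([0x01, 0x0F], {"a": 1, "b": 2})  # (1 + 2) * 2
print(regs["a"])  # 6
```

The catch, as with Rosetta, is performance: every emulated instruction costs a fault plus a software handler, which is fine for rarely-used extensions but painful on a hot path.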


Binary compatibility kept x86 dominant, coupled with competing platforms not offering enough of a performance or price benefit to make them worth the trouble.

That formula has completely changed. With the tremendous improvement in compilers, and the agility of development teams, the move has been long underway. People are firing up their Graviton2 instances at a blistering pace, and my Mac runs on Apple Silicon (already, just months in, with zero x86 apps -- I thought Rosetta 2 would be the life vest, but everyone transitioned so quickly I could do fine without it).

It's a very different world.



It didn't already support arm64? Was nobody using it on iOS?

I remember trying to run Mercury on an M1 recently and having problems getting it to build - some of that was because it had very old-style, probably wrong, approaches to atomics written in x86 asm.

Also, a lot of games are still on x86, even constantly updated ones like Minecraft - and that's not even native code.


Both CPUs you mentioned are locked in... I hope for fewer locks in the future. Right now you can't buy Graviton CPUs, only rent them in an Amazon datacenter, and you can't buy an M1 CPU on its own or install Linux on it.


Apple sells those CPUs because they are "buying" users for their locked ecosystem, which will pay "lifetime" subscriptions to their services, probably enriched in the future with Apple search and Apple ads. 1984... but it started with a good CPU. Amazon is doing something similar: look how good our CPU is and how cheap, while they probably have zero margin on those and 50% on the competition.

Look, the argument for alternatives in the CPU space is not bad; it's just that the examples you have chosen are, IMO, ones where a CPU can look better pricewise and performancewise because the seller has a secondary, non-monetary interest in selling it to you. (Take that 3nm TR7990WX at $300... in a locked system that we sell you for $1500, and we'll take a lifetime 30% cut from whatever you buy with it anyway ;) If you want a 2TB SSD it's another $2k... but hey, the CPU is just $300!)

I will judge the M1 only when it is compatible with Linux and sold separately from the Apple (eco)system. Talking about architecture, ARM did a good job... but under Nvidia, I'm not so hopeful for the future.


Sorry for the rant, but some things are not comparable. Comparing the M1 and Graviton against Intel or AMD CPUs is like saying that, as an image hosting solution, Nextcloud plus a NAS you own is bad and costly because Google Photos is cheaper (free... until it's not)! The two are different things: one can be yours, the other... not.


No, there are a lot of PCisms that can be removed while still allowing for x86 cores. User code doesn't really care about PC compatibility anymore (see the PS4 Linux port for the specifics of a non-PC x86 platform that runs regular x86 user code like Steam, albeit one arguably worse designed than the PC somehow). Cleaning up ring 0 in a way that ring 3 code can't tell the difference, given a vaguely modern kernel, could be a huge win.


Because the number of transistors used for that functionality is absolutely negligible, so removing it has virtually no benefit.


Even so, doesn't having a more complex instruction set, festooned with archaic features needed by very few users, increase the attack surface for exploits and increase the likelihood of bugs being present? Isn't it a bad thing that the full boot process is understood in depth by only a tiny fraction of the people programming for x86 systems (I'm certainly not one of them)?


As a sibling said, if you can get the CPU into real mode, you can probably do whatever else you want, so its being there isn't a real security worry.

Dropping real and virtual mode wouldn't save a whole lot anyway; for the most part, the instruction set is the same, regardless of mode, just register selection is a bit different, and fiddling with segment registers is significantly different.

Mostly, the full boot process isn't understood in depth by many people because very few people need to know it in depth. The really full boot process includes detecting and enabling RAM and all that, and a handful of companies provide most of the firmware images for everyone. OSes usually start from the BIOS boot convention or UEFI, so they don't need to know all that early boot stuff -- well, really, bootloaders start there; OSes can start at Multiboot or UEFI if they want to save some work. An SMP OS will still need to know a little bit about real mode, though, because application processors (the non-boot processors) start in real mode, complete with segmented addresses, and need to get into protected mode themselves.
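
For anyone unfamiliar with those segmented addresses: in real mode a physical address is formed as segment * 16 + offset, and on the original 8086 the result wrapped at 1 MiB (the A20 gate story). A small sketch of the arithmetic:

```python
def real_mode_linear(segment: int, offset: int, a20_enabled: bool = True) -> int:
    """Physical address for a real-mode segment:offset pair.

    Real mode forms addresses as segment * 16 + offset. On the original
    8086 the carry out of bit 19 was lost (wrap at 1 MiB); with the A20
    line enabled, the carry survives.
    """
    addr = (segment << 4) + offset
    return addr if a20_enabled else addr & 0xFFFFF

print(hex(real_mode_linear(0xB800, 0x0000)))         # 0xb8000 (VGA text buffer)
print(hex(real_mode_linear(0xFFFF, 0x0010)))         # 0x100000 (start of the HMA)
print(hex(real_mode_linear(0xFFFF, 0x0010, False)))  # 0x0 (8086-style wraparound)
```

It also shows why the encoding is redundant: many segment:offset pairs name the same byte, which is part of what makes real-mode code annoying to reason about.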


Not really. Basically none of these features can be used outside of the kernel anyways, which means that the attacker already has far more powerful capabilities they can employ.


The number of transistors, sure. But the engineering time to design new features that don't interfere with old features is high, and the verification time to make sure every combination of features plays together sensibly is extremely high. To the extent that Intel and AMD are limited by the costs of employing and organizing large numbers of engineers, it's a big deal. Though that's also the reason they'll never make a second, simplified core.


It's never going to happen. The ISA is a hardware contract.


When things get to the point where AMD is considering making nonmaskable interrupts maskable (as the article states), maybe it's time to invoke "force majeure".


Yep. Benefit = the value of minimalism at every expense, minus the cost of incompatibility (in zillions): breaking things that cannot be rebuilt, breaking every compiler, every debugger, every disassembler, adding more feature flags -- and it's no longer the Intel 64 / IA-32 ISA. Hardware != software.


Since cloud servers are a bigger market than users who want to run an old copy of VisiCalc

Imagine if your VMs, which are being used to run old software, can't be used on cloud servers due to this feature absence. Virtualisation is used extensively for this purpose. The cloud providers certainly wouldn't like that.


Intel did this with the cores for the Xeon Phi. While it was x86 compatible, they removed a bunch of the legacy modes.



