> As in, the thing that tried to implement Linux syscall interfaces on top of the Windows kernel?
It's really funny, NT was supposed to be great at three things:
- be easily ported to different hardware architectures, which then never actually became relevant (and nowadays macOS is the best example for actual architecture migrations!)
- have a much more sophisticated and elaborate security model than those filthy unices (and now we're getting sudo on Windows, because 30 years later, it's still too complicated for anyone to use as intended)
- allow fluid switching between different userlands, be it win32, OS/2 (RIP), Unix (RIP), and anything else you could want in the future! (except no, you're getting VMs now)
The issue with VMs for Linux (and Windows isn't the first attempt here, rather the last of several by UNIX vendors and by IBM/Unisys on mainframes and micros) is that Linux kernel syscalls have become more relevant than POSIX.
Thus it is easier and cheaper to plug in a Linux VM than to implement POSIX and then get the same kind of complaints that Linux folks level at macOS or other proper UNIX environments.
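WSL2 is exactly that plug-in VM: it boots a real Linux kernel in a lightweight utility VM, so the syscall surface is the genuine article rather than a reimplementation. A rough sketch from a Windows command line (the distro name is just an example, and availability depends on your Windows version):

```shell
# WSL1 translated Linux syscalls onto NT; WSL2 instead runs a real
# Linux kernel inside a lightweight Hyper-V utility VM.
wsl --install -d Ubuntu      # install a distro (Ubuntu as an example)
wsl --set-version Ubuntu 2   # make sure it runs under the WSL2 VM
wsl uname -r                 # prints a real Linux kernel version,
                             # typically with a "-microsoft-standard" suffix
```

The kernel version output is the tell: under WSL1 there was no Linux kernel at all, only an NT subsystem answering Linux syscalls.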
> - be easily ported to different hardware architectures, which then never actually became relevant
Commercially relevant, perhaps, but it has remained technically relevant: the NT kernel has historically run on lots of different hardware architectures and continues to run on a small variety today. The ARM port is still an active, living branch, even if total hardware sales are fewer than projected and Microsoft ceded most of that hardware space commercially when it gave up on phones.
> - have a much more sophisticated and elaborate security model than those filthy unices (and now we're getting sudo on Windows, because 30 years later, it's still too complicated for anyone to use as intended)
That "sudo for Windows" still leverages the elaborate Windows ACL model. It's not as if they are also porting Linux kernel security on top of Windows. They just realized that both "RunAs.exe" and PowerShell's "Start-Process" have more complicated CLIs than necessary for simple UAC cases, and decided to copy the CLI arguments of a well-known tool.
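For comparison, here is roughly what "run this elevated" looks like under each of the three, from a PowerShell prompt (the command being elevated is just an example):

```shell
# Classic runas: prompts for the target account's password rather than
# going through UAC, and quoting nested command lines gets awkward fast.
runas /user:Administrator "cmd /k whoami"

# PowerShell: UAC elevation via the RunAs verb, but verbose for the
# simple "just run this as admin" case.
Start-Process cmd -ArgumentList '/k','whoami' -Verb RunAs

# Sudo for Windows (Windows 11): the familiar one-word prefix,
# still backed by UAC and the Windows security model underneath.
sudo whoami
```

All three end in the same place, a process with an elevated token; only the ergonomics differ, which is the point of the sudo wrapper.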
> - allow fluid switching between different userlands, be it win32, OS/2 (RIP), Unix (RIP), and anything else you could want in the future! (except no, you're getting VMs now)
Turns out users don't actually want to switch userlands on the fly, and when they do, VMs feel more right as an abstraction?
More (RIP) than OS/2 or the various attempts at POSIX userlands, Windows 8 actually tried to deliver a truly modern userland as a wholesale new experience, and failed spectacularly. Switching was fluid and felt good if you enjoyed the new userland (which had some extraordinary, noticeable benefits in boot time, power/battery usage, and other areas). Coordination between the two userlands got really good in 8.1. The final lesson that seemed to come out of Windows 8 was to never try that again, because users hated it and didn't understand it. (I still lament how much of "didn't understand it" was a failure of education, PR, marketing, and incidentals rather than a technical problem. There was real technical appeal in the chance to move from win32 to a userland that was greener, both as in pastures and ecologically.)
As someone who believed in the WinRT dream, I am deeply sour about how WinDev managed the whole story; it wasn't only users not wanting to adopt the new world.
Microsoft itself made a mess out of the developer experience.
Now I am back to distributed computing, and for anything Windows the classical frameworks are good enough.
>macOS is the best example for actual architecture migrations
Eh, they did 2 migrations while supporting at most 2 architectures concurrently. Nothing compared to Linux, which is maintained for x86, POWER, ARM, s390x, MIPS, etc. concurrently.
Does Linux allow you to run your s390x binary on your ARM system? No.
As others have pointed out, the Mac's migrations carried existing binaries along. Apple has done 3 of these migrations: 68k to PowerPC, PowerPC to x86, and x86 to ARM. Each time, users could bring their existing binaries and keep using them, and each time the binaries from the previous system generally ran as fast or faster on the new one. As far as I am aware, Linux has never done anything like this.
There are applications for that on Linux (qemu and box64/box86 being the best known), they just aren't installed by default on most distros.
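Concretely, the usual mechanism is qemu's user-mode emulation plus the kernel's binfmt_misc handler, which lets the kernel hand a foreign-architecture ELF binary to the right interpreter transparently. A sketch on a Debian/Ubuntu system (package names vary by distro, and `./some-amd64-binary` is a placeholder):

```shell
# Debian/Ubuntu example; package names differ on other distros.
sudo apt install qemu-user-static binfmt-support

# Run a single x86-64 binary on, say, an ARM host explicitly...
qemu-x86_64-static ./some-amd64-binary

# ...or transparently: the binfmt_misc registrations installed above
# let the kernel dispatch foreign ELF binaries to qemu automatically.
./some-amd64-binary

# Inspect the registration the kernel uses for that dispatch.
cat /proc/sys/fs/binfmt_misc/qemu-x86_64
```

This is the same shape as Rosetta 2 or Windows-on-ARM emulation, just opt-in rather than shipped and tuned as a first-class migration path.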
A large part of why the binaries ran well on macOS migrations is that each time the migration came with a substantial processor speed increase. This meant that emulated/translated binaries were able to roughly match their previous performance, while native binaries for the new architecture were significantly faster. On Linux, however, the most common reason for cross-architecture tech these days is running x86 binaries on something like a Raspberry Pi, which means a slower processor on top of the translation layer, so non-native apps see a huge drop in performance.
macOS did migrations. Linux is just supported on those architectures at the same time, without any real layer that lets users switch from, say, x86 to ARM without recompiling the entire world.
But contrast that to Microsoft's absolutely hilariously inept attempts at bringing Windows to ARM. The amount of cumulative money spent over the last 15 or so years versus the actual market penetration is insane.