I wish Microsoft would release their own Linux-based OS with a compatibility layer to let me run Windows apps. I'm not overly impressed with the direction Apple is going, but I enjoy a *nix-native environment too much to go back to vanilla Windows. It would change the math a lot for me if it were full-blown Linux under the hood.
Disclosure: I work at Microsoft but not on Windows (and my work machine is a MacBook Pro FWIW)
WSL 2 is a lot like that -- it's a VM in the sense that it uses Hyper-V, but it's super lightweight. And in fact, when WSL 2 is enabled, both Windows AND Linux are running in the same type of VM (separate, obviously), so it's near-native performance in a lot of ways. There are still some I/O issues that the team is actively working on, but early benchmarks are quite promising.
I'm nervous about WSL 2. I use WSL at home quite a bit and I'm on the Fast Insider ring. My PC hasn't been able to boot Win10 for over a year with virtualization support enabled in my BIOS. No errors or anything, just hangs at boot. I've left feedback in the hub, but crickets. Don't know what the problem is. It used to work, but at some point an update broke it. Not bleeding-edge hardware, either. About a 4 to 5 year old PC. Intel i7 CPU, think it's an X79 chipset (not at the computer, so can't verify). It's just weird...
I had a similar problem. With Hyper-V my PC BSODed like every other day. I did some investigation on the dumps and I think the problem was in the Nvidia driver (not entirely sure, though, but the stack traces were from that driver). I don't really know whether it was ever resolved, because I decided not to use Hyper-V ever again; VirtualBox is much better.
I never got BSODs, just hangs at start, so I don't have any dumps to look at. I'm also using Nvidia (GTX 1080) and it has probably been a few months since I updated drivers, but I've been having the issue I think since at least the 1803 update.
I haven't been on the PC for several weeks, so I don't have a link to my feedback. I'll post it next time I'm on.
This is a total aside, but I really don't understand voting on comments. I've no idea what it could mean - levels of agreement, interest, offensiveness (or scores of other possibilities)? It seems entirely meaningless, therefore pointless, to me.
This is basically it. I upvote things I want to go up and downvote things I want to go down. Off-topic meta discussions about votes should always go down.
Sure, I know what it does, I'm just not sure why anyone would use it. I never do normally want a comment to go up or down (other than the odd egregiously abusive or aggressive item).
I may disagree with a comment, but that doesn't remotely generate in me an opinion re where it should sit in the tree. I don't think other people should or are likely to find a comment agreeable/interesting/relevant/insightful just because I do. Different strokes I guess.
Fair enough. I honestly suspect your signal there is more than lost amongst the noise of readers reflexively voting for what they happen to agree with. Anyway that's probably enough of this particular dead horse.
A comment more near the top will be seen more and therefore get more discussion, so if you want that, vote it up. If you want it to get less discussion, vote it down.
It’s happening at the hypervisor level, and in this case it’s Type 1, so I guess you could but I don’t know how. My ops colleagues are hanging their heads at me right now.
> There are still some I/O issues that the team is actively working on, but early benchmarks are quite promising.
=/
Hoping to run my R code on WSL 2. Glad I got the answer. I wish you guys well on I/O issues.
Is there any performance hit for WSL2 in regard to process forking? I recall that how window and unix deal with process is different (my R code uses mclapply parallel package for unix). I'm curious if there will be any performance hit that stands out versus running on a unix system for parallel/concurrency code.
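For context, here's roughly the pattern I mean. This is a hypothetical Python sketch (not my actual R code) using multiprocessing's "fork" start method as the closest analogue of mclapply: each worker is created with fork(), so any fork() overhead in the environment shows up directly in short-task throughput.

```python
# Fork-based parallel map, analogous in spirit to R's
# parallel::mclapply. Workers are created with fork(), so fork()
# latency matters most for many short-lived tasks.
import multiprocessing as mp

def square(x):
    return x * x

def parallel_map(func, items, workers=4):
    # "fork" is what mclapply relies on; it is the default start
    # method on Linux (including inside WSL 2's real Linux kernel).
    ctx = mp.get_context("fork")
    with ctx.Pool(workers) as pool:
        return pool.map(func, items)

if __name__ == "__main__":
    print(parallel_map(square, range(8)))
```

Timing a loop of these with short tasks on WSL 2 vs a native Linux box would be one way to see whether fork cost stands out.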
On Reddit, there was a benchmark of WSL1 vs WSL2, and it showed that the single-core perf of WSL2 was sometimes better than Windows (and even bare-metal Linux; how that happened I am not sure). The multi-core was not as good, but still better than WSL1.
I can't find that exact thread right now, but here's another one comparing WSL with Windows:
I bet the Windows hardware drivers for initialising caches, memory buses, power states, cooling etc. set certain systems up for slightly better performance.
Just out of curiosity: did they let you use the MacBook Pro without problems? Can you guys @ Microsoft decide to use whatever OS is more suitable for what you do?
Microsoft has had Macs in use for a very long time, even pre-dating the famous Gates investment. There are some infamous pics of delivery vans unloading tons of Apple boxes in Redmond. I would expect them to be even more liberal now under Nadella, but to be honest, they probably get great prices on Surfaces, which are very nice machines now.
Just saw this. The answer is it depends on the team, but unless there is a hardware/software reason tied to your job for a specific platform, people can choose what they want.
Many of my colleagues use macOS, some use Windows, some Linux. I have a work-issued Mac and a work-issued Surface Book, because it’s important to test compatibility, especially when it comes to CLI stuff, across different platforms. I have Linux VMs and docker containers and WSL configured too.
I really hope this results in PCIe passthrough on Hyper-V becoming reasonable, it was an absolute nightmare to try to configure and I could never get it to work more than a single VM boot.
In practice it's not really working though. At least not yet. There's a very long thread on virtualbox forums with people trying to get it to work but failing.
I personally have to reboot when I need to use docker or virtualbox. Very annoying.
Yep, this is currently confirmed broken on Windows 10 1903 (Windows Hypervisor Platform extended user-mode APIs). Affects VirtualBox, qemu, etc. We may see a patch, but it's up in the air right now. (<closes eyes and sighs>, I know.)
Hyper-V supports nested virtualization (only Hyper-V nested inside Hyper-V, though), and seeing as VirtualBox supports a Hyper-V virtualization backend, the solution may be to just find and fix all the bugs in that backend code when running under Hyper-V.
This is probably an unpopular opinion but.. there is a lot of value in having multiple competing kernels. In the same vein as browser engines, any monoculture is a bad idea. Not to mention there's been a lot of good work done on the NT kernel itself. Under the hood it's pretty advanced, even with all the user-space cruft on top.
Sadly, much like with the web, things have evolved such that it is virtually impossible to create a competitor due to all the accumulated complexity. The driver problem on the PC platform is pretty much insurmountable at this point, so any competitor would have to begin life, and gain massively in popularity somehow, on a much less flexible hardware platform.
It's a shame and makes me sad for modern computing.
I used to think the same, but maybe TODAY it's not as hard as before:
1- We have 4/5 mainstream OSes today: Win/OSX/iOS/Android + Linux. In the case of iOS/Android, some users already switch between the two.
2- If it has apps, that's almost all people need. Drivers are not THAT significant IMHO.
3- Chromebook is a thing, despite how stupid it sounds: an OS that is just a browser.
The web is the 6th truly mainstream "OS".
4- What do people need in drivers? USB (and with USB-C), HDMI, Bluetooth (maybe?) + WiFi. With this alone you get even better than an iPad: monitors, wireless mice, keyboards, external storage, and by extension you catch the rest.
5- Printers? Going by iOS, they can work fully wireless with no drivers on the device.
So I think a true desktop OS could launch with less worry about legacy hardware.
Where the problem IS is the apps. It needs to be better than a Chromebook, have great first-party apps that cover the basics and, hopefully, a VM layer so you could run Linux and catch the rest.
> What do people need in drivers? USB (and with USB-C), HDMI, Bluetooth (maybe?)
Do you honestly think there is a single USB-C driver that covers literally everything that can connect over USB? And anything with an HDMI port uses the same driver!?
No, but I suspect the array of devices is narrower than assumed?
I think that nailing the software (app + dev experience) is a bigger challenge and priority than worrying about the (external) hardware.
P.S.: One thing I forgot to articulate is the possibility of leveraging Linux as a bridge for drivers (possibly?), so the new OS ships a smallish Linux just to get compatibility.
Then you're just making a Linux distribution, or something like Android I suppose. You may as well say extending Chromium would solve the web monoculture issue. That isn't how it works.
> I think that nailing the software (app + dev experience) is a bigger challenge and priority than worrying about the (external) hardware.
Considering that the alternative to “OS in a browser” these days seems to be “every app is a (different) browser” (Electron), the Chromebook concept is not that stupid.
It’s sad that commercial realities are pushing people to throw away 30 years of progress on desktop UIs, but here we are.
I wouldn't be so cynical. As with all things, tech comes and goes in cycles. I think with the limit to Moore's law rapidly approaching for current silicon, there's a good chance that efficiency & simplicity will become front and centre again at some point.
Not sure if this will mean new architectures, operating systems & better tools.. but one can dream!
It is difficult to continue to hope for that while watching people cheer on additional layers of complexity like the garbage fire that is the modern web.
I did. I abused the history API on my website when it was new, and then they went and fixed it. As it turns out, sometimes people care more about the marshmallows than they do about the fire.
Less flippant answer: To some extent, I feel it is on me to try to access good things via the Internet, figure out how to interact with it reasonably safely and satisfyingly, and not get overly bent out of shape about the inevitable problems.
All things have their downsides. There are no perfect things that are nothing but goodness and light.
As they say: Sunshine all the time makes a desert.
Dozens and dozens of UNIX kernels (with Fuchsia being the exception I believe..? Not sure). Yes I know kernels have far outgrown POSIX, but it's still a form of monoculture. Diversity in all dimensions matters imo.
Unix kernels could use more cohesion, not less, in terms of API. Even if that API is implemented mostly in userspace
View the kernel API like HTML. Browser diversity is good, but they need to be implementing the same interface if the smaller players want to support any hardware that works with Linux
Windows Subsystem for Linux is really close to that, though. If you haven't tried it yet, you can download a Windows VM for developers from Microsoft that includes WSL and other dev goodies. It's mostly for evaluation, but good enough to get started with.
Iirc, even the Win32 APIs sit above the subsystem layer. There was the OS/2 subsystem back in the day, Win32, and one more I can’t remember from my training 11 years ago.
FWIW, I have given major conference talks years back on my linux laptop with PowerPoint running via Crossover Office (wine), mostly due to my advisor's insistence on using PowerPoint. So that has already been a thing for a long while.
My issue is that with nvidia laptops the hdmi connection almost never works as expected, when I have cuda installed. I'll try proprietary or noveau drivers. I've driven my monitor with intel drivers. I've tried so many things, but it has been multiple laptops that I've just never gotten this to work. I just can't have cuda and use my laptop for presentations. So I just always keep a windows partition just for presentations. If you do have a suggestion I'd love to hear it.
I've never used crossover. Thanks, I'll look into it.
Well, each WSL1 process is a Windows process. If you’re running vim in bash, for example, you’ll see “vim” and “bash” as separate processes in Task Manager. WSL is all about ELF binaries natively, just as Win32 runs PE executables natively.
"windows process" is misleading, WSL uses a separate "linux subsystem" with its own syscalls, process/thread data structures, etc. It also uses its own file system (albeit with all the state stored in NTFS and affected by filter drivers, which is why its I/O is slow)
True, I thought that a few hours after I wrote my last comment but couldn’t update it by that point and didn’t respond to it. Windows NT is the kernel, Win32 is the Windows subsystem that everyone’s used to (including kernel32.dll), WSL is a subsystem that runs ELF binaries and implements the Linux syscalls. It’s not yet obvious to me how much WSL2 is actually even a Windows subsystem, given that it’s running it as an actually separate OS.
I don't think it's a VM like Docker, no. WSL processes show up in Task Manager. This is more like Wine, but in reverse (Linux binaries on Windows)... in a sense. But a little trickier.
I think the OP is probably referring to the fact that running Docker on Windows and macOS is accomplished by running a Linux VM which the Docker containers run in. Not that Docker containers are VMs.
In practice, the vast majority of docker containers are Linux containers, so most uses of docker for windows need a Linux kernel to do anything useful.
As such, the 'native' Docker for Windows you linked is actually managing a Linux VM (Hyper-V or VBox) for you; the docker server runs on that VM, and the native Docker Windows binary simply connects to that server.
They also do a lot of tricks to connect networking and file shares with that VM. The file shares part is almost completely non-working in my experience, with scary comments in the bugs like 'doesn't work if your windows password contains certain characters' (it sets up CIFS shares between the Linux VM and some parts of your Windows file system).
So, not sure what docker for Mac looks like, but Docker For Windows is only 'native' in a very hand-wavy sort of way, unless (maybe) if you're running Windows docker containers.
Docker desktop runs a Hyper-V VM. Prior to that, Docker used a Virtualbox VM on Windows. The current plan seems to be to integrate with WSL2 once that ships [1].
Why? This is a common ask, but what exactly makes a Linux kernel superior to Windows?
There seems to be an assumption that Linux is the ultimate in OSes. But most of the value comes from the large ecosystem and momentum, rather than specific technical advantages.
The biggest practical advantage I have found is that Linux has dramatically better filesystem I/O performance. Like, a C++ project that builds in 20 seconds on Linux takes several minutes to build on the same hardware in Windows.
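To make that concrete, here's a minimal (and admittedly crude) hypothetical sketch of the metadata-heavy pattern that builds hammer: lots of small file creates and stat() calls. The absolute number means nothing; it's only interesting when you run the same script on both systems and compare.

```python
# Microbenchmark of many small create+stat operations, the kind of
# filesystem workload where NTFS plus filter drivers (antivirus etc.)
# tends to lag ext4 badly. Run on both OSes to compare.
import os
import tempfile
import time

def small_file_benchmark(n=1000):
    with tempfile.TemporaryDirectory() as d:
        start = time.perf_counter()
        for i in range(n):
            path = os.path.join(d, f"f{i}.txt")
            with open(path, "w") as f:
                f.write("x")
            os.stat(path)  # extra metadata hit, like a build tool's checks
        return time.perf_counter() - start

if __name__ == "__main__":
    print(f"{small_file_benchmark():.3f}s for 1000 create+stat ops")
```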
I don't think this is about technical superiority of one kernel vs. the other. It is about the software environment. Linux is the most actively used Unix-derivate at this time. So by running Linux you get a whole software stack. Parts of this stack has been ported to Windows too (e.g. via Cygwin) but you get the most coherent user experience by just installing a Linux distribution, as WSL allows you.
In the post-SGI, pre-Second-Jobsian-Revolution world of the the mid-late 90s, there was a real malaise in workstation computing. The systems were dumbed down, the hardware was homogenized and commoditized into joyless beige, an ocean of "Try AOL Free for 100 days" CDs jostled and slushed amidst the flickering fluorescence of the cube farm, where CRTs and overhead lights would never quite hum along at the same frequency, zapping the air with an eerie, sanitized sick-static. This was the future of computing.
This gloomy world of palpable beige-yellow-purple is the world of Microsoft Windows. As a product, it caters exclusively to that clientele, in that environment. To be blunt, above all other design considerations, Windows was built to accommodate the lackluster office drone -- or, more precisely, their bean-counting overlords who wanted to save on user workstations.
Despite attempts to undo this, the design has proven impossible to build upon. How many legitimate improvements in the cutting edge of either academic or industrial compsci have been built on Windows technology, for any reason other than "MS signs my paycheck so I'm contriving this to appear like I'm happy to be using PowerShell"?
MS was able to paper over the deficiencies for a while through sheer force (pushing .NET as a semi-unified computing environment, Ballmer screaming "DEVELOPERS!", etc.), but Windows was in no way prepared for the revolution brought by virtualization in the mid-aughts, and any shred of a hope that MS would somehow recover was utterly and entirely obliterated by the widespread proliferation of containerization. Good luck getting a real version of that working on Windows.
I've said it before and I'll say it again: in about 5 years, I wouldn't be surprised at all to see the new version of "Windows" plumbed end-to-end atop a nix-like kernel, and driven by a hybrid userland comprised of a spit-polished copy-paste from WINE + a grab bag of snippets from the proprietary MS-internal Win32.
The flexibility of the nix model is the indisputable, indomitable winner here, and it stunned and killed the Goliath in its tracks. This is the ultimate surrender to the open-source model. Windows's top-down, "report to your cube by 8:26am sharp and don't question the men in the fancy suits" approach to computing resulted in a rigid operating system that was unable to keep pace with the technological demands of the many pantheons of loyal corporate drones that comprised their user base. Even strictly-Microsoft shops are forced to give developers MacBooks now, or they're unable to get anyone competent to sign on.
It really couldn't get more poetic than MS desperately integrating Linux into their OS so that people won't switch to the more flexible nix-like systems for their workstations, although I anxiously await MS accidentally linking in a GPL module and thus becoming required to disclose the Windows source code. :)
Meanwhile, per usual, nix-like OSes have been humming for 50-ish years now and show no signs of slowing down. IMO, that's all the objective, borne-out proof one needs to say that for all practical purposes, Windows couldn't stand the test of time.
Please note that this is not about Linux per se, but the overarching design theory and development processes in use in major operating systems, and the massive success it represents for open-source, research-driven systems.
The chapter on "The Windows Way" has been written, and whatever its benefits may be in theory, they don't bear out as sustainable in practice.
It'd be fun to do a macro-scale timeline comparison to Sun. These invincible tech behemoths that cater to their narrow niche become rotting, hollowed-out fossils as the free systems continue to develop and evolve the cutting edge. Microsoft is right on time here, and I fully expect to see them struggle through the next decade or so until Larry Ellison finally puts them out of their misery.
I think I've read this free-beer explanation of yours in about 100 distinct comments. In most of them you explain why some particular programming language that you hate with a passion came to develop mindshare while it was so unlikely considering what a bad design it is.
For some balance, you could consider how Windows got preinstalled on most computers since the 90s and what huge dominance it had in the consumer and also development domains (also thanks to some evil capitalist practices, some might say!). That there are free versions of Visual Studio and other IDEs and dev tooling for various programming languages for Windows just as well. Also, consider that I can buy Windows 10 Professional for $8.90 with 5 seconds of googling.
Absolutely.
At this point, there isn't much value for Microsoft to hang on to the legacy of Windows - with most of their revenue coming from Azure/Cloud and developer products, it might be time to say goodbye to that legacy and make it subsystem on a modern Linux-based system.
That might also be a way for them to become more relevant in mobile devices and platforms.
The line item in Microsoft's last annual report which encompasses Windows is "More Personal Computing" and shows $10 billion of operating income on $42 billion of revenue.
Windows revenue grew $925 million. Not to $925 million. By $925 million.
> At this point, there isn't much value for Microsoft to hang on to the legacy of Windows
I would say this move by Microsoft is the exact opposite, because it maintains Windows at the centre of the picture.
Microsoft is basically saying, look you can have all your Linux goodness and best of all you can access all that goodness using your current Windows desktop.
Using WSL2, the lines between Windows and Linux are very blurred.
I'm convinced there's nothing stopping Microsoft from making their own Linux based OS by porting their desktop and GUI models on top of a Linux kernel + filesystem + drivers, not that far from what Apple did with OSX.
Technically it'd be an interesting challenge, but I fear the day we'll see also Linux software asking to be run on that Linux for "better compatibility" because that is what the developers used. In that case, thanks but no thanks.
I wonder whether they'll experiment with Windows compatibility in the WSL2 layer as an Extend[1] phase, to possibly utilize it in a future upcoming distro, creating an Extinguish[1] situation.
... except that (a) WSLv2's kernel will be running in a VM, and (b) macOS's kernel is much more like Linux's -- fork/exec, networking, filesystem, memory, TTY.
I see WSLv2 as more of a concession that a *nix-style interface is what developers want, and that the NT kernel can't deliver that.
NT supports interfaces for supporting a fork model efficiently, it's just not well documented. In fact, you can write an efficient, Windows-native POSIX environment entirely from user space using existing APIs: https://midipix.org/
The only thing that NT really lacks is a Unix-style TTY subsystem. The interplay in Unix between TTYs, process control, and signals is complex and deep, and none of it lends itself toward abstractions--it's all quintessentially Unix. For almost everything else, NT provides interfaces that can be used to efficiently implement Unix semantics.
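To illustrate what I mean by that machinery (a toy sketch, not how NT or any real compatibility layer implements it): even the humblest interaction with a pty routes through the kernel's line discipline, which echoes input, assembles lines, and turns keystrokes like Ctrl-C into signals for the foreground process group.

```python
# Minimal pty demo: allocate a master/slave pseudo-terminal pair and
# watch the line discipline echo "typed" input back to the master.
# Signals (SIGINT from Ctrl-C, SIGWINCH on resize) ride the same
# plumbing; this is the layer with no clean NT-native analogue.
import os
import pty

def pty_echo_demo(data=b"hello\n"):
    master, slave = pty.openpty()   # kernel allocates the pty pair
    os.write(master, data)          # "type" at the terminal
    echoed = os.read(master, 1024)  # line discipline echoes it back
    os.close(master)
    os.close(slave)
    return echoed

if __name__ == "__main__":
    print(pty_echo_demo())
```

(Unix-only, of course; that's rather the point.)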
Microsoft isn't interested in supporting a modern POSIX environment natively as that would merely accelerate the migration of the software ecosystem away from Windows APIs. IMO, it's why they killed WSL1. WSL1 was de facto a modern, native Unix personality on NT. They may not have managed 100% Linux compatibility, but they could've simply relabeled it Windix or something, used a small team to round things out, and received UNIX V7 certification in a jiffy.
WSL is about managing the shift toward a Linux-based cloud world, capturing as much of that business as they can without unnecessarily risking their existing footprint. WSL2 neatly allows them to maintain Windows API lock-in while also providing substantial value for those needing to develop Linux-based software. WSL1 fell short on both accounts, though standing alone I think it was much cooler technology.
> They may not have managed 100% Linux compatibility, but they could've simply relabeled it Windix or something
My take on it was that they were never able to implement all the syscalls (WSL just couldn’t reliably run some production server software), and filesystem calls were horribly slow (try npm install on a reasonably large JS project if you have any doubt).
You may be right — but I’d still like to dream that one day Microsoft will release a true Linux (or OpenSolaris, BSD, or whatever!) OS of their own, which their Office apps will fully support. That’ll be the day I weigh another option besides macOS.
Accessing Windows' NTFS volume from WSL2 Linux is (and will always be) even slower. I have no doubt they could've substantially improved file access, but management pulled the plug. WSL1 was basically just a proof of concept, after all.
Also, don't forget that Windows and NTFS have never been known for performance. Expecting file access to be as fast as ext4 from a Linux kernel was just the wrong set of expectations. If you want Linux performance you need to use Linux, of course, accessing its own block device directly; and people demanding that are going to get it with WSL2. The cost is that integration will be worse, both in terms of ease of use and performance. AFAIU WSL2 is using 9P now and in the future probably virtio-fs (https://virtio-fs.gitlab.io/) or something similar.
People keep saying how lightweight the WSL2 VM is. Well, that's how all modern VM architectures are now. If you launch a vanilla Linux, FreeBSD, or OpenBSD kernel inside Linux KVM, FreeBSD bhyve, or OpenBSD VMM there's very little hardware emulation, if any[1]; they all have virtio drivers for block storage, network, balloon paging (equivalent of malloc/free), serial console, etc; for both host and guest modes.
[1] I don't think OpenBSD VMM supports any hardware emulation.
> The only thing that NT really lacks is a Unix-style TTY subsystem.
The long, laborious ConHost / NT Console subsystem rewrite/refactor over the last few years has moved Windows a lot closer to the Unix PTY system for internal plumbing (while not breaking the NT Object side of things). The fancy new Microsoft Terminal in alpha testing now takes advantage of that.
> WSL1 fell short on both accounts, though standing alone I think it was much cooler technology.
I feel like this is an interesting case where version numbers hurt more than help, because it sounds like WSL1 and WSL2 rather than a pure 2 > 1 are going to be side-by-side for some time now (moving distros between WSL1 and WSL2 is supposedly a simple powershell operation and can be done in either direction) and hints from Microsoft that there may even still be additional investments in WSL1 depending on user use cases and interest.
I think it's good for the NT Kernel to maintain subsystem diversity, and so I do hope that WSL1 continues to see interesting uses, even if a lot of developers will prefer WSL2 more for day-to-day performance.
Just thinking out loud, because that is just very improbable. But effort-wise, implementing the Win32 layer over Linux is a harder thing than what they did with WSL1, surely, right? But at the same time, their WSL team was like what, a few guys? 10, 20? But the Linux kernel source is open; maybe that's the difference.