The author of that article seems to be conflating GPU support with GUI apps somehow. GPU support is indeed coming to WSL2 and is currently available in Insider builds, along with a special release of the Nvidia driver and some Linux support packages.
This, however, has nothing to do with graphical applications or GPU acceleration of them. It's entirely targeted at machine learning frameworks.
I do think Microsoft has said that GUI stuff is on the roadmap. I hope they can continue to force the hand of Nvidia on the ridiculous matter of not providing any kind of GPU virtualization on their consumer cards.
There was a story a few weeks back about Microsoft working on a new Android subsystem. So they're truly coming full circle: Android OS stability (via Project Treble), the VM approach instead of the compatibility-layer approach, and improvements to hardware sharing/passthrough in WSL 2 mean that they can finally realize what they tried to do with Project Astoria.
They'll need to manage their own app store as I can't imagine they'll have gapps//shims out of the box, but that works in their favor if they wanted to do another mobile play.
When the new EU fair market regulation is adopted, they will be able to compete with an app store on Android (or iOS):
> allow the installation and effective use of third party software applications or software application stores using, or interoperating with, operating systems of that gatekeeper and allow these software applications or software application stores to be accessed by means other than the core platform services of that gatekeeper.
Since they already allow alternative app stores (e.g. Steam) on regular Windows, not much changes for them in this respect. But they will also be able to compete with the Play Store and iOS App Store with their own store.
It's already possible to have app stores other than the Play Store on Android, even simultaneously. I think some features are reserved in practice for the Play Store (or for stores pre-installed on the phone), so there could be some improvement on this point.
Google also currently prevents OEMs from installing competitors' stores by threatening to withhold access to Google's services. This is one of the points in Epic's lawsuit.
At this point I could see how some developers may prefer an MS store on iOS and Android devices over the native stores, especially if they make deeper cuts on pricing (10% of sales instead of 30%, and 3% of transactions instead of 30%).
I'm slightly leery about it all, but definitely warming up to MS this past decade. WSL2 and VS Code have made my life much easier, to say the least. Of course this is the intended effect: I no longer feel any urgency to move to a full Linux environment this way.
On-Topic: glad to see this GWSL thing, as I'd been thinking about doing something similar... getting X and PulseAudio working in Windows, and configured right in the WSL environment, is a pain to say the least. If this eases that pain, even if only for local logins, I'm pretty happy with it. I spent a couple of days trying to get it all working... I could launch Ubuntu's Firefox instance, but couldn't for the life of me get audio working right, and eventually just gave up.
>They'll need to manage their own app store as I can't imagine they'll have gapps//shims out of the box, but that works in their favor if they wanted to do another mobile play.
They own App Center, so maybe it's not that far away.
Will they run into the same roadblocks that befell Wayland on Nvidia? I guess not, because they'll likely create their own compositor that bridges into DWM (the Windows compositor), so they can use EGLStreams. But I sorta hope MS does pressure Nvidia into some kind of support.
Until they allow USB passthrough (which is more or less blocked by the fact that Hyper-V itself doesn't seem to have the feature), a whole bunch of my use cases are basically barred from WSL2. Shame, cause otherwise I really, really enjoyed using it - coming from a guy who's been on Linux for 99% of his time for ~15 years at this point.
WSL 2 is really cool, right up until you want/need to do anything with the underlying hardware. No USB passthrough, no hardware acceleration; GPU passthrough is there, but it's still early stages, as performance is about half of native and vendor tooling is absent, etc.
It's getting better all the time though - they've teased that USB passthrough might be doable, as it's already supported with Remote Desktop.
Yeah it's actually pretty damn good. Performance was better than it was with WSL1 for me, at least (especially IO). IIRC they are exploring USB passthrough using Hyper-V sockets, but it's still very early to actually talk about having that included[1]
Once they figure that out I might just as well be able to switch over and stop dual booting. I hate dual booting more than I hate Windows, so it would be an improvement.
Probably, but it also introduced a lot of edge cases for compatibility issues. I thought it was a reasonable approach, though it likely took more devs than just supporting the more tightly integrated VM environments.
I will say that Docker Desktop works MUCH more smoothly with WSL2 than the prior approach.
About the only issue I see now is about once a month I have to reboot because the magic localhost port mapping between docker, wsl-ubuntu and native host goes wonky. It's not so bad as long as I realize before diving into a rabbit hole to figure out why I can't connect to something.
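A quick sanity check before burning a reboot (port 8080 here is just a stand-in for whatever you're running):

# inside WSL: is the service itself still up?
curl -s http://localhost:8080/ > /dev/null && echo "reachable from WSL"
# then hit the same URL from a Windows shell; if that fails while the WSL-side
# check passes, it's the localhost relay that has gone wonky, not your service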
WSL1 was more or less the reverse-equivalent of WINE as I understood it: a translation layer for a subset of syscalls. That approach is awfully limited from the get-go. Not sure how using a VHDX or partition would solve that. We'd be back to square one, with things like containers being unusable again.
> Not sure how using a VHDX or partition would solve that
The problem with WSL1 is limited to disk IO performance, because the compatibility layer that's easy to do with system calls isn't so easy to do with the filesystem.
> We'd be back to square one, with things like containers being unusable again.
It's complementary. WSL2 when you play with containers, WSL1 when you don't.
Personally, I don't bother with containers for about 90% of the development cycle - they're just for testing before deployment to a genuine Linux box, not some VM.
Also, if you want to run servers with WSL (say, postgres), having containers is the last of your problems.
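And switching an existing distro between the two is quick on the Windows side (the distro name here is just an example):

wsl -l -v                    # list installed distros and which WSL version each runs
wsl --set-version Ubuntu 2   # convert a distro to WSL 2 (use 1 to go back)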
Ah, different use cases then. I pretty much go containers from A to Z for anything non-trivial, so WSL1 was more or less useless for me. IO is also a big problem: just running a simple git status on a bigger project is a PITA.
I wouldn't call having only a subset of system calls, no containerization, no background services, no proper init system, no kernel modules, "not all that limited". But maybe that's just my use case.
I've never hit a system call I wanted but wasn't available on WSL. In fact the couple times I've been disappointed at the lack of a system call it was when I was booting linux.
No containerization: Agreed, this is a significant limit.
Background services work just fine, I don't know what you're getting at. I have cron and stuff going.
What's improper about the init system? I start services, I stop services, it seems fine.
No kernel modules: I feel like this is pretty niche.
Found this guide a couple of days ago[1]. Looks awfully convoluted and impractical. Good if it works, but I personally draw my line of acceptable minimum functionality somewhere between "working" and "practical". This is a bit much, and has a couple of pretty bad gotchas, unfortunately.
WSL2 is backed by an actual virtual machine, so it comes with the limitations of its hypervisor (Hyper-V). Hyper-V doesn't support full USB passthrough. AFAIK it does support passing down USB volumes though.
You can already do some of this with wslu (out of the box on the ubuntu image). You can create desktop shortcuts to linux apps (and I think provide an icon, otherwise it'll try to guess), wslview proxies to your default browser on the Windows side, and I think there are a couple of other things.
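For example (flags from memory, so double-check wslu's docs; the URL and app are arbitrary):

wslview https://example.com   # opens the URL in your default Windows browser
wslusc -g -n "GIMP" gimp      # makes a Windows desktop shortcut for a GUI Linux app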
I'm glad we're using dedicated VPN boxes... kind of annoying that I can't use Wi-Fi directly, but I'm usually at my home desk and tethered in anyway. Didn't know about the VPN issues in practice.
Off-Topic: I'm setting up a new machine, and wondering whether I should use WSL 1 or 2 (or both?). Is there a clear path forward for these systems? WSL 1 has served me pretty well. Most of the inter-filesystem interaction amounts to using rsync to back up directories. Otherwise it's just sshing into servers. I use virtualbox for off-line development.
> Off-Topic: I'm setting up a new machine, and wondering whether I should use WSL 1 or 2 (or both?). Is there a clear path forward for these systems? WSL 1 has served me pretty well. Most of the inter-filesystem interaction amounts to using rsync to back up directories. Otherwise it's just sshing into servers. I use virtualbox for off-line development.
Well, except for the timekeeping problem (there are workarounds, but they're annoying) and the significant networking issues (no bidirectional port forwarding, so "localhost:8080" doesn't work with the server in WSL, and DISPLAY=:0.0 doesn't work with the server on Windows).
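The workaround I've seen for the clock drift after a Windows sleep/resume is at least a one-liner (annoying mostly because you have to remember to run it):

sudo hwclock -s   # re-sync the WSL 2 clock from the hardware clock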
Depends on the nature of the VPN. I can connect fine through my work OpenVPN, but it's not a full tunnel, just pushes a couple routes and split DNS, both of which are accessible on the WSL2 VM.
Maybe I did something wrong, but for me getting a local X server running, with X forwarding from WSL1, was dead easy and something I used on a day-to-day basis.
Getting the same working with WSL2 was a pain in the ass, and I eventually gave up.
So if you like X forwarding... I personally would recommend sticking to WSL1, unless that gives you other issues.
Is "X-forwarding" being able to see and interact with linux GUI apps running inside WSL on Windows? I do this to run Cypress interactively within WSL2.
I have no idea what I did (I just blindly followed some guide on a no-name blog), but I got it all working fairly effortlessly. The only slightly annoying thing is having to start the server on Windows if I restart my PC, but otherwise it was pretty easy.
VcXsrv and a bashrc to set DISPLAY= automatically did it for me
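# grabs the default gateway, which from inside WSL 2 is the Windows host's address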
export DISPLAY=$(route -n | grep UG | head -n1 | awk '{print $2}'):0
and on the Windows side, start VcXsrv with the "Disable Access Control" checkbox checked. This will enable X clients from other hosts (WSL is another host, basically) to connect to the server without too much hassle. That's it.
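If you'd rather skip the XLaunch dialog, the same thing can be scripted through WSL's Windows interop - a rough sketch, where the install path is a guess and -ac is what the "Disable Access Control" box sets (see the caveat in the next reply):

"/mnt/c/Program Files/VcXsrv/vcxsrv.exe" :0 -multiwindow -clipboard -ac &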
This was terrible advice back in the 90s and still is. An attacker anywhere on the network can listen to your input devices, insert fake keypresses, view your screen, etc.
Basically a total system compromise.
All without log entries.
The correct way to do this is to use MIT-MAGIC-COOKIE.
On the client host:
$ xauth nextract - $DISPLAY
Then copy the output.
On the host running the X server:
$ xauth nmerge -
Then paste the output copied previously, and press Enter and Ctrl-D (EOF).
If you then check the output of "xhost" on the client, it should only say "Access control enabled."
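(Where the two hosts can reach each other over ssh, the copy/paste steps collapse into one pipe - the hostname is a placeholder:)

$ xauth nextract - $DISPLAY | ssh x-server-host xauth nmerge -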
A JS-based exploit that hijacks the X Window System core protocol running on your localhost to inject key presses into your X server or steal screenshots? I mean, it's possible, but it seems quite far-fetched, except maybe if someone specifically targets you.
If you're talking about the general concept of using JS to spoof another protocol: that exploit involves middleboxes sniffing TCP connections at the packet level rather than at the connection/stream level. It certainly won't work against a connection terminating at a real TCP server.
If you're talking about using that exploit to allow access to the victim's machine from the internet: that won't work either, because the listening interface for the X11 server is localhost, not the LAN interface.
Thank you for posting this. I tried many times to google how to do this and saw the magic cookie explanation, but never found a good clear description of what was happening. Much appreciated!
Putting it in that directory sets VcXsrv to start when Windows boots up. You would just replace "Nick" with your Windows user name and put it in that path. Then you never have to worry about starting VcXsrv ever again.
Also if anyone is using both WSL 2 and WSL 1, you need to set different values for DISPLAY depending on which version you use.
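Something like this in ~/.bashrc handles both - a sketch; the WSL 2 detection via the kernel string is a heuristic I'd double-check on your build:

# pick DISPLAY based on which WSL version this shell is running under
if uname -r | grep -qi "microsoft-standard"; then
    # WSL 2: the X server runs on the Windows host, reachable at the default gateway
    export DISPLAY=$(ip route show default | awk '{print $3}'):0
else
    # WSL 1: shares the network stack with Windows, so localhost works
    export DISPLAY=localhost:0
fi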
It's not slow for me. Maybe you have something else running that's interfering with WSL (Defender, etc.)?
I've been using the VcXsrv set up for years (in WSL 1 and now WSL 2). It runs native GUI Linux apps really well. It's how I ran Sublime Text in WSL for a long time before I eventually switched to a different editor. I even ran i3 in it for a bit but gave up on that because it didn't work well with dual monitors. It was smooth as butter tho.
I genuinely don't understand the point of Windows as it exists anymore.
* Games are developed on cross platform engines. I can't remember the last game I played that was Windows-exclusive.
* Business apps work in the browser or Electron, or are otherwise cross-platform. Things that aren't could work with emulation.
* Developers are using unix-like operating systems, and MS has leaned hard into that. Windows is incongruent with the rest of their developer strategy.
* The server ecosystem is almost entirely Linux based, apparently even on Azure
* By all appearances, Windows is now a service that upsells other services, and there won't be a Windows 11 as we would traditionally think of it
* The people who know what an OS is, prefer unix-like OSes by a vast margin. The people who don't, wouldn't notice the difference.
* x86 looks even less like the future than it did a month ago
I think you may be very much underestimating the number of line-of-business apps built as Windows GUIs over the last 30 years - not to mention server software running on IIS and the like.
I also think you're underestimating the number of developers who know Visual Studio and Windows and nothing else (many of whom are the so-called "dark matter" devs).
On top of this, the NT kernel is actually pretty decent - despite the horror of the Win32 API. Personally, I don't want to touch it though.
> On top of this, the NT kernel is actually pretty decent
A lot of people who were not in computing back then do not realize that Microsoft picked up a lot of great people due to market forces: they got a lot of talent from DEC, including David Cutler, plus the cream of the crop from the OS/2 team. This was the team that designed and built the original NT kernel, and it was by and large the best parts of VMS and OS/2, coupled with what those individuals had learned building those other systems. I agree with you on the Win32 API, and it is a shame that the NT kernel gets dinged for it.
The value of Windows is to run three decades of legacy Windows applications (and four decades of MS-DOS ones). Being able to run those still represents actual (but declining) value.
Also the value of Intel is all that legacy software. Without the legacy software, Intel could be just another ARM or RISC V competitor.
Yeah - I'm not sure what GP is referring to; most new cross-platform games that come out are Xbox/PS/PC, and very rarely will you see a Linux or OS X release.
Microsoft Office is much better than the competition (including both LibreOffice and web apps like Google Docs). It's a great shame to say, but if I'm really honest with myself it's clear reality. Using Windows sucks, but Office is seriously so much better than everything else that it's worth it.
> * Games are developed on cross platform engines. I can't remember the last game I played that was Windows-exclusive.
To some, that's not a relevant fact. Consider someone like me who plays only one game that's only for Windows and consoles. It's never been released on Linux and I had a terrible experience playing under Wine.
I got a KVM switch to play it on my secondary PC. I think there are a lot of people out there for whom one game or one app doesn't work or only works poorly on Linux, and they keep a Windows PC or dual boot just for that.
This is me as well, for League of Legends. It's the one game I and most of my friends play exclusively, and it's how I keep in touch with the ones who live far away. I'll have a Windows box around until League runs on Linux or dies, and not a day more or less.
Why is there Pepsi?
Coca-Cola Classic has been the world's best-selling soft drink for over a decade, and probably longer. People who want a cola-like drink prefer Coke - or even soft drinks other than cola derivatives. They sell the most by far.
Why does Dr Pepper exist? Or Club-Mate? Or Tøyen Cola?
There is a concept called competition. People believe it drives innovation. It is also nice to have variety, and to be able to pick the thing you like best from a selection.
A single operating system lineage (PDP-11 onwards) dominating the entire world is a bleak dystopia.
Windows NT was light-years ahead of the Linux kernel when it came out. Async everything from the start. Better protection for hardware. Easily portable: x86, PowerPC, Alpha, MIPS. Able to run Win32, OS/2, and a POSIX layer that could be extended into other adventures. It was also the most modern operating system in wide use. Sadly, it has retrograded over time.
Windows NT, Linux, and BSD are all unsuited for the world we live in now. I can't wait until some sharp minds are able to create the next-generation operating system. It is taking far too long.
I loved the time of competition and variety: Atari TOS/GEM, MS-DOS, VAX, OS/2, OS/400, Sintran, Mac OS (System 1.0 onwards), AmigaOS, Windows NT, many flavours of UNIX, Sinclair BASIC. Those are just the alternative operating systems I have used myself over a lifetime. I want far more flavours, ideas, and alternatives out there, not fewer.
If the question is "is there something technical preventing Microsoft from pivoting to a *nix kernel instead of NT" the answer is "no, and there hasn't been for at least 20 years - if ever".
The switch from DOS-based Windows to NT took place over many years: development started in 1989, the first release was in 1993, and the switch only really completed in 2001 when Windows XP was released, the first NT release targeted at consumers. That's well over a decade. Microsoft bet a huge amount of resources on NT because they could see that DOS was never going to work in the long term, especially on servers. There's no reason for them to spend that much time and those resources on a switch to a *nix kernel when there's (currently) nothing like that at stake with the NT kernel.
Longhorn was the codename for Vista, which just used the same NT kernel as before (with some feature improvements, just like in every new Windows release).
Actually there were a couple of other reboots, though they stayed on the same NT kernel model: one was the kernel refactoring around the Windows 7 release, and during Windows 10 it gained a split personality of sorts, with the secure kernel, driver guard, and the ability to run containers directly on top of Hyper-V.
Weird project description. Is it an X11 server for Windows? If so, why doesn't it say that first (it only mentions further down that the software includes some advanced X11 server)?
It includes binaries for VcXsrv[1], which is an X11 server for Windows. This project itself seems to be glue code that helps you set up and manage VcXsrv and its connection with your WSL instances.
This tech is pretty cool and I used to run virtual machines and WSL, but now I just keep separate computers for separate OSes and my life is so much simpler because I never have to worry if my Windows host or VM infrastructure is causing extra complexity. If something's not working, I have exactly only one OS to worry about fixing.
Now, not only is my configuration and maintenance simpler but everything runs faster because all my operating systems are running on dedicated hardware.
And before anyone says "it's too expensive", it's really not terribly expensive because I get refurbed machines like this [0]. If I want a laptop I get something like this [1] and it runs Linux (or Windows of course) just fine.
The only expensive machine I ever buy is a Macbook, but even those I'll buy refurbished. The last Macbook Pro I bought is from mid-2015 and I got it for only $1049 (in 2017) with 6 months of AppleCare+ still attached to it, which got me a new battery for free from Apple.
Just ran into this yesterday. What a great solution! I now have WSL2 running graphical Linux apps, and got a nested VM running with Mojave so I can run macOS with macOS-Simple-KVM. I can finally cross-compile in a way no one should.
Not awful, though the lack of USB passthrough makes for a challenge. You need to make sure to run your QEMU with at least 8G of RAM or it is pretty bad. Good enough to build with anyway.
I tried this app. It was okay, but it did some weird things to my bashrc, so it was banished.
I found the core X server it used, but was unhappy with not being able to full-screen windows on the fly.
So after figuring out the Frankenstein build system, I was able to modify the Windows X server to allow Alt-Enter to switch a window to and from full screen (no title bar, drawn over the taskbar).
This one change was huge. And now GUI Emacs works perfectly in Windows, backed by WSL.
If anybody wants, I can share the patch or, uh, because of the Windows and Frankenstein build-system requirements, the binary.
I had no problem running XFCE desktops under WSL and WSL2 using xrdp and the standard Windows Remote Desktop client. Still, the feature set, GUI, and overall experience under VMware feel way better to me. Also, the ability to test with systemd is a plus.
I'm running X applications like the Gnome terminal on Windows via VcXsrv, works pretty flawlessly on my 4k Surface Pro by just setting the DISPLAY environment variable and tweaking GDK_DPI_SCALE.
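Roughly what that amounts to, if anyone wants to replicate it (the scale factor is a guess for a 4k panel, and under WSL 2 you'd need the gateway-IP trick from elsewhere in the thread instead of localhost):

export DISPLAY=localhost:0   # WSL 1; WSL 2 needs the Windows host's IP instead
export GDK_DPI_SCALE=2       # scales GTK apps up for a HiDPI screen
gnome-terminal &             # any GTK app picks both up from the environment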
Anyone have experience with this? The only thing I miss from a Linux desktop is https://sw.kovidgoyal.net/kitty/ but it requires direct access to the GPU, as I understand it. Anyone tried kitty with this?
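One way to see what kitty would actually get under an X server like this (assumes mesa-utils is installed in WSL):

glxinfo -B | grep -E "renderer|OpenGL version"
# under VcXsrv this typically reports llvmpipe (software rendering),
# unless native OpenGL (the -wgl option) happens to work for your setup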
Would this be less resource-hungry than Citrix remote desktop? I have to switch back and forth between native Windows and Citrix, and the latter stutters and generates heavy artifacts around text.
I've been running apps via xpra instead of X forwarding, and audio works fine there. For silly reasons I'm doing it in Docker, but there's no reason to do that part if you don't need the ephemeral containers.
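A minimal sketch of the xpra route, minus the Docker part (the display number, port, and app are arbitrary; assumes xpra inside WSL and the xpra client installed on Windows):

# inside WSL: run an app under xpra on virtual display :100
xpra start :100 --start=firefox --bind-tcp=0.0.0.0:14500
# on the Windows side, attach the xpra client to tcp://<WSL address>:14500;
# sound is forwarded over the same connection, which is why audio just works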
Windows lets you run Linux apps at the same speed as bare-metal Linux, but with good drivers for things like video cards, Wi-Fi, etc. Windows also gives you access to a more diverse selection of its own apps so you can get your day job done. As for Macs, you get nearly the same benefits, but you also get access to the best hardware money can buy, from the CPU on up. I'm sure Linux will be ported to Apple Silicon eventually, but one look at the history of Linux even on x86 laptops shows how it will suck, with half-finished drivers, shitty UI, terrible touchpad semantics, etc. Linux will always be a third-rate OS for desktops.
Yikes, do you mean Windows Subsystem for Linux? I really appreciate MS folks putting in the effort to try something new, but WSL is close to worthless. I tried it a while back and could not get basic services to work. And why would you want Windows OS around if you just want to run Linux?
All significant open source development today is done by folks who are not "scratching an itch" but cashing a paycheck, and the projects they release are things that would not have made them money anyway. The actual valuable stuff the trillion-dollar companies of today do is still overwhelmingly closed because secrecy is still a better way to engineer products that sell.
First, I really do appreciate what you are trying to say - that proprietary software is where the money is. It also could be that, most of the time, there really isn't a reason to share source code. I've built lots of software for companies large and small. Most of that software is for automating some very specific business process, or making some machine configuration work, that no one else in the known universe would ever want anyway. The only developers to whom that code has value will work at or for that company. It's not that it is secret; it's just a snowflake. In many cases the budget was so small that the application wasn't tooled to run anywhere but on that customer's existing infrastructure, so it would take more time to make that code open-source ready. BTW, open source code generally (but not always) is of much higher quality than what I see in most proprietary software, because it is going to be reviewed by lots of others and there is often an expectation that the code work on multiple platforms.
Also, when you use absolutes, like "All significant open source development is done by folks ... cashing a paycheck", you are guaranteed to be wrong. It is also disingenuous to the people who actually are out there scratching an itch... and to all those developers who started by scratching an itch and then found customers or companies to pay them to keep scratching it.
Linux took off because SCO was trying to get $500 per CPU for a Unix license on PCs that were selling for $1000 or in some cases less. Windows NT at the time was also priced similarly to SCO Unix. When Windows XP Pro came out at $250, commercial Unix was twice as expensive.