GPU Video acceleration in the Windows Subsystem for Linux now available (microsoft.com)
213 points by Fudgel on Feb 13, 2023 | hide | past | favorite | 177 comments



Smells like another piece of their usual Embrace, Extend, Extinguish tactics. It's sad seeing people naive enough to buy everything these tech feudalists say.


That's one of my big problems with WSL, and Microsoft's contributions to open source projects like Mesa. Their Dozen driver implements Vulkan on top of D3D12, a proprietary API only available officially on Windows. This is useless for anybody not using their proprietary software.

In my opinion, it should never have been accepted without an officially supported open source backend. I don't care if it's in the form of a Linux library, BSD, whatever, if it only adds value to your proprietary product, why should the open source community be shouldered with the burden of maintenance? Especially when _using_ said functionality locks users to your product.

Why didn't they use Vulkan for a backend, which is portable, well-supported, and open, and use and contribute to existing high-quality drivers on top of it? There's only one reasonable answer: they're not interested in portability, working with the community, or omni-win-win scenarios.


I was perfectly happy using mingw with gcc in my build environment to create nifty little graphical exes with raylib. Now, Windows Defender likes to flag anything built with mingw as a virus, and the official answer is "Just use WSL!". This isn't a problem for me personally, as I trust my own software, but it's absolutely unacceptable if I wanted to ship something on itch.io or Steam or the like. I suppose I've been extinguished for now, and will probably just start building to html5/webassembly, but I'm 110% onto their tricks.


Thank goodness. Linux is terrible and it needs a fresh start. Maybe if WSL kills it we can start over fresh.


Microsoft - if you are reading this, I love this - but if you could throw Ext4 support into Windows, so that I can use Linux-formatted drives natively, that would be truly next-level. I know you can do it through WSL, but that's kind of a hack compared to native support in Windows Explorer.


You can compile your own kernel to use with WSL2. See the Config file for how you can specify the path to your own kernel and the command line to pass to it:

https://learn.microsoft.com/en-us/windows/wsl/wsl-config

Here is adding ZFS support:

https://wsl.dev/wsl2-kernel-zfs/
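For reference, pointing WSL2 at a custom kernel is just a couple of lines in %UserProfile%\.wslconfig (the path and command line below are hypothetical examples):

    [wsl2]
    # custom-built bzImage on the Windows side; backslashes must be escaped
    kernel=C:\\Users\\me\\kernels\\bzImage-zfs
    # extra boot parameters passed to that kernel
    kernelCommandLine=cgroup_no_v1=all

A `wsl --shutdown` is needed before the new kernel gets picked up.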


I think the GP's ask is for native Windows support, which is orthogonal to anything WSL/WSL2 can do.


Native Windows support would conflict with the WSL2 stack at its integration level. File system drivers generally aren't designed to share the underlying block device between different kernels concurrently. The block device isn't consistent with the latest writes until caches are flushed, and a native driver couldn't see those caches without hooking the Linux kernel - but that isn't WSL2's integration point. WSL2 is based on virtualization, not kernel hooking.

A native Windows driver would be able to access the file system as long as you killed all WSL2 processes first, and vice versa: run any WSL2 process and you lose native access.

Access via the network makes far more sense.


The WSL2 kernel already has Ext4 support.


Mount in wsl, access with explorer via \\wsl$, done
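For anyone who hasn't tried it, the flow is roughly this (drive number and distro name are hypothetical):

    # from an elevated Windows prompt: attach a disk and mount a partition as ext4
    wsl --mount \\.\PHYSICALDRIVE2 --partition 1 --type ext4
    # it shows up inside the distro under /mnt/wsl, and in Explorer at:
    #   \\wsl$\Ubuntu\mnt\wsl\PHYSICALDRIVE2p1
    # detach when done
    wsl --unmount \\.\PHYSICALDRIVE2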


ext4 I don't know, but if you're willing to go with btrfs: https://github.com/hbs7/btrfs-windows

Note that I haven't tried it, but it looks amazing. There's even some mad lad who uses it to boot Windows.


Honestly, I don't see a single reason why this would be interesting for them: no target audience, no performance or cost gains, only spending. It's not even an accessibility feature to help disabled people.

What kind of reasons do you think might convince them it's worth trying?


Use Paragon's software


My problem with WSL is the performance. A real Linux install is a lot faster; even Linux in VirtualBox works a lot better. Maybe it only happens to me.


Use the native Linux file system instead of /mnt/c and it's basically native, as it should be, since it's a Hyper-V VM under the hood.


Meaning, mount a physical disk?


Nonono. Use the Linux home (cd ~) instead of the Windows one in the simplest case.


It's just you. WSL2 is a real Linux; there's almost zero performance hit. (If you access the Windows filesystem then that's slow, of course.)
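A quick way to see the gap for yourself - numbers vary wildly by machine, and the kernel tarball here is just a hypothetical small-file workload:

    # inside WSL2: unpack a source tree onto the native ext4 home...
    time tar xf ~/linux-6.1.tar.xz -C ~/src
    # ...then onto the 9P-mounted Windows drive for comparison
    time tar xf ~/linux-6.1.tar.xz -C /mnt/c/src

Small-file workloads like this are where /mnt/c hurts most; large sequential reads are much closer to native.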


Curious, though I feel like when I develop using an IDE like IntelliJ, there's some sluggishness when the project files are on the WSL file system. Is it because of the mixing between the Windows side, where the IDE is installed, and the WSL one?


Yes, WSL2 bridges the two filesystems over a 9P file server (similar in spirit to SMB), compared to WSL1's "real" syscall translation.


I use WSL2 on a daily basis and I love it, but the console output is slow.

It's mostly due to Windows Terminal having performance issues. But it does have a big impact on any script that prints a lot.


Use wsltty


Thanks for the recommendation, I'll try :)


Can't you just use an X server on WSL?


Fair point, I never worried about it to be honest.


Windows 10 controls VMware network settings which are behind UAC.

So if you want to run a Kali Linux Vmware guest and use Wireshark in promiscuous mode, you better check and reset the VMware network settings each time before running Kali Linux.

Windows changes these VMware network settings to what it wants!

It's the first time I have seen Windows change settings behind the UAC prompt to what it wants.


Are you referring to bridged adapter settings? It could just be a VMware limitation from when they reimplemented their Windows hypervisor to work with Windows Hypervisor Platform (i.e. mini Hyper-V), so they could work with Credential Guard / WSL2 enabled. There's a bunch of initialization work on every boot because of how the Hyper-V adapters are configured (which I hate), so VMware might be tearing adapter bridges/vSwitches down and remaking them each boot.

Or if you mean IP ranges and such then that shouldn’t happen in either mode, or at least it doesn’t seem to happen to any of us at the office on Workstation 16/16.2/17. (and VMware asking for elevation to change some adapter settings is a design choice by VMware)


> Are you referring to bridged adapter settings?

Yes.

> There’s a bunch of initialization work on every boot because of how the Hyper-V adapters are configured (which I hate) so VMware might be tearing adapter bridges/vSwitches down & remaking them each boot.

It resets every time the main Windows OS is rebooted, so it sounds like that's it.

Like I said, I've never seen settings behind UAC prompts being reset. I've seen settings reset by Windows updates, but never something like this on a day-to-day basis.


It’s just Hyper-V itself deleting and recreating switches & vNICs. It’s the same root cause as that issue where your Ethernet adapter gets a “(2)” or “(3)” tacked on, like in this (3rd) issue: https://learn.microsoft.com/en-us/answers/questions/326293/h...


Great to see the video progress. Hoping to see more with sound support soon as well. I’d love to be able to more easily work with libraries like OpenAI Whisper under WSL once issues with soundcards in WSL [1] are addressed.

[1] https://github.com/microsoft/WSL/issues/7327


For GPU accelerated ML in Win11 (PyTorch w/ CUDA) - is WSL enough nowadays, or still dual boot to Linux?


It works well enough for me to not worry about dual booting. Windows tends to claim some of the GPU memory - about 1.5GB on my 3090. If I do anything performance sensitive I’ll still do it in the cloud - but it has made my gaming/flight simulator PC massively more useful.

You can also set up a network so your WSL2 installation shows up on your network as another host. Which is nice. Docker and docker with CUDA are a breeze too. Overall, I’m pretty happy with WSL2, but realize there are some performance drawbacks - so if my machine wasn’t massively overspec’d, I’d be much more concerned.


Is there any official doc for the network/bridged setup? Wasn’t aware that was possible..


It was a haul, and I'm realizing now that I don't have it properly documented in my notes - a couple of years of experimenting with external USB network cards, virtual switches, and more.

It's gotten way easier now.

In your .wslconfig you'll want something like this:

    [wsl2]
    memory=48GB
    localhostforwarding=true
    networkingMode=bridged
    vmSwitch=WSL_external
The key things are "networkingMode=bridged" and "vmSwitch=WSL_external".

Then, open Hyper-V Manager on Windows, go to "Virtual Switch Manager" and create a new virtual switch using your primary ethernet adapter (I could never get this to work with my USB ethernet adapter). Check the boxes for "External Network" and "Allow management operating system to share this network adapter", and finally call it "WSL_external". That should do it for you.
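If you'd rather script it, the same switch can (I believe) be created from an elevated PowerShell instead of the Hyper-V Manager GUI - the adapter name "Ethernet" below is machine-specific:

    # list physical adapters to find the right name
    Get-NetAdapter -Physical
    # create an external vSwitch bound to it, shared with the host OS
    New-VMSwitch -Name "WSL_external" -NetAdapterName "Ethernet" -AllowManagementOS $true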

Incidentally, the port forwarding with WSL2 and Docker has made this not strictly necessary, but I still like being able to SSH directly into my WSL2 host, not having to manage Windows' firewall rules, and being able to forward UDP for things like Mosh.


I was able to get Stable Diffusion and other similar ML systems working in WSL2 on Win11. There might be performance differences between WSL and a native system. I haven't benchmarked AI in different configs, but my main use case is Rust+wgpu and there were noticeable performance differences because the GPU driver exposed by WSL didn't have as many features as the GPU driver used on Windows. I also had problems with other APIs on WSL such as Optix.


For my CUDA workload, WSL2 (or really, Hyper-V GPU-PV) is about 75% of native performance. That isn't too bad, but I still prefer running non-virtualized Linux for anything longer than overnight.

OTOH, running Linux on the metal and Windows in KVM just works better.


fwiw I have Stable Diffusion working in WSL2 on Windows 10


I've been using PyTorch + CUDA on native Windows without many problems for a few years now.


I’ve also been doing native windows without too much issue. I have run into difficulty setting up libraries like detectron though.

For the record, I loathe working in the windows terminal. But for people like me, conda is a lifesaver. The m2-base packages bring a pretty good number of tools into the terminal and significantly resurrect its usability.

I’m a little tempted to try wsl again with this improvement, but my last experience wasn’t great with it and I’ve gotten comfortable enough without it.


CUDA has worked in WSL since WSLg first shipped. I used it, as you mentioned, for a few PyTorch modules that were Linux-only. A slight pain to set up, but one time only.


Like the other commenters, I've been using PyTorch + CUDA on Windows, and it's been working fine using Miniconda to install most of the compiled bits.


Take but never give back, well played Microsoft.

So Windows users can run Ubuntu in your little VM, and that Ubuntu can use DirectX without any issues - but real Linux machines can't.

No, I don't think this is laziness but a well-planned step. We get it, you don't wanna lose your gaming market share. After all, you're the main player even with bloated and anti-consumer software like that, but know that Linux is inevitable.

Maybe not today, but the better software will win. No matter what your opinion is, FOSS is coming.


This is arguably similar to VMware Tools’ DirectX support, where it works best on a Windows host due to being able to just forward DX calls to the host (with some partitioning/address translation).

Microsoft’s DX backend for WSL could just be a wrapper around the host’s DX rather than a reimplementation. Making that work on any Linux machine would be impossible because it’s essentially just a shim.


Was there ever any doubt about MS's intentions regarding WSL?

MS is known for Embrace, Extend, Extinguish.

> "Embrace, extend, and extinguish" (EEE), also known as "embrace, extend, and exterminate", is a phrase that the U.S. Department of Justice found that was used internally by Microsoft to describe its strategy for entering product categories involving widely used standards, extending those standards with proprietary capabilities, and then using those differences in order to strongly disadvantage its competitors.


Funny enough, WSL is what pushed me towards Linux even more. To the point that I stopped dual booting altogether and switched to Linux permanently because of how horrible WSL is and how much time it wastes.

I honestly believe that people who stick to WSL and are afraid of using a real Linux distro are a little too afraid of trying something different from what they're used to. I used to be in a similar position, so I won't blame them but hey, at least I got to compare the software side by side and eventually decided which one's better by a thousand miles.


I'm sticking with Windows and WSL - I'm not looking into Linux-on-the-desktop land (nor FreeBSD on the desktop, of course). I have enough Linux on servers at work, and doing a sysadmin's job at home doesn't appeal to me. I like the robust, reliable, solid system Windows gives me. I have close to zero issues, no problems with updates, and basically every tool I _may_ need just works, which makes me confident in Windows as a tool for solving my tasks.


> doing sysadmin job at home is not appealing to me

Please tell me this is a joke. You have to have 0 knowledge about Linux Desktop to claim that Linux users are doing a sysadmin's job when they're using their systems. Calling Linux users sysadmin would be equivalent to calling Windows users Microsoft's QA team haha (ngl, they actually are).

We love Linux because it gets work done, for free, without the hiccups Windows often introduces, and without the bloated spyware-and-adware mess that is modern Windows.


If Linux is so great why can't I even do `dnf install candycrush`? It's already right there on Windows for me!


See!? Microsoft® Windows™ is the best! (Terms and conditions apply)


Keep loving it, I don't mind. It's great that it gets work done for you. Just speak for yourself.

Linux introduces more hiccups for me than Windows does.

Getting something for free is not viable in the long term; work and effort must be paid for. Projects without funding are not a good sign from my POV - they don't give product owners and project managers a strong base for planning.


> Having something for free is not viable in long term

They've been saying this for the last 30 years, FOSS still isn't slowing down.

Like I said, whether you like it or not, FOSS is coming for their pockets. No amount of candy crush in the start menu can save them.


This thread reminds me of the threads from the 2010s :)

I'm pretty happy with where the Linux desktop is right now. It's popular enough that big software advertises "Windows, macOS and Linux" support, but not big enough to get the "I've unplugged my mouse and now it doesn't work, how can I fix it" posts on the Internet.


> but not big enough to get the "I've unplugged my mouse and now it doesn't work, how can I fix it" posts on the Internet.

So Windows isn't big enough either yet I guess.


You're probably confusing FOSS with something that doesn't require people to be paid. Take, say, RHEL or Ubuntu Pro: still FOSS, but paid.

Not sure where you're getting this fanboy-style impression that I'm against FOSS.


Just curious, why do you use WSL if the Windows model answers your needs?


Running ssh with support of complex includes/jumphosts/wildcards in configs, in a way it can be shared with other teammates; ansible stuff; playing around with local prototyping in Docker (say implementation of Nginx configuration to serve webp files instead of jpeg files if browser supports it transparent to end user) and so on with common [also meant to be common across the team and dev/prod envs] tooling on top of it - like Vim, Tmux, direnv, asdf, list goes on.


I am similar, but I run Ubuntu in VMware instead of WSL


I have VMs for test purposes as well, but I still find WSL convenient for simple tasks [those not requiring the isolation and flexibility of VMs].


I wanted to give Windows/WSL another try, but unfortunately installing WSL on a clean install of Windows 11 gives a blue screen of death boot loop on my laptop. Seems to be related to this 3 year old bug: https://github.com/microsoft/WSL/issues/4784


The folks had it all wrong: it wasn't the Year of the Linux Desktop but the Year of the POSIX Desktop that most people cared about, as shown by their investing in Apple hardware.

Even if WSL runs a Linux kernel, macOS definitely doesn't, and both combined make OEMs selling proper GNU/Linux almost irrelevant.


Say that to the many developers suffering from crappy Docker support on macOS because it doesn't use the Linux kernel.


Developers don't need Docker for POSIX based development.

Those that do maybe should have supported Linux OEMs in the first place.


Great. Now I want GPU passthrough in a VM on Linux with an Nvidia GPU. I mean something you don't need a nuclear physics degree to set up.


virgl/video will soon be available in Mesa for linux kvm too [1].

[1] https://www.phoronix.com/news/Virgl-Encode-H264-H265


Remember, WSL is pronounced "weasel".


Actually it's pronounced "double-yu ess ell" by most people I know.


I'd prefer MS to stop pushing DX12 NIH for a change and start using Vulkan.


NIH? Vulkan was released nearly a year after DX12. And DX12 follows a long line of DirectX versions.

Also... OpenGL has kind of always sucked compared to DirectX. There is no reason for Microsoft to abandon 25+ years of support just because we finally have a competitive open alternative to Direct3D.

And on that note, you are just complaining about Direct3D. There are a ton of DirectX APIs for things that aren't graphics, like sound (XAudio), raytracing (DXR), storage (DirectStorage), machine learning (DirectML), and many more. Vulkan doesn't really attempt most of these other DirectX features. So... you're asking Microsoft to abandon Direct3D for Vulkan, and have everyone use a Vulkan/DirectX hybrid, which sounds awful.


> Vulkan was released nearly a year after DX12.

True, but Vulkan is largely a continuation of Mantle which released two years before DX12.


Is 3D rendering and audio output really so coupled together that you can't just swap out Direct3D for Vulkan and keep using XAudio? It sounds awful if that's the design in the first place. For what it's worth, many AAA games do exactly that...


I debunked this "year after" argument in the comment above.


You are aware of the history of DX right?

Their SDK is old and really good.

Just because Vulkan exists doesn't mean DX is invalid.

Good to have more than one thing. Innovation and stuff


Arguably, without the OpenGL vs D3D and now Vulkan vs D3D back-and-forths, along with experimental APIs like Mantle, we wouldn't have a lot of the robust tech we have access to today.

OpenGL's freeform experimentation and evolution with extensions let people test things out in production environments to figure out what worked, while D3D's stable feature set meant that games and productivity software could - if it made sense for the developer - choose to ship a more limited feature set that worked everywhere, all of the time.

D3D also has consistently offered great debugging tools and a robust reference rasterizer, things you simply can't get in an OpenGL environment. As a game developer it's invaluable to be able to swap over to a Direct3D backend for debugging even if you end up using OpenGL as your default. (These days, Vulkan has first-class debugging support too, which is great.)

Now we have Vulkan as the new home for experimentation and it has great debugging and validation layers, while D3D pushes forward on certain new features and provides a more consistent baseline on Windows desktops. For console games as well, you can use Vulkan on (AFAIK) Nintendo Switch, while using D3D12 on Xbox, so each API is providing value for console game devs as well.


> OpenGL's freeform experimentation and evolution with extensions let people test things out in production environments to figure out what worked

Yeah, except nobody did this. The cutting-edge features came to Direct3D first because Microsoft had early access to what the GPU vendors were doing and could plan out what that would look like from an API perspective. The vendor extensions only came out around the same time as the DirectX support, maybe later. Core OpenGL support would only emerge years later.

> As a game developer it's invaluable to be able to swap over to a Direct3D backend for debugging even if you end up using OpenGL as your default.

What? Name a game developer who's done this in the past 20 years -- use Direct3D for debugging and OpenGL as the default. Only Id Software actually used OpenGL and Vulkan seriously, and they are owned by Microsoft now so that will change.


> Name a game developer who's done this in the past 20 years

I'd imagine it was really popular among console developers, who would usually develop two versions of their game anyways (a DirectX one for Xbox/PC and an agnostic one for Mac/Playstation/other). DirectX is traditionally considered easier to use (and comes with PC tooling) so I could see how people would prefer it for debugging.


Contrary to urban myths OpenGL was never a thing in most game consoles.

Sony only supported it on the PS2 and quickly moved to LibGNM(X) as almost no one cared; Nintendo had an OpenGL-like API on the Wii; and while the Switch supports GL 4.6/Vulkan, its main API is NVN.


IIRC there was an OpenGL layer for PS3, but it was terrible and I only know of one (indie) dev who shipped a full-blown game on it. It was an awful experience from my understanding.


Before Vulkan, if you wanted to do graphics debugging like step-through on pixel shaders, your options were Direct3D + PIX or, if you had access to them, game console dev tools. Maybe Apple had something?

Obviously, lots of games only had an OpenGL backend. So those devs simply didn't have access to those tools. It's the one outlier API with bad tooling.


See Instruments for macOS, and tooling from GPU vendors.

The main problem with Khronos APIs has always been "go fishing" attitude, Vulkan isn't much better, even with the LunarG SDK.


Isn't DX 11 the same as OpenGL 1.1 whereas DX12 is OpenGL 1.2?

Edit, I might be mixing up OpenCL 1.1/1.2


DX11 is like modern OpenGL, DX12 like Vulkan.

(OpenGL 1.x/2.x is ancient, and comparable to DX8/9. All still used by old software, though.)


I still recall oh so many OpenGL apps failed to start when OpenGL 2.0 was released, as 1.x had been around for so long "everyone" had gotten used to just checking the minor version number.


I'm aware of the history of DX12. It started as a clone of Mantle, which they made with full knowledge that Vulkan was going to be a thing. No one stopped them from collaborating on Vulkan instead, which originated from Mantle the same way. I see no excuse here but the classic NIH / lock-in push.

For reference: https://twitter.com/renderpipeline/status/581086347450007553


D3D12 is nothing like Mantle. Not even close. D3D12 is heavily derived from D3D11; D3D12's own documentation is even specified as 'behaves like D3D11 except for these bits'.

If anything, Vulkan is a clone of Mantle because Vulkan is Mantle. It was donated to the Khronos Group by AMD and served as the foundation for Vulkan. If you have both API headers for Vulkan and Mantle side by side it's shocking how similar they are. Vulkan 1.0 is largely just Mantle with more API ceremony for tile-based mobile GPUs and NVidia's (at the time) far more restrictive binding model.


It was derived directly from Mantle, same as Vulkan. See the link above, which literally records that historical fact.

The same way Mantle was used for Vulkan by Khronos, MS used it for their NIH because they didn't want a collaborative effort to reduce their grip on the gaming market. Without AMD, MS would never have come up with DX12 on their own so fast.

AMD expressed the interest in collaborative API quite early on and Mantle was presented for that very purpose. Khronos used that as intended, while MS hijacked that for their own market manipulation purposes in their usual MS only way.


The linked tweet only really shows that some of their documentation language was 'borrowed'. The actual API semantics are nothing like Mantle or Vulkan.

The synchronization model is completely different, the queue system is completely different, the memory allocation model is completely different. The binding models are massively different. Maybe some of the core ideas behind Mantle were taken with explicit synchronization, and maybe they started from Mantle as a base but the end result is completely alien to what Mantle was.

Microsoft could never justify using Vulkan over DirectX 'Next' to its developers. It would be a total deprecation of Direct3D. It would require all their developers to throw their entire renderer backends out and start fresh with 100% new tools. A lot of effort was put into making the transition from D3D11 to D3D12 easy, even to (IMO) the detriment of the API semantics. They even kept shaders intercompatible while adopting an enormously different binding model.

D3D12 is also largely a much friendlier API to use. Vulkan is (was) verbose to the extreme in ways that really didn't matter to the majority of Direct3D users on desktop GPUs. Vulkan render passes are a nightmare to work with, and largely served no benefit to the desktop and console GPUs Direct3D is used for.

Vulkan has evolved a lot since 1.0, relaxing lots of the excessive restrictions that were originally there. A lot of this likely wouldn't have happened if it weren't for D3D12 putting pressure on Khronos to improve the developer experience.

Vulkan is not the panacea people think it is, but it's getting better. And so is D3D12 by borrowing some of Vulkan's better ideas. To say D3D12 should have never existed is just 'M$ bad' dogma.


I haven't looked at DirectX 12's documentation, but I really should. Khronos Group's Vulkan Samples are broken out of the box, crash on a fresh build, and the Hello Triangle API sample crashes when you minimize it. It's just a shitshow.

Alexander Overvoorde's Vulkan Tutorial is in my opinion, the de facto practical documentation for a first pass Vulkan implementation, but he also does some small things wrong that you just should not do in a production application.

It took me over 700 lines of C for a minimal replica.[1]

I'm not a fan of Vulkan not having a built-in compiler for shaders compared to OpenGL. I'm sure there's a superior technical reason for it, but I don't care because it's ruined my development experience, and reintroducing the behavior requires a sizable increase in CMake dependency overhead.

https://github.com/Planimeter/game-engine-3d/blob/main/src/g...


> I'm not a fan of Vulkan not having a built-in compiler for shaders compared to OpenGL.

What does that mean? The standard doesn't dictate any kind of form of implementation of the compiler. Different OpenGL drivers can use different compilers. Same for Vulkan.


I mean that there is no vkCompileShader equivalent to OpenGL's glCompileShader. You must bring your own precompiled .spv.

This is what you'd most likely want in production anyway, but for my purposes, it's a missed API feature.


Well, it still has to be compiled into machine code later. But how hard is it to compile GLSL into SPIR-V using an external compiler?
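For what it's worth, that external step is a one-liner with either of the standard tools (file names here are hypothetical):

    # glslang reference compiler: -V targets Vulkan semantics
    glslangValidator -V triangle.vert -o triangle.vert.spv
    # or shaderc's glslc front end, with a clang-like interface
    glslc triangle.frag -o triangle.frag.spv

The resulting .spv bytes are what you hand to vkCreateShaderModule.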


Semantics drifted over time. The main input was still Mantle, not a bit of doubt about it. That documentation shows it was borrowed practically verbatim in the early form.

Vulkan also didn't keep Mantle as is in every aspect when it used it.

Point is, MS didn't need to build things from scratch. They used Mantle as the starting point, same way Khronos did.

So the question one should be asking is why MS didn't collaborate when they knew very well the collaborative API was going to happen. No one stopped them from making that collaborative API friendlier or better. But MS being MS, they rushed out their NIH version.


I suppose it could be seen as NIH, but there are legitimate engineering benefits for Microsoft with its own API. It's important to remember that Direct3D used to be a completely separable component from Windows, and wasn't installed by default. Around the same time D3D12 and Vulkan were happening Windows also pulled D3D in as a core system component that will be universally available.

If you're an engineer trying to pick "the" GPU API for Windows, the only real argument for picking Vulkan, the open API, over the internal API Microsoft had already had for ~15 years by that point is that it's the 'open source friendly' choice. How do you justify using someone else's API as a core Windows API over your own solution that you already had? There's really no engineering or business justification.

It's similarly easy to construe Vulkan and Mantle as an attempt by AMD to take control of the GPU API space and specify an API particularly friendly to their hardware as the standard. They largely even succeeded considering what Vulkan became. Even D3D12's binding model is basically an exercise in how close we can get to directly exposing AMD's "anything goes" binding model while still allowing NVidia to function. It's very nice as a GPU vendor when your driver can be made closer to a no-op than your competitors.

Too many people pile on D3D12 simply because of Microsoft, rather than fairly considering the context of what created it. Apple made the same decision too with Metal, but I rarely hear any complaints there.


Engineering reasons just don't seem convincing when MS has a DX-only policy on Xbox and a long history of anti-competitive behavior, especially in the gaming segment.

If MS allowed Vulkan on Xbox, for instance, I would be more willing to give them the benefit of the doubt. But as it stands, I see their pushing of DX as having lock-in motives.


> […] when MS has DX only policy on Xbox […]

PlayStation only supports Sony’s own, proprietary Gnm/Gnmx[1]. I heard their APIs are somewhat based on (very old) OpenGL, but different enough to not actually be compatible.

Nintendo Switch supporting OpenGL or Vulkan is an exception in the console space.

1: https://en.wikipedia.org/wiki/PlayStation_4_system_software#...


Very few Switch games use Vulkan. NVN, Nvidia's API, is the true native graphics API on the Switch. Those that have tried Vulkan on Switch usually end up ditching it for NVN, as Vulkan leaves too much performance on the table on a system with little power to spare.


Sure, I'm not saying Sony is any better in this regard. They copy the worst ideas from each other.


Right, but my point is that this isn’t some famous malicious intentional lock-in by “evil Microsoft” — it’s how every console is designed. (And arguably Xbox has an advantage here because it shares DirectX with Windows)

There’s no evil MS lock-in here.


Not really. It has nothing to do with the console idea or form factor; it's just how these messed-up companies are "designed", using anti-competitive methods. See the Steam Deck, which uses Vulkan just fine.

There is absolutely lock-in in MS's and Sony's approaches. There is no inherent need for it just because it's a console.


Of course they want to control the ecosystem the same as every one else.

There is a reason NES games had a certificate from Nintendo.

And yes most developers do not care about it anyway if it's good.


That's not an excuse for lock-in. But they indeed always tried to do that. ActiveX, browser wars, etc.


What's stopping Intel, AMD and NVIDIA from offering native Vulkan support in WSL, then?

This 'NIH' (which, as another commenter already explained, isn't really NIH) offers graphics acceleration to WSL guests based on existing Windows drivers, compensating for the fact that the GPU vendors aren't already offering acceleration, Vulkan or otherwise, for WSL guests.

I'm actually not sure how I would get stable acceleration, Vulkan or otherwise, in a Linux guest in any VM. In my experience, acceleration in VMware and VirtualBox is a crapshoot to the point of not being worth using.


How about VirGL with QEMU/KVM? It's also not perfect but might work for you.
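In case it helps anyone reading along: a minimal QEMU launch with VirGL looks roughly like the sketch below. The disk image name is a placeholder, and exact option names vary between QEMU versions (older releases use `-vga virtio` instead of `-device virtio-vga-gl`).

```shell
# Boot an existing Linux guest with VirGL-accelerated OpenGL.
# Needs working OpenGL drivers on the host; the guest side is handled
# by Mesa's virgl driver, shipped by most mainstream distros.
qemu-system-x86_64 \
  -enable-kvm -m 4G -smp 4 -cpu host \
  -device virtio-vga-gl \
  -display gtk,gl=on \
  -drive file=guest.qcow2,if=virtio
```

Inside the guest, `glxinfo -B` reporting a "virgl" renderer confirms the acceleration is actually in use.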


Will give it a try the next time I need to test accelerated graphics on Linux, thanks a lot! Never heard of it before.


For 25 years now the standard for 3D graphics has been DirectX. Why would you expect Microsoft, of all companies, to support anything else?


Vkd3d is only going to get better, and it's pretty darn good right now. I'm okay with Microsoft just sticking with D3D12 for a very long time. I use it to play lots of games on Linux. :D


That helps break the lock-in for sure, but it took a huge amount of effort to build this workaround and it will continue to be costly. Good thing Valve is footing the bill for it, but it will always be playing catch-up.


This entire concept of having DirectX in WSL sounds like Embrace, Extend, Extinguish all over the place. This means we will have some Linux programs only running in WSL. This is fucked.

> libd3d12.so and libdxcore.so are closed source, pre-compiled user mode binaries that ship as part of Windows. These binaries are compatible with glibc based distros and are automatically mounted under /usr/lib/wsl/lib and made visible to the loader.

Mesa is a freedesktop project and this is not the freedesktop I was imagining.


It's closer to the way vendors like Qualcomm shipped GPU drivers for ARM Linux until relatively recently, where they would ship a patched kernel with a DRM driver, and then a closed source userspace .so blob that you had to rely on, on top of that, which implemented the shader compiler, pipeline, etc. This is basically that. Except it's a virtual machine, so it's also kind of like how VMware Workstation does graphics acceleration.

The main difference here from the ARM world is that Microsoft seems to have invested dev work in upstream Mesa, so that Mesa actually implements the nitty-gritty parts of the spec on the Linux side and then translates it all to underlying D3D12 host calls for you. This is conceptually similar to VirGL, which is basically the same thing but for OpenGL inside QEMU. If anything, Microsoft seems to have taken a less arduous approach to this by going upstream first to Mesa. The other thing here is that Microsoft is using a patched kernel that hasn't been merged upstream; but then again, Asahi Linux is in the same boat, and they work upstream in Mesa now, too. And Mesa isn't just limited to Linux, of course, and can in theory be used in userspace in a lot of places.

If you have some weird desire to use DirectX APIs on Linux, you can already do that in many ways with subprojects of Wine and Proton such as DXVK. It's a well-known API with a well-known ABI, so it's not like this is impossible. You can already run many Windows programs through Wine, so unless this hypothetical Windows/Linux monstrosity is going wildly out of its way to be WSL-only through active hostility, it's not like there's some magical API that can't be emulated in most of these cases.


> If anything, Microsoft seems to have taken a less arduous approach to this by going upstream first to Mesa... If you have some weird desire to use DirectX APIs on Linux, you can already do that in many ways with subprojects of Wine and Proton such as DXVK.

It's interesting to consider where Google's efforts to commoditize d3d12 with SwiftShader (as opposed to Mesa) and ANGLE fit into this.


Yeah, I pointed this out on the mailing list when Microsoft was trying to upstream their dxgkrnl driver: https://lore.kernel.org/all/CADvTj4q_-A9p2UkH975SPPYmSzVAv38...

As have others: https://lore.kernel.org/all/Yh8wEj5wKMRtxaBB@infradead.org/

Looks like the upstreaming has been abandoned, as Microsoft has not made a real effort to fix the issues pointed out: https://lore.kernel.org/all/YinPR66ekfKWHR0R@kroah.com/


It was their second attempt; the first one was in 2020 (https://lkml.org/lkml/2020/5/19/742) and was also turned away.


Looks like they abandoned it after submitting a v3 series: https://lore.kernel.org/all/cover.1646161341.git.iourit@linu...


Except in order to succeed, they have to do the embrace part.

The primary factors for me using Linux right now, in order of importance:

1. Tiling Window Manager

2. Functional Declarative System-wide package manager (NixOS)

3. Cleanliness/simplicity (This is vague, but I have exactly the software I asked for and nothing else).

4. Configurability (I can adjust quirky in-the-middle software like my file manager, taskbar/clock, etc. No immutable one-size-fits-all UI/UX)

5. Repairability (Given time and patience, I can fix Linux problems. Fixing Windows is black magic, and I'm very often reminded, simultaneously, how much of an expert and how much of a novice I am on the CR/LF hand path.)

6. Auditability (Open source is more trustworthy, since source code can be freely audited by anyone at any time. Active free software projects accept security patches quickly and cleanly.)

---

Windows' design is diametrically opposed to all of these, and I can scarcely imagine a version that would allow a true tiling window manager (as opposed to the contemporary explorer.exe hacks).

Give me these features, and I will be truly impressed.

Do all that, then do the extinguish step, and you will really blow my mind.


Well, of course you're inoculated against the embrace, but a ton of people aren't. If enough of the Linux people around you are embraced by WSL, unforeseen consequences might follow.

You should be scared.


Most of the casual windows users I've talked to have complained to me, unprovoked, about how bad windows is for them.

Microsoft has a long way to go, even for the average user, and they are still moving clearly in the opposite direction.

The casual Linux users I know feel the same, but more stubbornly.

Windows getting better is, for me, a fantasy, not a fear. At least then I would have fewer people begging me for the kinds of tech support needs that should have been factored out in 2008 (or much sooner), that Windows has instead kept alive into 2023.


Of course Linux offers tons of features people like you or me might like (I'm a KDE dev), but there is also a vast number of Linux users who, for example, discovered Linux by installing a VM for a uni project, or had to use a POSIX environment for a programming project and then decided to stick with Linux because they liked it. There is a serious risk that people won't be tempted to try the Linux desktop anymore and will just use WSL instead. This is bad, as it means fewer potential new contributors and further entrenches the dominance of Windows on the desktop.


MS can't grok functional software (in every sense of the word) so something like Nix is off the table.


Currently working on writing native UIs for a small service I'm writing. GTK, Qt, SwiftUI, iOS and Android are easy. Windows is a nightmare.


Depends on which stack you're picking on the Windows side, just like there are two major APIs on iOS and Android, and three on macOS...

The last Gtk that was useful in comparison with what's on Windows was version 3; they even killed their designer for version 4, and the version 5 roadmap was recently announced at FOSDEM.


what do you mean? raw GPU compute from the host GPU has been available in WSL for a while now, and this is just a layer on top which allows access to the encoding and decoding hardware on the card.

They had to pick an API to expose, so they picked theirs.

I guess I don't see the big deal, here. Nothing MS does for WSL will change anything that exists outside of WSL.

There is no shortage of people who hate Microsoft on this site, or on the Internet in general, and I would be very surprised if any application of substance chose to both run on Linux and be Microsoft-only (WSL-only). That seems so unlikely that I can hardly imagine anyone coming to such a decision. And if it happens, don't use that software, and protest the decision, just as you're doing here with WSL.

Linux is far bigger than Microsoft, no matter what anyone thinks. And the percentage of Linux users and developers who hate Microsoft is very high. I just do not see the situation you're describing where there are Linux apps which only work in WSL. This hasn't even extended what Linux can do, it's just another way to do it.


There is no reason to keep Windows installed on a new computer. Wipe everything, install Linux, any distro. WSL will always be worse than a native setup.

The only thing I see that Microsoft could do is use their weight with AMD and NVidia to somehow sabotage Linux hardware driver support for GPUs, artificially making WSL the superior solution, but that's a long shot; I'm not sure what hardware mfg would gain from that.


> There is no reason to keep Windows installed on a new computer.

Gaming would be a reason.


Not really, anymore. Wine, Proton, VKD3D and friends have improved so much that the native gaming experience on Linux is more than viable. The Steam Deck might be the best example of that.

I'm very happy that the last few years I didn't have to keep a horrid Windows install around, just to play a game every now and then.


I'm pretty excited for Linux gaming, and it really does seem to work well for single-player games. But online games with anticheat (Valorant, for example) often don't run at all.


They are already working on console and mobile support, so Valorant could come to Linux if they port it for the Steam Deck (though that seems unlikely).


> There is no reason to keep Windows installed on a new computer.

You could keep it around as a reminder of how bad things could've been as you ponder your Candy Crush start menu.


This will never happen. Linux is way too big for this. The percentage of developers using Linux is so large that making your tool only work under WSL makes very little sense, especially when the native approach is usually easier/simpler.


Making non-WSL Linux unusable for at least some applications is clearly the goal here.

Microsoft isn't even hiding it; they list it as a specific goal of the DirectX-related changes being pushed to Mesa: https://devblogs.microsoft.com/directx/in-the-works-opencl-a...

> Make it easier for developers to port their apps to D3D12. For developers looking to move from older OpenCL and OpenGL API versions to D3D12


Wine exists, so we already have an open source DirectX implementation for Linux. It's not currently exactly compatible with libd3d12.so, because Wine provides a Windows PE .dll, not a Unix ELF .so. Though I think Winelib may already allow you to access the Windows APIs via an ELF toolchain.

> Mesa is a freedesktop project and this is not the freedesktop I was imaginating.

Taking OpenGL and converting it to a closed source D3D12 driver interface doesn't seem that different, morally, from converting it to a closed source GPU vendor driver API.


DXVK supports Direct3D 9/10/11 natively on Linux, without wine.


And they even have the balls to label the blog "DirectX Loves Linux", heart symbol included. What a god-forsaken aberration. The people writing for this blog must be incredibly oblivious or incredibly drowned in kool-aid. Like they know nothing of computing history and how Microsoft has always tried to make Windows the gaming platform at the expense of other graphics APIs and ecosystems.

PS: Fuck Microsoft. Fuck them with a spiked club.


...someone needs to read a bit about graphic api history.


How dare they extend the functionality of an open source project beyond its existing capabilities /s


> This entire concept of having DirectX in WSL sounds like Embrace, Extend, Extinguish all over the place.

That's because it is. The Linux Desktop has been 'embraced' and 'extended' on Windows via WSL + NVIDIA GPU drivers all supported on it. Making Windows 11 the best Linux desktop distro and there is no need to 'Partition and Install Linux' when it is all running on Windows itself.

'Extinguish' in this case means all the other Linux desktop distros withering away into obscurity, since few outside the techie crowd would directly install a distro or go through the steps of dual booting.

Microsoft's new strategy of targeting and selling to developers seems to be a great bet that is clearly working.


That's a weird way of putting it.

Windows is effectively dead outside of desktops, at least for anyone not mired in 90s/00s processes (e.g. the banking and energy sectors, minor players in the enterprise universe, are still stuck in process hell in many cases). Microsoft's biggest customers on Azure are all Linux; using Azure for Windows is almost unheard of (their biggest client on Windows is the Office 365 Exchange cluster itself, a decidedly legacy application that would be nearly impossible to port or replace).

Microsoft is admitting that Linux won, and the only way to exist is to have Windows be able to transparently be Linux too, by running a captive Linux VM. Linux embraced and extinguished Windows, and did it so subtly that most people didn't even see it until it was too late. Microsoft's creation of WSL2 is their admission of this truth.

Side note: Half of the software developers and systems engineers that work at Microsoft are not qualified to work on Windows, Windows software, or even really know how to use Windows beyond the obvious metaphors that all modern desktops share. They use Linux desktops, and work on Linux software, on Linux systems. At Microsoft, home of Windows. Linux won.


This is Microsoft's new EEE strategy, except that the new "Extinguish" target isn't all Linux systems but rather just any non-WSL based Linux systems. Microsoft offers WSL based Linux on Azure and they want developers to write software that's locked into WSL based Linux.

Microsoft isn't even hiding this; they list it as a specific goal of the DirectX-related changes being pushed to Mesa: https://devblogs.microsoft.com/directx/in-the-works-opencl-a...

> Make it easier for developers to port their apps to D3D12. For developers looking to move from older OpenCL and OpenGL API versions to D3D12


That's still a mistake in your thinking, however.

Non-WSL Linux systems are the majority. They are in your pocket, in your car, your TV, in space, on Mars. Microsoft will never be a player here but is still invited to the bazaar if they wish to participate.

And yes, I agree with Microsoft on this: all apps whose 3D performance people care about should move to DX12 or Vulkan. Microsoft in-house supports DX9/10/11 and OpenGL purely as legacy APIs, and eventually Windows won't allow drivers to implement them anymore; it will be some shim. I'm not going to shame Microsoft for promoting DX12 over Vulkan; they're both largely the same API, driven entirely by AMD and Nvidia development teams to reflect the current state of hardware, not the whims of some bald CEO who threw chairs at people.

Could the shim be just DXVK-based? Sure. Could the shim be DX9/10/11 state trackers being added to Mesa, and then the Mesa-on-DX12-on-Windows method used? Also sure. Intel is already using DXVK as a legacy shim in their new Arc drivers, and it's performing quite well. DXVK also outperforms Nvidia's DX9/10/11 emulation in some games. I could easily see a DXVK-based solution just ship with a future Windows.


> Non-WSL Linux systems are the majority. They are in your pocket, in your car, your TV, in space, on Mars. Microsoft will never be a player here but is still invited to the bazaar if they wish to participate.

I agree this isn't what they are targeting. At the moment, based on the design, this doesn't appear to be a play for the embedded Linux market; it's a play for controlling the server/desktop markets (at least the segments of those markets that would potentially use the WSL-only Linux userspace API).

> And yes, I agree with Microsoft on this: all apps that people care about their 3D performance should move to DX12 or Vulkan. Microsoft in-house works on supporting DX9/10/11 and OpenGL purely as a legacy API, and eventually, Windows won't allow drivers to implement them anymore, and it will be some shim. I'm not going to shame Microsoft for promoting DX12 over Vulkan; they're both largely the same API, driven entirely by AMD and Nvidia development teams to reflect the current state of hardware, not the whims of some bald CEO that threw chairs at people.

The issue is that they are effectively bringing a WSL-only userspace to Linux (https://devblogs.microsoft.com/directx/directx-heart-linux/) and are pushing developers to target that WSL-only userspace. Specifically, they provide Linux-native libraries like libd3d12.so and libdirectml.so which have a hard requirement on WSL.

> Could the shim be just DXVK-based? Sure. Could the shim be DX9/10/11 state trackers being added to Mesa, and then the Mesa-on-DX12-on-Windows method used? Also sure. Intel is already using DXVK as a legacy shim in their new Arc drivers, and its performing quite well. DXVK also outperforms Nvidia's DX9/10/11 emulation in some games. I could easily see a DXVK-based solution just ship with a future Windows.

The Mesa work seems to be effectively the "Embrace" part of the EEE strategy; it ensures that any existing Linux-only software runs well under WSL, so it's not really the main worry IMO.

The WSL-only userspace (i.e. Linux applications targeting libd3d12.so and libdirectml.so) is the "Extend" part (it extends the Linux userspace with WSL-only functionality).

Encouraging developers to drop support for non-WSL APIs/libraries on Linux is the "Extinguish" part.

So the main issue is that critical libraries like libd3d12.so and libdirectml.so have been designed to be WSL-only. If this weren't an EEE strategy, I would expect some path to eventually use libd3d12.so and libdirectml.so without WSL, but that isn't possible and there appear to be no plans to make it possible.

> This is the real and full D3D12 API, no imitations, pretender or reimplementation here… this is the real deal. libd3d12.so is compiled from the same source code as d3d12.dll on Windows but for a Linux target.

> libd3d12.so and libdxcore.so are closed source, pre-compiled user mode binaries that ship as part of Windows.

> D3D12 wouldn’t be able to operate without a GPU specific user mode driver (UMD) provided by our GPU manufacturer partners. The UMD is responsible for things like compiling shaders to hardware specific byte code and translating API rendering requests into actual GPU instructions in command buffers to be executed by the GPU. Working closely with our partners, they have recompiled their D3D12 UMD to a Linux target, enabling execution of these drivers in a WSL environment. This support is being integrated in upcoming WDDMv2.9 drivers such that GPU support in WSL is seamless to the end user. WDDMv2.9 drivers will carry a version of the DX12 UMD compiled for Linux.


> Microsoft offers WSL based Linux on Azure

No they don't, not if you're not using a Windows desktop in Azure. They have their own Linux distro for embedded stuff, extremely low power stuff, but that's very far from anything WSL2 related.

How would they even offer a WSL-based Linux on Azure without Windows?


Via a Windows WSL server on Azure. Direct WSL-based Linux VMs on Azure should be possible as well, although it doesn't look to have been implemented yet:

https://learn.microsoft.com/en-us/windows/wsl/install-on-ser...


that's installing WSL on Windows server; that's a Windows Server feature, not an Azure thing. and, that's a long way from offering standalone WSL VMs.

standalone WSL 2 doesn't even make sense. it isn't WSL without the "W": Windows.


WSL2 and Azure are both Hyper-V based, offering standalone WSL VMs should be fairly straightforward from my understanding due to this.

> The Azure hypervisor system is based on Windows Hyper-V.

https://learn.microsoft.com/en-us/azure/security/fundamental...

> The newest version of WSL uses Hyper-V architecture to enable its virtualization.

https://learn.microsoft.com/en-us/windows/wsl/faq


yes, I know about Hyper-V and I know that WSL is virtualized, but a standalone Linux VM is just Linux. It isn't WSL. WSL isn't just a name, it's the "Windows Subsystem for Linux" which is worded weird, but it's an NT Kernel subsystem which provides Linux compatibility. WSL v1 used syscall translation from Linux to Windows, meaning it was not a VM, and WSL 2 uses a Hyper-V virtual machine (two, actually) along with some glue to pull it together into a nice feature for Windows.

If you remove Windows, it's just a Linux VM. Just Linux. Azure already offer this as do all other cloud providers.


> yes, I know about Hyper-V and I know that WSL is virtualized, but a standalone Linux VM is just Linux. It isn't WSL. WSL isn't just a name, it's the "Windows Subsystem for Linux" which is worded weird, but it's an NT Kernel subsystem which provides Linux compatibility. WSL v1 used syscall translation from Linux to Windows, meaning it was not a VM, and WSL 2 uses a Hyper-V virtual machine (two, actually) along with some glue to pull it together into a nice feature for Windows.

I'm probably not being that clear with the terminology. I'm referring to WSL2 (WSL v1 is the legacy translation layer, which isn't important here; I was always referring to WSL2).

> If you remove Windows, it's just a Linux VM. Just Linux. Azure already offer this as do all other cloud providers.

Different VM hypervisors have the ability to expose additional features to guest operating systems, in this case a Hyper-V/WSL2 host like Azure can provide critical functionality to the guest Linux VM that other hypervisors are not able to provide.

If you remove the Hyper-V/WSL2 host you lose the ability for the Linux guest userspace applications to make use of libraries like libd3d12.so and libdirectml.so.

The fundamental issue is that the WSL2 Linux userspace (which is basically a port of a number of Windows userspace components to Linux) has closed source libraries like libd3d12.so and libdirectml.so which have a hard requirement on dxgkrnl and a compatible Hyper-V hypervisor/Windows-based VM host. If Linux applications are developed targeting these closed source libraries, they will not function on normal non-WSL2/Hyper-V VMs.

See the diagrams/details here: https://devblogs.microsoft.com/directx/directx-heart-linux/

> Over the last few Windows releases, we have been busy developing client GPU virtualization technology. This technology is integrated into WDDM (Windows Display Driver Model) and all WDDMv2.5 or later drivers have native support for GPU virtualization. This technology is referred to as WDDM GPU Paravirtualization, or GPU-PV for short. GPU-PV is now a foundational part of Windows and is used in scenarios like Windows Defender Application Guard, the Windows Sandbox or the Hololens 2 emulator. Today this technology is limited to Windows guests, i.e. Windows running inside of a VM or container.

> To bring support for GPU acceleration to WSL 2, WDDMv2.9 will expand the reach of GPU-PV to Linux guests. This is achieved through a new Linux kernel driver that leverages the GPU-PV protocol to expose a GPU to user mode Linux. The projected abstraction of the GPU follows closely the WDDM GPU abstraction model, allowing API and drivers built against that abstraction to be easily ported for use in a Linux environment.

> Projecting a WDDM compatible abstraction for the GPU inside of Linux allowed us to recompile and bring our premiere graphics API to Linux when running in WSL.

> This is the real and full D3D12 API, no imitations, pretender or reimplementation here… this is the real deal. libd3d12.so is compiled from the same source code as d3d12.dll on Windows but for a Linux target.


But again, if you run the WSL2 kernel in a standalone VM then it's just Linux. All the "Extend" goes away.


To clarify, it's the functionality provided through the dxgkrnl passthrough (which has both Linux kernel components and required WSL2/Hyper-V hypervisor components) that makes the "Extend" functionality not work in a normal Linux VM with, say, a KVM-based hypervisor.

The issue is that the WSL2 Linux userspace (which is basically a port of a number of Windows userspace components to WSL2 Linux) has closed source libraries like libd3d12.so and libdirectml.so which have a hard requirement on dxgkrnl and a Hyper-V hypervisor/Windows-based VM host. If Linux applications are developed targeting these closed source libraries, they will not function on normal non-WSL2/Hyper-V VMs.

See the diagrams/details here: https://devblogs.microsoft.com/directx/directx-heart-linux/



yeah exactly, it doesn't make sense.


Laptops are the new desktops, Windows is quite alive.


Alive in the marketshare sense but they don't have as much going on as they used to.

People who use Linux use it because of what it is: a fantastic piece of software that puts trillion-dollar corporations to shame.

Most Windows users have never tried any alternatives. The 'tech normies' would switch to Mac in a heartbeat if they could afford it; they don't really care what they use as long as it has shiny buttons.


Two words, free beer.

Linux would be nowhere without the trillion-dollar UNIX corps that adopted it to cut development costs in UNIX R&D.

It would be another Hurd.


> Windows is effectively dead outside of desktops.

cough laptops cough


I think it is reasonable to assume they mean “desktop apps”. Windows is a platform to run desktop apps and that is the area where it leads. Windows laptops also run desktop apps. Laptop apps is not a category.

The other big buckets are mobile and server but “mobile” as an application category does not refer to laptops.


Okay.

I mean, people call those desktops a lot of the time in broad conversations about architectures. Did you have a point that affects their actual argument?


Well, mostly that there are not that many desktops being sold or used these days, but still lots and lots and lots of laptops. So if Windows is still king of the laptops, its very, very far from being "dead".


I run windows as my dev laptop since my job only supports Linux and Mac. It’s a way better platform than Mac now.


I'm not sure I follow what level the EEE plays on. I think if anything, Windows has a serious issue of long-term viability because of the dominance of the browser in modern desktop applications. The browser has succeeded in opening the walled gardens of OS makers, and there is really no reason for most apps to be running on a specific OS, or even for most apps to have an installation. (I know, HN is probably averse to this, but for most users this is really the truth).

Your OS is basically a gateway to the internet; the days of win32 are gone. Now Windows leans into that, bundling WebView2 into Win11 so your Electron-like apps won't have to be so bloated and slow.

I think this move is in the same direction: Windows admitting defeat in the developer space and trying to become the gateway to linux to avoid being obsolete. WSL is kind of the inverse of EEE from the perspective of Windows.

So what about directX specifically? Maybe I'm missing something but I doubt there will be a future where Linux loses much from this:

If you started building an app now, would you build it on Linux but link against DX12 WITHOUT also having Vulkan support for native Linux? Not really; you would just build it on Windows completely.

If you have a Linux app now, and you include DX12 support, would you transition to windows? Not really, your entire app depends on the linux standard libs, and you can now have windows support through wsl anyway.

If you're doing some scientific computing or ML would you now do it on windows against DX12 from Linux? Ultimately you won't deploy on WSL in the cloud, I hope. Why make your dev/prod so different?

I think what Windows could be doing here is moving to become the best linux distro in 20 years. I think ultimately thats a good thing. Imagine Windows is just Linux with a proprietary desktop shell around it, and everyone uses it for their daily use. That just means that Linux adoption is much greater, and porting apps to linux from windows is already 90% done because they share a backend. I think Microsoft may realise the best way to make money is to not have to maintain an entire OS in the first place.

Basically that would mean OSes become like the car market: no matter what you buy, it looks a bit different, but you get kind of the same thing under the hood. And that thing would be linux.

It may mean that Desktop Linux will never make it, but I'm not so sure. That 2.5% market share isn't made up of people who are easily sold on Windows, and they will see it as an inferior Linux anyway.


Doesn't this also mean that DirectX drivers can now be ported to standalone Linux distros? Or am I missing something here?


Nope, and it's pretty clear this is being designed in a way to ultimately prevent users from running applications on non-WSL Linux systems. Microsoft isn't even hiding it; they list it as a specific goal of the DirectX-related changes being pushed to Mesa: https://devblogs.microsoft.com/directx/in-the-works-opencl-a...

> Make it easier for developers to port their apps to D3D12. For developers looking to move from older OpenCL and OpenGL API versions to D3D12,


It is the exact opposite. Microsoft has always tried to kill gaming on non-Microsoft platforms. Labelling the blog "DirectX Loves Linux" is the ultimate brainwash attempt on those who are too ignorant of computer history.

Fuck Microsoft with a pitchfork.


GPU acceleration sounds great, but fixing the slow disk speed across the Linux/Windows boundary would be even better.


NTFS is just slow at accessing lots of files in a short amount of time. That was a tradeoff made intentionally, and it won't go away until you use another filesystem for your boot volume. ReFS is apparently going to become an option for this sometime soon.


WSL2 is slow, even accounting for NTFS. It is painfully slow. Accessing the Linux volume from Windows is also slow. WSL1 was faster too.

It makes backing up the linux home directory painful.
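To put a rough number on it, here's a tiny shell sketch (GNU tools assumed) that times creating a pile of small files in a directory. Running it once on a native path like ~/tmp/bench and once across the mount at something like /mnt/c/bench (paths are just examples) should make the cross-boundary overhead visible:

```shell
# time_small_files DIR [N]: create N tiny files in DIR and report elapsed ms.
# Compare e.g. `time_small_files ~/tmp/bench` vs `time_small_files /mnt/c/bench`.
time_small_files() {
  local dir="$1" n="${2:-1000}"
  mkdir -p "$dir"
  local start end
  start=$(date +%s%N)            # nanoseconds since epoch (GNU date)
  for i in $(seq 1 "$n"); do
    echo x > "$dir/f$i"          # one tiny file per iteration
  done
  end=$(date +%s%N)
  echo "$(( (end - start) / 1000000 )) ms for $n files in $dir"
}
```

Nothing scientific, but the same loop that finishes in tens of milliseconds on the ext4 side can take orders of magnitude longer through 9p.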


ah, it's 9p that's slow, then.

there are other ways to get files from A to B. consider backing up to the local network rather than the host Windows machine, maybe. I don't know.

another option may be to mount your home directory from another host.

that may not suffice if you're doing a lot of windows interop I suppose. once in WSL I tend to stay there, and not do much related stuff in Windows, and I just tar & gz the home directory and put it on a flash drive when I want to back it up.
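for what it's worth, the tar-and-gzip backup is basically a one-liner; here's roughly the shape of it, wrapped in a function (the destination path is just an example — point it at a flash drive or wherever):

```shell
# backup_dir SRC DEST: tar+gzip SRC into DEST as a dated archive.
# On WSL, something like: backup_dir "$HOME" /mnt/e/backups
backup_dir() {
  local src="$1" dest="$2"
  mkdir -p "$dest"
  # -C keeps the archive's paths relative to SRC's parent directory
  tar -czf "$dest/$(basename "$src")-$(date +%F).tar.gz" \
      -C "$(dirname "$src")" "$(basename "$src")"
}
```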


how does one make 9p slow?


how is anything slow? there are probably 1 million valid answers to that question.

if netcat is faster, then it's almost certainly 9p's fault that file transfers are slow between WSL and Windows.
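setting netcat aside, a cruder check for anyone who wants a number rather than an argument: write the same file with dd to a native path and then across the 9p mount and compare the rates dd reports (GNU dd assumed; /mnt/c is the default WSL mount, paths here are just examples).

```shell
# Write 100 MB with an fsync at the end so dd reports real throughput.
# Run once on the native side, then repeat with of=/mnt/c/temp/testfile
# to measure the 9p path.
dd if=/dev/zero of=/tmp/testfile bs=1M count=100 conv=fsync 2>&1 | tail -1
```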


you're obviously not familiar with 9p, so I'll restate my original question.


> you're obviously not familiar with 9p, so ill restate my original question.

you're obviously not familiar with the term "restate." statements are restated, not questions. also, if you say you're going to ask again, you should actually ask again...

[let's pretend you asked again and that I quoted you here.]

how does one make anything slow? poor implementation, throttling, bugs, etc.

if 9p is just raw bytes over the wire then you tell me how it could be so slow. you seem to know the answer and you're clearly playing manipulative games by asking what you're asking. going out of your way to make yourself feel superior to a few words on a website.

people here are saying it's slow. you're asking how it could be. I'm gesturing vaguely at the thing saying "this is how, apparently."

stop being an ass and either participate in the conversation or leave the conversation.


Well... Just use Linux? Install ZorinOS and boom, instant better OS experience.


I can't be the only person who made this mistake: "ReFS" doesn't refer to ReiserFS[1], but rather to the Resilient File System[2]

[1] https://en.wikipedia.org/wiki/ReiserFS [2] https://learn.microsoft.com/en-us/windows-server/storage/ref...


ReFS is not a new term for most Windows admins or people who follow what MS is doing, but it hadn't occurred to me that someone might misread the name. Thank you.


The issue is deeper than NTFS. Working on either side of the divide in WSL is plenty fast. Trying to move data across the divide is not.


I want to be able to use IntelliJ from WSL. Can I do that yet?


I've been doing it for almost a year, am I missing something?


I, too, want to mention that this has already been working.

Any app that works with "real" Xwayland (or is already Wayland native) will work on WSL2. WSL2's Xwayland uses Mesa/DRI's WDDM backend, so it's translating DRI IL into DX12 IL and passing it off to the GPU driver inside of NT kernel space.

There are a few exceptions. I think Firefox has a bug where it's misinterpreting the DX12 Mesa target and attempting to use a feature that isn't advertised (they're working on a fix); Chrome (and thus all the Chromiums and Electrons), GTK, Qt, the Java toolkits, etc., all work.


It was a genuine question. Good to know that this is possible! It was my understanding that it was not the case.


There were issues with enterprise vpn and firewalls last time I tried, but on a home desktop gui apps basically just work.


Can I ask why? JetBrains products on Windows already have good WSL support and remote interpreters.



