WSL has been a deitysend; I appreciate it quite a bit now. It let me escape macOS development hell, allows 'dropping into' Linux easily from Windows if one is too lazy to dual boot, and is especially useful in enterprises where Win/Mac are the only options.
That said, I will agree with other comments here: the version numbering is weird. I can't even tell _how_ to tell what version of WSL I have. Versioning is still an unsolved problem even for tech giants, I observe.
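For what it's worth, on current Store builds there is a first-party way to answer that (a sketch; the older "inbox" WSL doesn't support --version):

    # WSL package version (Store builds only)
    wsl --version
    # whether each installed distro runs as WSL 1 or WSL 2
    wsl --list --verbose
    # kernel version, from inside a distro
    uname -r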
True, true! I am stuck on Windows at work, while we SSH into development Linux machines. Windows Terminal and WSL keep me from going insane. All I need to do is hit F11 in the terminal, make it fullscreen, and forget I'm on Windows at all.
With Mac OS I still like the Mac terminal more than the Windows terminal and just SSH into Linux boxes.
Still, for me, nothing beats a Linux desktop with i3wm on it.
You can run i3 via WSL if you use an alternate X server like X410 or vcxsrv instead of the built-in wayland RDP bridge thing. vcxsrv is a bit nicer, since it lets you run the X display in fullscreen mode.
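Roughly, assuming vcxsrv is running on the Windows side with access control disabled, the WSL side only needs DISPLAY pointed back at the host before launching i3 (a sketch; the resolv.conf trick is for WSL2's default NAT networking):

    # WSL1 shares the host's network, so the X server is local
    export DISPLAY=localhost:0
    # WSL2 sits behind NAT; the Windows host is the nameserver there
    export DISPLAY=$(awk '/nameserver/ {print $2; exit}' /etc/resolv.conf):0
    # start the window manager against that display
    i3 &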
I don't know anything about WSL, but if it has a wayland server like you say and if 22SAS wants to use i3, they could try sway with the wayland backend.
I have heard of it but never used it. I'd try it out on my personal machines. Work machines are Windows thin clients, since IT doesn't want to support Macs and the Linux teams are absolutely only concerned with dev and prod Linux machines. Plus being in HFT, the machines are all tightly locked down, couldn't even install WSL and Windows Terminal on my own.
Sadly you do need both a mosh client and a server installed. But you might get tremendous productivity benefits which might make it worth auditing it (maybe worth talking to your boss/customer).
Mosh is lovely, but keep in mind that you can't use agent forwarding, nor use it in pipes. On the other hand, if you're stuck on dodgy Wi-Fi or just about any mobile link, or you want to close your laptop and resume your session elsewhere, it's an absolute godsend.
Mosh backed up with either tmux or screen is handy indeed.
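In practice the combination is a one-liner; something like this (assuming tmux is installed on the remote end) attaches to a session named "main", creating it if needed, so work survives both flaky links (mosh) and full disconnects (tmux):

    mosh user@devbox -- tmux new-session -A -s main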
Personally I think it's cool, but less interesting than Cygwin and WSL 1. A well-integrated virtual machine is cool and all, but actually translating system calls or rewriting software for another environment is more interesting.
My understanding is they tried using more of a cygwin approach for a couple years with WSL1, but ultimately they weren't able to get some major software like Docker to work, not to mention the long tail.
WSL1 was much more interesting than cygwin: cygwin is a collection of ports. WSL1 used the existing binaries and transported kernel calls directly to a Linux kernel "personality" for the NT Kernel (which was designed to flexibly support multiple "personalities" like that way back in the day, but which hadn't been put to much use in recent decades before WSL1 brought it back and did something cool with it).
(WSL2 is a lightweight VM in a traditional VM sense.)
Interesting, sure, but WSL2 is simply much better to use as a daily driver and for production. I don't have to worry about whether stuff works or not like in WSL1, because, as you say, it's a VM so most stuff should work just fine.
I think everything except IO is okay in WSL1 if you only run user-space software. But the IO throughput is just bad. Iterating over 1000 images and hashing each one costs only 1s on Linux and a whole 20 seconds on WSL1. The way WSL2 implements the FS is just far better.
Let alone if you use Node.js and have tens of thousands of files in node_modules: 'npm install' will cost you a whole 5 minutes.
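That kind of gap is easy to measure with a rough benchmark along these lines (hypothetical ./images directory); run it on native Linux and again inside WSL1, or compare ~ vs /mnt/c under WSL2:

    # hash every file in the tree and time the whole thing
    time find ./images -type f -exec sha256sum {} + > /dev/null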
WSL1, when accessing Windows files and Linux files, uses direct IO via the NT Kernel. This is "slow" because NTFS makes a different CAP-theorem-style tradeoff than POSIX-expected filesystem semantics. (It's direct file access, so working with one big file is sometimes faster: the trick is that's what NTFS is better optimized for: bigger, fewer files and atomic transactions. POSIX semantics work better for lots of small files and don't guarantee atomic transactions in the same way.)
From Windows (such as in Explorer) accessing WSL1 Linux files the safe way passes through a Plan9-derived file server as intermediary. This is surprisingly quick, but not without overhead. (But you can if you need to, do some unsafe operations directly on the files in NTFS.)
WSL2, when accessing Windows files, accesses them through a Plan9-derived file server as intermediary. This is surprisingly quick, but not without overhead. WSL2, when accessing Linux files, is using a Linux filesystem in a virtual hard disk file (VHD), similar to any other VM technology. Using a Linux filesystem, it naturally exhibits POSIX semantics and is fast in the way Linux operations are expected to be in lots-of-little-files scenarios.
From Windows (such as in Explorer) accessing WSL2 Linux files passes through a Plan9-derived file server as intermediary. This is surprisingly quick, but not without overhead. Some operations Windows can do directly via VHD support in Windows.
The issue isn't NTFS as far as I understand (based on what the WSL team themselves have explained). The problem is that the NT kernel is simply slow at opening files. Windows has transactional NTFS but it's deprecated and hardly used. The slowness can't be fixed because the open codepath goes via a lot of different filter drivers that can hook the open sequence, combined with the fact that there's no equivalent of a UNIX dentry cache because file path parsing is done by the FS driver and not the upper layers of the kernel. Even if the default filters were fixed to be made as fast as possible - which is hard because they're things like virus scanners - third party filter drivers are common and would tank performance again.
It's a pity because Windows would benefit from faster file IO in general but it seems like they gave up on improving things as being too hard.
What I mean here by "transaction" semantics is not "transactional NTFS" (or the other distributed transaction engines that replaced it) but as a short hand for all the various different ways that file locking mechanics and file consistency guarantees are very different in NT/Windows/NTFS than in the POSIX "inode" model. All of that has a lot of complex moving parts (filter drivers are indeed one part of that complex dance both affecting and affected by Windows' file "transaction" semantics).
"Transaction" is a useful analogical word here for all of that complexity because how a mini-version of the CAP theorem trade-off space can be seen to apply to file systems. Windows heavily, heavily favors Consistency above all else in file operations. Consistency checks of course slow down file opening (and other operations too). POSIX heavily favors Availability above all else and will among other things partition logical files across multiple physical inodes to do that. Neither approach to "file transactions" is "better" or "perfect", they are different trade-offs. They both have their strengths and weaknesses. Using tools designed for one is always going to have some problems operating in the other. POSIX tools are always going look at Windows as "slow file IO" because it doesn't hustle for availability. Windows tools are always going to look at POSIX as willfully dangerous when it comes to file consistency. At the end of the day these filesystem stacks are always going to be different tools for different jobs.
Yup, nothing too magic, just the usual symptoms of Windows and Linux having always had different ideas of how files are supposed to work, so give Linux its own (virtual) hard drive instead.
I don't know anything directly about Microsoft's 9p plans, but the blogs give the impression they are considerably pleased with the 9p file server for what they've been using it for (especially these cross-platform communications) and they might use it for other things.
I really "like" or at best have mixed feeling of Linux/POSIX way of handling file in use, can be deleted/moved/edited, like EXCLUSIVE LOCK means nothing to the system.
Windows took a very different path from POSIX for a lot of reasons. It frustrates me sometimes when some Linux fans insist that POSIX file system semantics are "the best" and "the only right option" simply because they've been ingrained in more than a half-century of Unix development practices. The NT Kernel team was certainly aware of POSIX file systems and their trade-offs when they built the Windows file IO libraries and made different choices for good reasons. POSIX isn't the last word on how file systems should work, some open source developers should learn that, I think.
The reason they pivoted away from wrapping Linux syscalls, etc. was that efficiently supporting features tied tightly to hardware (e.g. CUDA) ultimately became extremely difficult (at least, with decent performance).
Virtualization is so efficient nowadays that it's much more performant to go that route vs. porting, where there will often be difficult-to-debug performance regressions and bugs. So what WSL provides instead is much tighter integration between that Linux VM and Windows (including performant filesystem access, etc.).
It's a game changer because it's branded and marketed.
Soon people will be forced to use it in particular contexts, let's say for the DRM Subsystem For Windows Subsystem For Linux. And since you need the DRM Subsystem For Windows Subsystem For Linux to run those few pieces of crucial software, WSL becomes your daily driver. Then MS starts shipping their own downstream distro with even more extensions that hook into Windows...
DRM access I wouldn't be surprised by. But I highly doubt forcing their own distro. The whole point of WSL is being able to use off the shelf distros while staying inside windows.
It's extra confusing in that you have one type of WSL (1 or 2), a version number of the install of WSL2, one version of some kernel stuff, and one version of whatever Linux OS you're running in WSL.
> Versioning is still an unsolved problem even for tech giants, I observe.
Well, this is Microsoft we're talking about. Think about Windows versions. Xbox versions. Or how to figure out where 64-bit and 32-bit go on Windows [1].
"in enterprises where Win/Mac are the only options."
Why does that happen? It feels like the only reason WSL exists in the first place is to help sidestep the monumental bureaucracy in Fortune 100 companies that see Linux as nothing more than a risk. It's methadone for companies that want DevOps, but are too calcified and encumbered to really do much besides lip service.
Most distros have a poor security model for allowing partially-trusted users. The security model seems to have two modes: users/processes that are not trusted to do anything, and users/processes that are trusted to do nothing.
E.g. try to find a sane and maintainable method for allowing non-root users to install Postgres. podman/Docker work for something like that, but what about VSCode? People can download a tarball, but I think the sandbox requires a setUID for some kind of security something.
You can give people sudo rules, but sudo whitelisting is prone to ways to escape from the command and open a shell.
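A classic illustration of that escape problem, with a hypothetical sudoers rule: whitelist an editor for a single config file, and the user can still get a root shell, because vim (like many whitelisted tools) will happily spawn one.

    # /etc/sudoers.d/app -- looks narrowly scoped...
    alice ALL=(root) NOPASSWD: /usr/bin/vim /etc/app.conf

    # ...but the editor spawns a shell that inherits root
    sudo vim /etc/app.conf
    :!sh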
WSL works because the company can control the hypervisor host strictly, and let the guest VM be more lax. The same would be true of a Linux VM inside a Linux host, but WSL lets the Windows people use Windows and the Linux people use Linux.
It's not an unsolvable issue, but I can see why companies are interested in having a single device management suite that allows end users to use Windows or Linux (ish).
> E.g. try to find a sane and maintainable method for allowing non-root users to install Postgres. podman/Docker work for something like that, but what about VSCode?
The easiest way would be usermode package managers like Nix, Guix, or Homebrew.
> People can download a tarball, but I think the sandbox requires a setUID for some kind of security something.
I'm not sure what you mean, but you don't need root to use user namespaces (sandboxing) on any remotely recent Linux kernel.
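For example, on a stock modern kernel this works with no root and no setuid helper (assuming the distro hasn't disabled unprivileged user namespaces via sysctl, as a few do):

    # enter a user namespace where the calling user is mapped to root
    unshare --user --map-root-user sh -c 'id'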
> Most distros have a poor security model for allowing partially-trusted users. [...] You can give people sudo rules, but sudo whitelisting is prone to ways to escape from the command and open a shell.
The same product my employer just rolled out for managing this on Windows has a native Linux version. I assume it must have competitors.
> WSL lets the Windows people use Windows and the Linux people use Linux
No it does not. It lets the Windows people use Windows, and the Linux people still have to use Windows, plus a fucked-up Linux VM that can't access their hardware, has to contend with Windows' broken-ass networking stack, operates extremely slowly on files that are stored where the corporation wants people to store source code (i.e., on the Windows side), etc.
> WSL works because the company can control the hypervisor host strictly, and let the guest VM be more lax.
Why? Don't all of the material issues w/r/t endpoint management (data exfiltration, update management, anti-virus, idk) recur inside the VM anyway?
> The easiest way would be usermode package managers like Nix, Guix, or Homebrew.
The insanity of the usermode packagers is that they're not popular enough to be easy to hire for (probably excepting Homebrew, though I've never seen it on Linux). I can grab a random resume from a stack and be almost positive that they know how to install and upgrade software with apt/yum.
I would bet the percentage of candidates who can compile Postgres and use it outweighs the percentage that can install and use Postgres from Nix.
> I'm not sure what you mean, but you don't need root to use user namespaces (sandboxing) on any remotely recent Linux kernel.
> The same product my employer just rolled out for managing this on Windows has a native Linux version. I assume it must have competitors.
I'd be curious how that works. I've seen things in the space that have tried via intercepting kernel syscalls (SELinux, AppArmor), but those seem like failed projects to me. I've never worked anywhere that has them turned on, though maybe that's sampling bias.
> No it does not. It lets the Windows people use Windows, and the Linux people still have to use Windows, plus a fucked-up Linux VM that can't access their hardware, has to contend with Windows' broken-ass networking stack, operates extremely slowly on files that are stored where the corporation wants people to store source code (i.e., on the Windows side), etc.
I'm not saying it's perfect, but I'll take it over being forced to use Windows or Mac natively. At least it's familiar, even if it is slow.
> Why? Don't all of the material issues w/r/t endpoint management (data exfiltration, update management, anti-virus, idk) recur inside the VM anyway?
Some do, some don't. You can prevent people from plugging in a 4G dongle to exfil data. You can force VM traffic to go through the host networking stack so it gets scanned by endpoint protection. For a lot of compliance stuff, it's enough that the software works on the physical host even if it can't do anything with the VM. E.g. compliance might say that all hosts have to run anti-virus, but the physical machine is the "host" so it's okay if the guest VM doesn't run anti-virus. Same with software auditing; it's enough for the procurement people that it runs on the Windows host.
Most of the pragmatic issues recur inside the VM, but a lot of organizationally imposed ones go away. I'm not saying it's logical, but I've yet to win an argument with compliance.
> The insanity of the usermode packagers is that they're not popular enough to be easy to hire for (probably excepting Homebrew, though I've never seen it on Linux). I can grab a random resume from a stack and be almost positive that they know how to install and upgrade software with apt/yum.
True and very unfortunate. Hopefully this changes as (tools like) Nix and Guix continue to grow and develop! They're really well-suited to this.
> I would bet the percentage of candidates who can compile Postgres and use it outweighs the percentage that can install and use Postgres from Nix.
I think installing and using Postgres from Nix is definitely easier, although sure, fewer devs might already know that they can easily do it.
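A rough sketch of what that looks like with Nix, entirely without root (paths are placeholders):

    # one-off shell with Postgres on PATH, installed per-user
    nix-shell -p postgresql
    # run a throwaway cluster entirely under $HOME
    initdb -D ~/pgdata
    pg_ctl -D ~/pgdata -l ~/pgdata/server.log -o '-k /tmp' start
    psql -h /tmp -d postgres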
> I'm not saying it's perfect, but I'll take it over being forced to use Windows or Mac natively. At least it's familiar, even if it is slow.
Yeah, I think we're agreed. I'm just feeling especially frustrated with the setup lately.
> Most of the pragmatic issues recur inside the VM, but a lot of organizationally imposed ones go away. I'm not saying it's logical, but I've yet to win an argument with compliance.
I’m aware of the risks of a compromised machine, but if a company doesn’t trust its own software devs with local root/Administrator privileges (which are absolutely required for doing one’s job in SWE if you’re doing anything in drivers/hardware) then they don’t trust them to do their job. It boggles the mind.
See... my mind boggles that you think it has anything to do with personal trust.
At a guess, you've been in that scenario, and it may well have been handled badly (and I'll grant you it often is), but you might also have taken it just a little bit personally?
It's not my current job, but I've been doing various forms of sysadmin for a long, long time.
Biggest security/outage risk in most companies I've worked at?
Me. My teams.
Even after we've spent years and millions of dollars addressing those risks.
But it's not about trusting your devs, even if somehow, you think you can trust all of them, and all your future hires, every day, forever (!?!?!)
It's whether or not you trust all the software on their machine, everyday, forever. Which is just as ridiculous a statement.
Isolated dev environments aren't that hard to setup if you really need HW access.
Do it right and you might even end up with something better than local.
And no, I don't have admin on my current work laptop either.
Yes, it's occasionally annoying, and we have tighter controls than that; I can't do a bunch of things that most envs would allow without a thought, and I occasionally set off alarms that trigger the security folks to have a word, which is also occasionally annoying.
Reassuring too tho.
And actually, I'm slightly disappointed that it's generally just a phone call and polite conversation, rather than black-masked ninjas rappelling down from the ceiling yelling at me to step away from the keyboard, but you can't have everything, I guess.
Some companies have compliance/insurance concerns that require things to be set up a certain way, and to prevent users from undoing them. E.g. full disk encryption might be required, and users prevented from undoing that. Maybe there's some kind of audit agent that needs to run, and you don't want users removing it.
Others don't want engineers breaking their desktops, or setting things up in a way where it's going to break the automation. E.g. systemd-networkd is centrally managed, but some user hates it so they're using network-manager instead, and now they can't connect to the network because network port control got turned on. Amortize that across several thousand engineers and you can waste a lot of man-hours because people aren't using the tools they're supposed to.
I like having full root/admin permissions, but I don't really need or want 95% of the permissions it gives me. Linux makes it hard to delegate the 5% I actually want/need without giving me the other 95% that are basically only ever going to be problematic.
Not being able to access WSL from within an SSH session due to
"not possible to access wsl from session 0 (which is where the Windows OpenSSH server run)" [1,2]
is now a deal breaker for me to upgrade to whatever they call the 1.0.0 version released over the Windows Store.
The old (non-Store) version used to work, and now the upgraded, non-Preview version doesn't. I don't know how they dared remove that Preview tag.
This is a great example to keep in mind against the high praise WSL receives— not because it's particularly dire, but because it's representative of how surprising (and basically intractable) WSL issues generally are. The product is chock full of bugs like this which will only impact a small subset of users, but which are showstoppers for that subset. (Some of them can be worked around, but good God are they a PITA.)
They tend to be issues that 'seem' advanced, but the missing functionality is usually something that the vast majority of developers who have run a Linux desktop for development would take for granted as a possibility.
I have yet to find a use case for WSL, when you can't interact with hardware devices and working with the filesystem has its (performance) limitations.
Docker in WSL2? Yeah sure, but I'd rather install a VM or connect to a remote Docker host.
MinGW/MSYS2 have been around far longer and have none of those issues... Then again, MSYS2 does not have a Linux kernel running under it but that shouldn't be a problem unless you want to run containers.
I love WSL because it makes Windows just as nice to use for development as Linux / macOS. No weird janky "this is buggy on Windows" stuff, because as far as the app is concerned it's just a Linux VM.
I've never looked back since WSL came out... I'm not promoting Windows, but growing up in a corp environment, Windows is where my comfort zone was... I tried switching to TextMate on Mac back in 2007 but really hated not having a direct map of the various shortcut commands using the CTRL key. The closest I got to the Win experience was on Linux Mint using XFCE... but again, it was a pain anytime I was sent MS Office docs. Sublime/WSL has been my go-to since it dropped! Slowly getting used to VSCode as well.
I'm not sure why you're being down voted. This is my exact use case as well. For what I do (front end development) it's incredibly nice to have a Linux command line for most things. I'm stuck on Windows due to legacy .NET Framework apps, so when I have to dip back into Windows I can
>With WSL2 I got rid of VM's for local development.
No you didn't, you just let Windows hide them from you. WSL2 is essentially a Hyper-V VM running Linux with some programs bridging between it and the Windows host -- or two such VMs with some additional bridging (for Wayland and audio) if WSLg is enabled.
If you are a WSL user, you have it running 100% of the time.
On the serious side, even `findstr` is enough, but the goal was to show interop in action, not to choose the best grep-alike solution.
On the other serious side, I can ensure and enforce standardization on having WSL across teams and departments and be sure such snippets work. Standardizing on ripgrep probably isn't worth it.
For a lonely/solo/indie technician that makes no sense, and ripgrep can be better, of course.
I see it in a similar way to Docker: Docker extended and made it _easy_ for _end users_ to use namespaces, overlayfs, port forwarding and distribution (Docker Hub); WSL/WSL2 on top of Linux solves other aspects of the problem, one of them being corporate-friendliness (among others).
The same way a JS dev uses Docker without giving a shit that it's a whole Unix-compatible system under the hood (Linux), since he just cares about getting Nginx-something from Docker Hub, effectively making Docker his infrastructure platform, in the same way only a small fraction of the crowd cares that WSL2 is using Hyper-V under the hood.
If you made a poll on what kind of virtualization is needed to run WSL2, I bet the most common answer would be "virtua what?"
It’s awkward when used to text pipelines, but the object oriented output is just so much nicer to parse and interact with for me that after spending some serious time with Powershell I am a huge fan. Being able to use the CLR directly and any .NET packages is killer.
> > the filesystem has its (performance) limitations
> It's a normal Linux filesystem, there's no difference in performance vs. a full-fledged Linux VM. (which it is anyway)
> Are you referring to WSL1?
WSL2 just exchanged WSL1's performance problems with the Linux guest's native filesystem for performance problems when the Linux guest accesses Windows' filesystem. Those are still there and they're still atrocious.
I've done A LOT of performance tests of running ML tasks (via Docker) on WSL vs raw Ubuntu on the same hardware. The performance of WSL is about the same as native - some tasks are actually faster, some slower -- but net out to be the same. These tasks include: compiling a Docker image (1 hour), deinterlacing video with a NN, Open Whisper, etc.
Filesystem things are faster in WSL+Docker than native Windows for me. At least npm stuff. I'd rather skip the hassle of a VM, and why use a remote Docker host? Same with MinGW. I like being able to just do stuff out of the box as if it were an Ubuntu install. Much more ergonomic, and I can just follow whatever official guide for the tool I'm trying to use, without depending on someone making it possible to use in MinGW/MSYS.
So I kinda disagree, for all things you mention I prefer WSL.
Same here, Docker for Windows is slow, like VERY slow.
After I moved Docker to WSL2 my app starts and runs at least 10 times faster.
The difference is so huge that I wonder why it is even possible to run Docker natively on the Windows filesystem. It's crazy.
I'm talking 60-90 seconds to bootstrap and render each page, each time, under the Windows filesystem, versus around 5s (yes, the app is very slow) under WSL2.
Yeah, you need Hyper-V to run a basic Docker configuration on Windows; I just simplified this too much, I guess.
The end result is that if you run Docker on Windows with Hyper-V alone, it's using the Windows filesystem and then "translating" this for Docker, therefore making the whole process of accessing files incredibly slow.
I used to work with Docker using Kitematic (https://github.com/docker/kitematic) in the past, which was basically a VirtualBox VM with Docker installed inside it, and since the VM was some kind of Linux, it worked reasonably well.
The same seems to be the case when using WSL2: you run a VM and Docker inside this VM, removing the file sync/translation part from the equation, resulting in speed boosts.
> The same seems to be the case when using WSL2: you run a VM and Docker inside this VM, removing the file sync/translation part from the equation, resulting in speed boosts.
I'm sure that this is true for most people out there, but for some random reason, WSL2 actually ran much slower for me than Hyper-V. It was one of those boring Laravel projects where you mount the application source into a container directory so PHP can execute them and serve you the results through Nginx with PHP-FPM or whatever you want, as opposed to just running something like a database or a statically compiled application.
Except that it was unusable with WSL2 as the Docker back end (regular Docker Desktop install), but much better with Hyper-V for the same goal. It was odd, because the experience of other devs on that exact same codebase was the opposite, WSL2 was faster for them. And then there was someone for whom WSL2 wouldn't work at all because of network connectivity issues.
I don't really have an explanation for this, merely an anecdote about how these abstractions can be odd, leaky or flawed at times, based on your hardware or other configuration details. That said, Windows is still okay as a desktop operating system (in the eyes of many folks out there) and I am still a proponent of containers, even if working with them directly inside of *nix as your development box is a more painless experience.
I was playing with Docker Desktop and Podman Desktop for the first time yesterday. Podman Desktop looks really good. I was impressed with how well it integrated with the docker engine I already had running, and it even had at least one feature I couldn't find in Docker Desktop (pulling an image from the GUI).
It's crazy; I've gone in circles between WSL, dual-boot Win/Linux, VMs, everything on Windows, remote Linux servers, containers in Docker Desktop. Right now I'm back to just using Windows and ignoring Linux altogether, which seems to work best, with the exception of Python.
Windows desktop with full linux servers via SSH is the sweet spot for me.
I don't EVER find that I need some kind of *nix command on my local machine when developing, but that's also because I'm not afraid of PowerShell / stuck on bash.
> don't EVER find that I need some kind of *nix command on my local machine when developing, but that's also because I'm not afraid of PowerShell / stuck on bash.
Have you sorted out how to get fuzzy tab completion in PowerShell, short of the buggy, menu-y integration available with fzf?
Do you find that everything you need is just already in PowerShell, or do you sometimes run third-party command line executables? If the latter, how do you deal with the lack of man pages?
I use WSL2 for PyTorch and company because the newer (WSL) kernel has CUDA support out of the box if you have the NVIDIA drivers installed on the Windows side. It's actually extremely convenient.
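Sanity-checking that path takes two commands (assuming the Windows-side NVIDIA driver plus a CUDA build of PyTorch; no Linux driver gets installed in the guest):

    # the Windows driver is surfaced inside the WSL2 guest
    nvidia-smi
    # confirm PyTorch sees the GPU
    python3 -c "import torch; print(torch.cuda.is_available())"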
Dug up my login for this site to thank you for this bit of info -- I'd been idly wanting to try some GPU-powered python scripts, and the setup on Windows made it unappealing. Now I can use 'em!
Earlier this year, I started a job where I had to use a Windows desktop for the first time in a little over a decade. For a while, I did as little in WSL as possible, and leaned hard on PowerShell, MSYS2, and Scoop. After a few months, I abandoned the 'native' tools in favor of WSL due to performance and compatibility issues. I wish MSYS2 were the solution, because at least then there'd be one (WSL sucks too).
I am still on WSL1 due to the filesystem performance with WSL2. I recently tried to move more of my workflow towards MSYS2 but various things keep breaking for me without obvious reasons.
The latest issue I encountered was that GNU parallel simply does not work. [1]
What I'm excited about is the prospect of nontechnical people being able to use a GUI like Docker Desktop to install and manage apps in a nicely sandboxed environment. I think Docker is still too targeted towards developers, but Docker Desktop is already a huge step forward in usability for people who aren't comfortable on the command line.
Are you imagining something like Flatpak for Windows? Don't Microsoft Store apps have some sandboxing? What's the end goal with the non-developer use case you have in mind?
I am not a developer but I find it useful to just be able to use rsync and a few other tools I am used to from Mac and Linux. Same reason I install Homebrew on Mac.
They have not. There are still optional components of USB4.
I'm not sure how or why this happens. USB, HDMI, BLE all have optional components, and it's insanely frustrating. It's got to be some failure mode of designed-by-committee groups, where it's easy to get a feature in that only half the group wants.
For longer cables the cost is significant. USB-C can carry 5A, so you need 18 gauge wire for ~12 feet. Ideally you'd do the 8 power/ground pins at 18awg and the 16 data pins at 28awg. But that's a substantially more expensive cable than just 24 wires of 28awg.
And as a user it's annoying to have an excessively stiff cable, so having every cable support 240W charging would be suboptimal not just for cost but for convenience too.
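A back-of-envelope check on that 18 AWG figure (my arithmetic, not the parent's), taking copper resistivity 1.72e-8 ohm-m, the 18 AWG cross-section of about 0.823 mm^2, one conductor per rail, and 12 feet (3.66 m) each way:

    R = \rho L / A \approx \frac{(1.72 \times 10^{-8})(2 \times 3.66)}{0.823 \times 10^{-6}} \approx 0.15\ \Omega
    V_{\mathrm{drop}} = I R \approx 5\,\mathrm{A} \times 0.15\,\Omega \approx 0.76\ \mathrm{V}

That's well past the roughly half-volt IR-drop budget I understand the Type-C spec to allow, which is why long 5A cables parallel the power/ground conductors or go thicker still.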
It's a mess. We all said we wanted one cable that does everything, and I still feel that way, but here I am buying different cables for laptop charging versus iPad charging because I want a light/thin cable for the iPad. Sigh.
Weren't all those names deprecated in favor of just calling it USB 10/20/40 Gbps in market-facing use? I think the whole "Gen X" thing makes sense for technical documentation, but it's clearly too specific for consumer communication.
When the iPad 3 released Apple called it "The new iPad" everywhere.
So you had:
- iPad ("iPad")
- iPad 2 ("iPad 2" or "iPad 2nd generation")
- iPad 3 ("The new iPad")
- iPad 4 ("iPad with Retina display")
So in late 2012 you had people shopping, and having to pick between "The New iPad" (last year's model), "iPad with Retina display" (this year's model), and the iPad mini.
Apple stopped after that, going with iPad [type] [number] naming.
I always try to just skip to the (year) identification scheme, which is potentially less confusing (unless I'm actually looking at my specific piece of hardware), unless they release multiple iterations each year, which it seems they've done a couple times.
It's not that unusual, especially in games. For instance, "The Witcher 2" version 3.3 [1], "Dota 2" version 7.32c [2], but also "Struts 2" version 6.0.3 [3]. Some others may be considered more a product name than a version number, e.g. "GNOME 2", "Winamp 5".
I'd agree, but they are calling it "WSL 1.0.0", which is reflected in the release name for example ("Microsoft.WSL_1.0.0.0_x64_ARM64.msixbundle"). Which is natural of course because they are already calling the WSL2 repo simply "WSL".
I think most people at this point call WSL1 "WSL1". That does seem to be the lasting "names" of these: WSL1 and WSL2. Arguably they aren't version numbers at all but naming in the way people name Word docs that start off as clones and slowly diverge. Perhaps someday we'll get WSL.Final and WSL.Final.Final to truly close the loop on the Word doc style naming scheme.
Hmm I was confused about this as well. Thank you for the clarification here. Perhaps we could get a title update to something like "WSL 2 out of beta".
As an aside, I love WSL. Some of my favorite software out of Microsoft in years.
It's worth noting that there have been reports of performance degradation running VMWare alongside Hyper-V[0]. Don't get me wrong, I love what Microsoft is doing with Windows Hypervisor Platform (WHP/WHPX). There's even experimental support for running accelerated QEMU VMs on Windows Hosts[1]. Hopefully these performance issues improve over time.
I know WSL2 only requires part of Hyper-V to run ("Virtual Machine Platform"). This also allows it to work on Windows Home. I wonder if anti-cheat systems will be able to do their jobs while still allowing WSL2 to be used. I don't know enough about how Hyper-V is used to circumvent anti-cheat.
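For reference, that component can be enabled on its own from an elevated prompt (this is the documented feature name; recent builds fold all of this into `wsl --install`):

    dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart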
To expand (I can't edit now)... ESEA anti-cheat is posited as Hyper-V specific, but it's actually against being virtualized in general.
I was able to work around this with KVM on Linux with a GPU passed to a Windows VM.
I tweaked the libvirt XMLs (VM definitions) a few key ways and was able to convince ESEA it's bare metal, but not Valorant. I never got to play it, but it's fine by me :)
This combination works well enough, I only boot up Windows when I need a couple anti-cheats I've managed to trick (that also can't work in Proton)
It's more compatible if you want to access a usb or serial port.
WSL2 is like a car that's great in most ways but for some reason just cannot make right turns from one-way streets. You might not encounter that particular situation very often but your car still needs to be able to do it or else it's just an annoying 3/4-baked thing that is better just skipped until it's actually fully functional.
Microcontroller & FPGA programming, all kinds of USB and serial port stuff: all unusable on WSL2, while it works on WSL1, hell, even Cygwin. WSL2 is half-baked crap.
I think the problem is that the market is much more interested in things like container development, which doesn't generally need hardware access beyond CPU, RAM, network, maybe storage, and maybe GPU.
WSL2 is much better than WSL1 at solving my problems. But it sucks that it has become worse for you :/
In another comment I likened it to an amazing car that just can't turn left off of one-way streets. You can do a lot and never notice, but I just happen to live on a one-way street.
Maybe they just shouldn't be calling it a Linux system or an OS, but just an application server. Do that and the missing functionality suddenly isn't a problem, because it was never implied in the first place. I never would have made this complaint about Tomcat or whatever.
The blogs claim WSL1 is still being maintained and there are scenarios where WSL1 still makes sense. (WSL1 has better hardware compatibility because it is not VM emulated hardware but direct access via the NT Kernel, as an example mentioned in other comment threads here.)
This seems like a case where they should have picked a different naming scheme rather than numbers like WSL-A and WSL-B or something.
The blogs also claim that their strategy for addressing the use cases where WSL1 is still needed is to try to improve WSL2 in those areas, i.e., to pave the way to fully abandoning WSL1.
I have to give kudos to Microsoft on their WSL2 product. It's not perfect, but its extremely accessible and allowed me to become comfortable enough in a Linux environment to change my primary driver to Ubuntu.
Until then, I was trapped in Windows land as both a dev and a user.
It's a horrible name that should never have been used externally, but it makes sense if you're familiar with the NT architecture. There's various environment subsystems for win32, os/2, unix, and now linux.
They can't call it that because of trademark issues. Linux is trademarked, so Microsoft can't call anything of theirs "Linux something" without a license, just like third-party software vendors can't call anything "Windows something". However, some leeway is permitted for calling your thing "something for Windows" or "something for Linux".
Someone high up at the FSF called the first WSL "GNU/kWindows". Probably the best name of all.
Between my current pre-coffee state and working with WSL2... seeing version 1.0.0 released I actually thought for a moment this meant the ability to run Windows as a subsystem on a Linux machine or something.
Removed the "Preview" label - WSL in the Store is now generally available!
Use an override in generator.early to prevent the /tmp/.X11-unix socket from being removed during boot
Don't create a pty for systemd to fix issue where systemd would time out during boot
It's pretty new, released late September. They still have their own init system. If you are using a systemd enabled distro then their init will become a child process of systemd, allowing systemd to have PID 1.
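Enabling it is a documented two-line switch inside the distro, followed by a `wsl --shutdown` from the Windows side so the VM restarts:

    # /etc/wsl.conf
    [boot]
    systemd=true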
The creator of systemd works at Microsoft now[0]. Not sure if that factored into the change. systemd seems pretty heavy for VMs, but apparently they still start fast so there must be ways to optimize it.
WSL has been really amazing for our team. All our devs are on Windows systems and our staging & production are on Linux. Having a system identical to what's going to be in prod has saved us many hours of effort.
If it's similar to my employer, we have requirements from our board to have specific software such as anti-malware and anti-ransomware, among other protections. The IT team can also better manage and support it.
And yes, I think it's a bit silly. But WSL has helped us comply with the business/security requirements while still using (IMO) better dev tools and workflows that Linux provides.
There's no shittier OS to use than a Windows system loaded to the gills with corporate rootkits and their agents. And is it actually more secure than a sanely configured Linux desktop? Those running agents kinda seem like pretty sweet targets for hackers.
Indeed, and I'm not arguing that practically speaking it's more secure. But wiping the disk and running Linux because "I know better than you" isn't going to fly in most businesses.
> we have requirements from our board to have specific software such as anti-malware and anti-ransomware, among other protections
In the past, when I worked at companies like that, you could still stick the IT garbage laptop in a drawer, and then BYOD for development work (which is what I did...)
That way, the board gets the conflict-of-interest software license slush money, and the developers can do their jobs. Win-win.
WSL is great. I use my personal computer for gaming, but I also like using it for side projects, and dual-booting is a pain in the ass. WSL makes it super smooth to have a proper dev env. I can use gvim from Linux just fine, with a bit of configuration I can even share clipboard.
I think it still has a few rough edges,but fortunately I haven't had to deal with those. Also recently they added support for systemd which is awesome. Most packages "just work".
No, the 2 in WSL2 refers to the generation (i.e. the one with VM architecture). This release simply "graduates" WSL2 from pre-release (0.xx days in quasi-semantic versioning). It's still WSL2 and for most intents and purposes the specific version of the package should be irrelevant, especially if one is using the Store-published version. It's not the only moving part anyway, as WSLg and the kernel are updated separately.
I wish they could finally include some mechanism to map running WSL instances to a hostname / static IP. Also, a big difference from running a VMware instance is that you never know when the WSL instance is shut down, and you can't save the "state", that is, hibernate or suspend it.
Honestly, I just want a replacement for VMware that uses fewer resources, so I can run a completely isolated desktop for each project I'm working on. Suspend it, reboot my machine, and then load it up and continue where I left off, all windows and terminal sessions intact. The last bit assumes one can run a (non-ephemeral) graphical desktop of course, which pretty much rules out Docker.
On localhost it's available by default on Windows side. If you'd like to access it by your machine's IP from a different machine on the LAN, you just need to run a single command to add a proxy for it.
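That command is a netsh portproxy rule, run from an elevated prompt on the Windows side; a sketch with a hypothetical port 8080:

    netsh interface portproxy add v4tov4 listenaddress=0.0.0.0 listenport=8080 connectaddress=<WSL-IP> connectport=8080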
where connectaddress is whatever you get from running ifconfig on the WSL side. Edit: You might also use localhost instead of the IP as the connectaddress, as the IP might change. It depends on what bindings your server software handles.
Depending on your machine configuration, you might need to open the port in your firewall as well, à la the sketch below (same hypothetical port):
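(Run from an elevated prompt; a hedged reconstruction of the snippet this comment trails into.)

    netsh advfirewall firewall add rule name="WSL 8080" dir=in action=allow protocol=TCP localport=8080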
I find it's good in that it will hopefully prevent humans from trying to run things in _production_ in WSL. I can easily imagine a frontend dev requesting Windows Server in AWS just to install WSL and run his/her `nodejs` something there, because it feels familiar to how things work on the local laptop.
Honestly if I could find a Windows laptop of comparable quality (and my company gave me admin rights just like they do on my mac...unlikely I know) I'd switch and use WSL/windows for everything.
Last time I tried, there were still some issues, e.g. the 'mailbox' (contrib) test of SBCL didn't succeed (it just hangs). There are known issues with cancellation of threads on Windows, which I think explains why the test doesn't succeed there, but the fact that it doesn't succeed in WSL makes me wonder how well the hypervisor deals with multiple threads...
Being able to do gpu stuff is also pretty awesome too.
I can finally use makefiles in a sane way again.
I also have an issue with Flask having a buggy HTTP server by default (sending files seems to stall with HTTP 206); I still have to check whether it works better on WSL.
How does one upgrade? WSL has been nagging me since this morning that "A new WSL update is available." But when I try to update, it says that I already run the most recent version, which I don't: the reported version is 0.70.5.0.
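If it's the Store build, the updater is wsl.exe itself; a sketch (run from an elevated prompt if the Store is being stubborn):

    # pull the latest Store release, then confirm what's installed
    wsl --update
    wsl --version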
WSL gives me hope that the right people are making engineering decisions at Microsoft. Truly one of the most transformative pieces of tech we've gotten in the last decade, for developers.
I really love the concept of WSL and it is certainly useful but the file system performance was so bad for me that i could never get it to work with projects that had 1000+ files.
I wonder if this means WSL2 will be installed by default at some point in the future. I believe this is the only manual step remaining when installing Docker Desktop.
It's somewhat sad that this is on GitHub, but the source isn't, the release contains a binary and sources that don't match and the installation suggests using the store anyway. It almost seems like this is just posturing on GitHub while just being the same old closed distribution. Should probably have been a GitHub.io website instead of a fake repo.
Damn, if people really are this gullible I should become a con artist. "Well technically I never said that the gold plated gold watch I was selling was made of gold, sooooo I'm just gonna leave. And if you're mad at me, that's on you."
Honestly - not the best project to pick on. VS Code is actually one of the more open projects Microsoft has made.
A good chunk of the best functionality is behind extensions which are often proprietary, but that actually doesn't bother me that much. Especially because many of the proprietary extensions rely on functionality provided by 3rd party servers, where the owner (usually MS) is eating the cost. That seems fair to me.
Again - Microsoft customizes that in the same way the google customizes chromium, but having built both from source... I find it hard to argue it's anything other than open source tooling.
Again - the ecosystem is not (although it certainly can be, depending on the extensions used).
> I'd love to see someone downvoting me provide a compelling response to the full source available here under the MIT license
You are literally pointing to a GitHub repo in order to respond to a complaint about how MS makes GitHub repos just to create the false appearance that their software is FLOSS.
For example, the C/C++ parser also appears to claim to be "MIT" licensed if you go to the repository:
However, it actually is not, as evidenced by the two-line disclaimer at the beginning of that file. The entire thing is absolutely useless without the gigantic 100MB IntelliSense binary. This most definitely does NOT rely on any functionality provided by "3rd party servers". It is entirely offline, 1st-party code that gets surreptitiously installed alongside the FLOSS "package". Likewise for C#, likewise for debugging, likewise for remote development, and a very long etc.
If this is not misleading, I don't know what is. The core editor being free is just a red herring here; most people think of VS Code as an IDE, and are therefore disappointed by the "core editor" functionality offered by VSCodium. In fact, people will routinely ask me "why can't I use VS Code on RISC-V, if it's FLOSS?". These are knowledgeable people, and yet they are misled by this.
I mean - I don't know what you really expect out of open source then.
The source is right fucking there. I have literally built the project on my machine using it.
It's also not at all comparable to the repo for WSL in the top post (which is literally just a couple of text files, a handful of scripts, and a binary release - which I'd certainly agree is not really open source).
I'm not really sure how you can possibly portray having the literal buildable source present as misleading. Is it misleading that I can purchase binary software designed to run on my open source linux distro?
Because that's just as much of an extension of my OS as something like VS Remote Development (or their c++ intellisense) is an extension for Code.
Further - you're talking about tooling here that MS has consistently kept closed source (historically they're very restrictive with their C++ and C# compilers). As an alternative - language support for something like Typescript absolutely is open.
Basically - what is it you want, exactly? Because again - this thing is really about as open source as it can get. Is Microsoft keeping some nice extensions closed? Sure - they're allowed to do that. That doesn't make this any less open source.
> It's also not at all comparable to the repo for WSL in the top post (which is literally just a couple of text files, a handful of scripts, and a binary release - which I'd certainly agree is not really open source).
"The source is right fucking there" too even in that case. Do you see that the point is how much of the product is actually open source versus how much is not? What I have said is that a lot of people overestimate how much of VS Code is open -- and this applies even to yourself: in your initial post you thought the only closed parts were "some which required 3rd party online services", but actually almost everything from the IDE is closed, even stuff like debugging that does not even remotely involve 3rd parties nor online services. You have been mislead by MS yourself, yet you still claim that what they're doing is not misleading.
> I'm not really sure how you can possibly portray having the literal buildable source present as misleading.
Yet you understand how the "WSL" example is misleading too. It's literally the same thing just moved even more to the extreme. They put some "source" which is literally a negligible portion of the binaries they ship. Sure it is buildable, otherwise it wouldn't even qualify as source.
> Is it misleading that I can purchase binary software designed to run on my open source linux distro?
This analogy is absolutely broken. First, the binary software would have to be 1st party. Second, the binary part should be practically larger than the otherwise open-source distro itself. Third, it would have to be required to use this binary software to do almost anything _of value_ with the distro itself. Fourth, the differences between the open source and closed source parts must be diffuse (e.g. shipped as part of the same binary package, automatically downloaded, at zero cost, etc.). And fifth, the distro must still advertise itself as a "open source" product. Then you would have the correct analogy.
What exactly would you say makes me gullible? I never said it was open or free, that was completely fabricated by yourself and others. The repo doesn't even say it's open.
Submitted a year and a half ago and no fix? What a shitshow, recently I've been having this issue with Docker Desktop more and more often, might just move to a normal VM, this is unacceptable.
It's not just a CPU usage issue, either— vmmem leaks and holds onto several times as much memory as the actual guests are using, inevitably chewing up whatever you've set as your maximum quota for WSL even when the guests are only using a few hundred megs of RAM.