This is really Microsoft catching up to the Mac in terms of integration with the open-source ecosystem that, importantly, drives the web.
In the mid-2000s the Mac really took off due to being a good-enough Linux replacement on the command line, while taking care of all the hardware integration and providing a sleek desktop experience. This really helped the Mac catch on among hackers. This system wasn't as open-source as Linux, but it was good enough: Most of us don't want to mess about with the desktop stack. If the stack we care about is open-source and runs across many systems, then we still have the freedom. I find it fairly easy to work on Mac OS X, Linux, and WSL2 -- but not pre-WSL Windows.
The inclusion of a Linux kernel really seems to cause a lot of confusion. No, Microsoft is not about to remove the NT kernel. No, they're not about to supplant Linux. WSL1 did what the guys from Illumos and FreeBSD have also done: since the Linux syscall interface is quite stable, the NT kernel can simply implement these syscalls and handle them appropriately. The problem was that the NT kernel was still sufficiently different that filesystem performance was quite poor, and complex applications like Docker required more kernel features than were provided by just the syscall API.
Including a Linux kernel is really the nuclear option: rather than trying to fit Linux applications in Windows, they're running in their own world and providing integrations into this world. This means you can now run Docker on WSL2 for instance, but it's also complicated: kernels really like to think that they own the machine; for instance, they keep memory around to reuse later. WSL2 dynamically grows the amount of RAM allocated to the VM, which is quite clever, but I don't know how/if they're going to handle freeing memory again -- here the NT kernel really needs to communicate with the Linux kernel.
Anyways, the upshot is that no, Microsoft is not going to take over the world, but perhaps it will be easier to use a Windows laptop which supports all the stupid Enterprise software I need but still have a Linux shell for actual work.
> I find it fairly easy to work on Mac OS X, Linux, and WSL2 -- but not pre-WSL Windows
The thing that annoys me most on Windows is Windows itself. After using Linuxes for almost two decades now, the notion of the OS taking tens of minutes to self update when all I wanted was to quickly reboot it is unbearable.
> I don't know how/if they're going to handle freeing memory again -- here the NT kernel really needs to communicate with the Linux kernel
Usually "free" memory only happens when a Linux machine starts. During normal usage most of the memory is being used to cache things and, when some large app frees its own memory (such as when Slack quits) it'll eventually be used for other things.
Not sure how that would work. Memory doesn't stay unused in Linux for long - whatever programs free will end up being used to cache information that's slow to access.
> The thing that annoys me most on Windows is Windows itself. After using Linuxes for almost two decades now, the notion of the OS taking tens of minutes to self update when all I wanted was to quickly reboot it is unbearable.
This is definitely a problem, especially considering that the shotgun approach of some Windows updates has actually hurt the performance of some machines, even breaking Windows on occasion [0]. However, in fairness I would say that I've spent just as much or more time fixing rolling release issues on Arch/Manjaro, or troubleshooting whatever package was blocking sudo apt upgrade. The worst experience for me was having to reformat after the upgrade from Ubuntu 14.04 crashed.
What annoys me most about Microsoft's overarching paternalistic philosophy is how creepy their telemetry initiatives are. Granted, plenty of companies do this now, for the same profit-driven motivations, but Microsoft goes above and beyond in terms of disabling or even ignoring opt-out options [1]. I've found the Winaero blog [2] to be a good source of tools to stop some of the Windows telemetry shenanigans.
> In fairness I would say that I've spent just as much or more time fixing rolling release issues on Arch/Manjaro.
That doesn't even matter much to me. Obviously, less time spent is better, but what really matters is the timing.
When I have to spend time upgrading my linux box, I have set time aside. I decide that I have a bit of time, and if it turns out to be a bigger job than expected, I can decide to abort and continue some other time.
On Windows the upgrade is always inconvenient, because you have no way of controlling it. Microsoft decides when the best time for you to upgrade is, and once it starts you have no means of punting on it.
Although the telemetry is certainly a problem too.
> Obviously, less time spent is better, but what really matters is the timing... On Windows the upgrade is always inconvenient, because you have no way of controlling it.
True, very good point in favor of user control. Linux has the added benefit of teaching the discipline and foresight to understand that "Hey... maybe I shouldn't run sudo pacman -Syu and reboot until I have some time set aside". Though in recent years I've found Arch to be way more stable with very few updates causing major problems.
> Though in recent years I've found Arch to be way more stable with very few updates causing major problems.
Can confirm. I've had the same Arch install for nearly 6 years now. I've borked it only once in that time, and that one was my fault for not reading the news first. Fixed it in < 5 minutes.
And I've made considerable changes in that time, including moving the entire installation from one hard disk to another.
So after a very long hiatus I decided to try installing Linux on a Raspberry Pi - and wow, the results were amazing. Using my decade-old knowledge of fstab I added some drives to /etc/fstab and rebooted. Oooops. Raspbian would fail to boot with the message "root account locked, couldn't open console".
Fantastic - so if any device listed in fstab is missing, the entire system effectively bricks itself. The only way to fix it was to remove the SD card, put it in another machine, and manually remove the offending entries from the fstab file. Fantastic. I don't think I have ever managed to get Windows to brick itself with something so simple.
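For what it's worth, marking non-essential entries with the nofail mount option avoids exactly this failure mode. A minimal sketch (in Python; the heuristics are just assumptions) of the kind of sanity check I'd like to run before rebooting:

import sys

# Warn about /etc/fstab entries that would block boot if their device goes missing.
# Heuristic: anything that isn't /, /boot*, or swap probably wants "nofail".
with open("/etc/fstab") as f:
    for line in f:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        fields = line.split()
        if len(fields) < 4:
            continue
        device, mountpoint, fstype, options = fields[:4]
        if mountpoint == "/" or mountpoint.startswith("/boot") or fstype == "swap":
            continue
        if "nofail" not in options.split(","):
            print(f"{mountpoint}: no 'nofail' option -- a missing {device} will block boot",
                  file=sys.stderr)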
I think you are being unfairly downvoted. One of the major downsides to Linux is that while you can, and often have to, tinker to make it work, you can break it on many levels. Sometimes it's even hard to know whether it is working properly, until you realize it isn't. Many developers use virtualization for those reasons alone. It might even be a major upside to using WSL, if they do it well.
Our tools are either too powerful or too weak. To each their own. I can cope with a tool that is too powerful, but I can't cope with a tool that is too weak.
I don't think that is the general problem. More so inconsistency. It's almost harder to quit vi than to delete your boot record. I am not sure many tools are even that powerful. Which often means you have to use many different ones, making it even more confusing. Linux lacks abstraction and makes you do things manually as root; it isn't very robust.
I've had a Windows update brick my laptop. It failed to complete an update, would roll back, reboot, then try again. Nothing I did fixed it. I spent almost a week with it in a boot loop, trying various fixes and such before giving up and reformatting. Put Linux on it and added it to my pile of Linux laptops.
With great power comes great responsibility, I guess. I agree it would have been nice for Linux to come with a few more sanity checks by default, like maybe a warning flag for rm -rf /* and dd commands...
I'm just more offended that the default behaviour on an extremely popular distro is failed boot = brick. You don't even get a basic command prompt to fix something; the default behavior is to lock everything down and forbid access, making it impossible to repair the machine from itself. Windows will reboot a few times and then automatically start in safe mode when this happens; you don't need to extract the drive and manually edit some text files before the system can boot again.
> I'm just more offended that the default behaviour on an extremely popular distro is failed boot = brick
If you can't properly start the OS, that's correct behavior. You don't want the OS to start writing things to /var when it can't mount the filesystem that should be mounted to /var.
In any case, the fix with an RPi is easy - pop out the microSD, mount it on another computer and fix whatever is broken (which should be in the logs). If the RPi is the only computer, put in a microSD with a plain install of the OS, mount the other microSD through a USB dongle and fix it the same way you'd do it with a laptop.
FWIW, you're using the word "brick" incorrectly. Bricking is when the only fix is to throw the device away and buy a new one, which clearly is not the case here.
I don't think I am. I've been fixing hardware for years and any device that doesn't switch on is "bricked" - its utility has dropped down to zero, it has turned into a literal brick. Just because you can revive it through some arcane procedure doesn't make it any less bricked to the end user.
You're getting downvoted, but as a 19-year Linux fanboi, I actually agree with you. If the system has booted at least once, then it knows how to boot successfully, i.e. it knows of a set of modules, kernel, initrd image, etc. that worked. As you upgrade, the OS should have a courtesy feature where it doesn't simply delete these files (unless you do the equivalent of issuing a "-rf" force command). For instance, we currently have the kernel, initrd, filesystem, and modules that exist on our fs. All we would need to make it (nearly) brick-proof is to add one more thing: a fallback initrd that has all the previous shit that worked last time. This would save so many users' asses at hardly any cost, since storage is so cheap these days.
I wanted to set up a headless Samba server - so after the reboot I didn't even have any way of knowing why the system wouldn't appear on the network anymore. I had to find the right HDMI adapter to see if the system was even attempting to start, as it was completely dead by all other indications.
And no, I'm sorry but I hate the argument of "you should have known better" - the system first and foremost shouldn't have defaults set up in such a way that not only the SSH server doesn't come up in case of an issue during boot, but the local console is entirely disabled for any access. That's a crazy default.
Send a hotplug notification that a region of memory is going to be disconnected. The kernel will stop making allocations there, and eventually it won't be in use. (This could take a while for a heavy system with long-running processes.) Then you send the hotplug physical disconnect notification. If it fails, you'll need to wait longer. If it succeeds, the host system can reclaim the memory.
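The guest-visible half of that sequence already exists in Linux as sysfs memory blocks; offlining a block is what has to succeed before the host can physically reclaim it. A rough sketch of that step (needs root, and assumes the guest kernel was built with memory hot-remove support):

import os

SYSFS = "/sys/devices/system/memory"

def try_offline(block):
    # Ask the kernel to migrate everything off this memory block.
    state_path = os.path.join(SYSFS, block, "state")
    try:
        with open(state_path, "w") as f:
            f.write("offline")
    except OSError:
        return False  # e.g. EBUSY: pages still in use, retry later
    with open(state_path) as f:
        return f.read().strip() == "offline"

for block in sorted(os.listdir(SYSFS)):
    if block.startswith("memory") and try_offline(block):
        print(f"{block} offlined; the host could now reclaim it")
        break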
I don't see how that is a specific problem. All operating systems have annoyances. But if you are, for example, an engineer, you won't find a competitive range of software on Linux. For someone who needs to do software engineering, a similar situation has long existed when developing on Windows. WSL changes that.
I don't find that annoyance particularly grave or different from other annoyances. I just updated and rebooted my Windows machine I haven't used for a month; it took maybe 5 seconds longer than usual. Even if that happened every day, it isn't something I am going to base my choice of operating system on. This is one of the problems with desktop Linux: caring about some detail (even though it might affect some people a lot) rather than the big picture which affects everyone.
>The thing that annoys me most on Windows is Windows itself. After using Linuxes for almost two decades now, the notion of the OS taking tens of minutes to self update when all I wanted was to quickly reboot it is unbearable.
I find the same thing with electric cars. Eight hours to charge when I've spent two decades filling up in minutes is absurd.
> Most of us don't want to mess about with the desktop stack. If the stack we care about is open-source and runs across many systems, then we still have the freedom.
A tangential point but a relevant one nonetheless. I owe most of my tech skills to the fact that it was hard to get a Linux distribution running on a cheap PC. I had to read up about hardware, learn how to fix installation issues, sometimes recompile the kernel with different options (and hence study them) and a number of other things, simply because it didn't "just work". It gave me an intimate comfort with Linux that I just don't have with any other system (including MacOS, which I used for about 2.5 years).
I completely get the notion that a professional shouldn't need to meddle with system-level quirks to get a productive environment and a desktop, and thankfully modern Linux is more or less there. I haven't had any major trouble with it in a long time. However, during college and the early years, having a system that demands some amount of work to get it working has, at least for me, been crucial in honing my skills. I wouldn't change that even if I could. I'd vigorously defend it.
> rather than trying to fit Linux applications in Windows, they're running in their own world and providing integrations into this world.
It is effectively their way to have an "integrated Linux virtual machine" inside of Windows. WSL1 on Windows was roughly analogous (but not enough for me) to Wine on Linux. WSL2 is in some ways "more integrated" than a typical VM would be, but otherwise still similar to one.
Not enough: my greatest disappointment up to now is that the "VM-like" behavior was present even in WSL1; if I understood correctly, the files "inside" were more special than files inside Cygwin folders would be.
Another disappointment: it was not easy to install WSL on previous Windows 10 versions. Specifically, it was "in the app store" but you weren't supposed to just "click to install" -- that resulted in something weird. There were recipes for what to do from the command line, with hoops to jump through, so in the end... I installed Cygwin, which I knew would work. Does anybody know, has WSL gotten better now?
Which was especially bad: what's the point in having it "there" if you aren't allowed to access the files inside.
There were also problems with making a backup of it, etc. In those respects, using a real VM (e.g. VirtualBox) made the situation much clearer.
So from my perspective it was both worse than Cygwin and worse (or at least not better) than real VMs... I would really like to read if/how WSL improved.
I had a similar experience when I tried to "just install an ssh server". The things I expected to "just work" didn't. Again, in the end I just used Cygwin.
I'm disappointed by the "VM-like" behavior too, so I don't expect they'll be totally replaced by WSL.
But this is similar in the other direction. Many applications just rely on such "VM-like" behavior, even though they can be (re)built for Win32 and/or do not need a real VM to function. Note that WSL1 can already do something more, e.g. hosting programs as X clients working with VcXsrv. I don't think WSL2 will necessarily be better than WSL1 in many such cases. (In particular, when I have to reserve VT-x for some other hypervisors, I have no other choice.)
> I installed Cygwin, which I knew would work. Does anybody know, has WSL gotten better now?
I really like Cygwin. With the MinTTY terminal, it was very nice. Slower than native Linux (or Windows), but enough of an environment to make me happy and productive.
I think it's a little sad that they are going this route (wrapping a running Linux kernel) rather than working to improve Windows disk performance and continuing to improve their WSL 1.x product. It looks like they are missing out on an opportunity to improve the NT kernel for what seems like short-term gain.
From what I understand (which is very, very little!), fundamental NT vs Linux architecture differences wrt their respective file systems prevent much more performance improvement in WSL 1.x.
They do, but NT file performance over many files scales abysmally. The two major reasons are a legacy of bad decisions: deleting files is forbidden if there are open write handles, and filter drivers are allowed to be placed between other drivers in the file I/O stack.
There's not much Windows can do about this, which causes tons of issues with e.g. git clone. Erick Smith is ludicrously smart; he just couldn't squeeze better FS perf out of WSL1 due to these issues.
I wouldn't say bad decisions so much as different decisions. Given that WSL would abstract out file system access anyway, why didn't they just bridge WSL's FS similarly to how they did with WSL2? Though I'm not sure there's much advantage left at that point.
> I find it fairly easy to work on Mac OS X, Linux, and WSL2 -- but not pre-WSL Windows.
Cygwin really helped me with the Windows part of that for many years. I switched to a Powerbook about a year before the Intel switch & haven't looked back since. Linux has never been a good enough desktop all-rounder to make me switch to it as a desktop OS. Maybe I should take another look.
I'm planning a desktop upgrade (now sometime later this year), eyeing an r7-3950x or next gen threadripper. I'll probably go Linux as my primary desktop again, and VM for windows and macos as needed. I feel it's there enough at this point. Probably been 5-6 years since I've tried it as my primary OS. Though I did keep my grandmother on Ubuntu for about a decade before she passed (her old game ran under wine, and no worries about windows viruses/scams).
The Linux kernel can free memory; the balloon driver included in all kernels allows it to dynamically shrink and grow memory with a variety of urgency levels, and the kernel can (optionally) shrink memory itself once it's no longer needed, based on various parameters. It's fairly reliable and works well.
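You can watch this from inside the guest; whether MemTotal or just MemAvailable moves depends on whether the host is hot-adding/removing blocks or inflating a balloon. A tiny sketch that just polls /proc/meminfo:

import time

def meminfo(keys):
    # Return the requested /proc/meminfo counters, in kB.
    values = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, rest = line.split(":", 1)
            if key in keys:
                values[key] = int(rest.split()[0])
    return values

prev = meminfo({"MemTotal", "MemAvailable"})
while True:
    time.sleep(5)
    cur = meminfo({"MemTotal", "MemAvailable"})
    for k, v in cur.items():
        if v != prev[k]:
            print(f"{k}: {prev[k]} kB -> {v} kB")
    prev = cur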
>In the mid-2000s the Mac really took off due to being a good-enough Linux replacement on the command line, while taking care of all the hardware integration and providing a sleek desktop experience. This really helped the Mac catch on among hackers. This system wasn't as open-source as Linux, but it was good enough: Most of us don't want to mess about with the desktop stack.
Who are you to claim this?
The closed-off nature of Apple is very dangerous to trust, and I don't know any 'hackers' who are willing to risk unchecked/vulnerable OSes.
I have no idea if Apple has a backdoor for the FBI. We do know that on Linux there is no hidden backdoor.
As long as you assume that there are no unpublished exploits in Linux. The Heartbleed bug sat in open-source code for about two years before it was found.
Well, that is pretty nice. WSL gets that much closer to being exactly like Linux.
What I really appreciate about WSL is that you get the accessibility of a bunch of OSS projects and a machine which has legit drivers for all of its component bits. What this means to me is that searching for a "linux laptop" won't be a chore: if it runs the latest Windows, it will run Linux. And I can do development on the Linux side while communicating with senior management in PowerPoint on the Windows side :-). If they come up with a better/credible USB allocation scheme it will be icing on the cake.
I also think they should buy one of the X server vendors and bundle it (comments about X.org from Red Hat notwithstanding). I use Xwin32 on my laptop with WSL and it's pretty seamless in terms of things that want to pop up a GUI.
There are a few annoyances with WSL2. For me the most important are:
1. You can't connect to a port listening on WSL localhost like in WSL1, you have to figure out the WSL IP address and use that.
2. From WSL you can't connect to a Windows TCP port on localhost; you must figure out the Windows IP (cat /etc/resolv.conf) and use that (see the sketch after this list).
3. The WSL remote interpreter on PyCharm is not working anymore. The suggested workaround is "use SSH remote interpreter" but given #1 you can't connect to localhost (and the WSL IP changes every time you restart it)
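For #2, a minimal sketch of pulling the Windows host's address out of /etc/resolv.conf from inside WSL2 (the nameserver entry there points back at the host):

# Inside WSL2, the DNS server listed in /etc/resolv.conf is the Windows host.
with open("/etc/resolv.conf") as f:
    for line in f:
        if line.startswith("nameserver"):
            windows_host_ip = line.split()[1]
            print(windows_host_ip)
            break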
In order to use the SSH remote interpreter in my favorite IDE I'm using this script (on Windows):
import subprocess
import re

# Ask WSL for its network configuration and pull out the first IPv4 address.
output = subprocess.check_output('wsl.exe ifconfig')
match = re.search(r"\sinet\s+(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})\s", output.decode('utf-8'))
if match:
    ip = match.group(1)
    with open(r"C:\Windows\System32\drivers\etc\hosts") as i:
        content = i.read()
    # Rewrite the existing "wsl" hosts entry with the current WSL IP
    # (flags must be passed by keyword; the fourth positional argument of re.sub is count).
    new_content = re.sub(r"(172\.\d+\.\d+\.\d+)\s+wsl", f"{ip} wsl", content, flags=re.M)
    if new_content != content:
        try:
            with open(r"C:\Windows\System32\drivers\etc\hosts", "w") as o:
                o.write(new_content)
        except PermissionError:
            print(new_content)
First I use the Windows Task Scheduler to start SSH (wsl service ssh start) on logon. Then this script is executed on logon with elevated privileges to update the hosts file. Instead of "localhost", I use "wsl" and it works well enough for me while I wait for the fixes.
Despite the above, the combo Windows 10 + WSL2 is the best "Linux Desktop" for me.
I'll add the separate filesystems problem. I generally need to access files from both windows & linux. WSL 2 may have made file access faster within the vhd, but it seems to be slower to use those files from Windows, or impossible with tools that don't yet support the network file paths provided by the 9p server. VS Code's solution with the remote extension is really nice, but every Win tool would have to do something similar to make the experience truly seamless.
For the record, I was already using SFTP NetDrive for my remote host (mapping an SFTP target to a local drive on Windows), and I just started using that to make sure all my Windows tools can access it just fine.
Doesn't solve the speed issue, but at least you're never blocked by software that only works with a local/non-network path.
That's a pretty nice idea for the net paths issue. I'll borrow it, thanks.
I'd guess all significant Windows developer tools will add support for the wsl paths over time. VS Code obviously leads the pack with its remote extensions. I hope Jetbrains will do something similar with IntelliJ Idea - the piecemeal approach doesn't work as well IMO.
I'll be interested to see what Microsoft can do about the speed of access to the windows file system. It's painfully slow right now.
It (like most every OS) used the BSD TCP/IP stack because it worked and didn't have license encumbrances. They switched to their own stack in Vista though.
Ahhh, the eternal dilemma they face. Either recreate the wheel and get complaints about that, or re-use the wheel and have people "crack up" about you doing that.
The irony is that "etc" makes no sense whatsoever in Windows, and particularly in that place. It probably came about because it was easier to have it there than to patch whatever bit of code they borrowed, and it's now likely vestigial, but probably there is too much stuff relying on it to change things. The irony is in an OS trying extremely hard not to be a unix ending up with unixy folder names out of laziness.
I know you probably don't want to give up your editor, but FWIW if you can find an acceptable collection of bindings and python extensions for VSCode the remote development extensions work amazingly well with WSL. I was able to set up a full haskell IDE using haskero in under ten minutes with zero configuration.
A solution to the IP address problem might be to write the IP address to the Linux file system when Linux boots, then read it from Windows as part of your SSH launch.
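A minimal sketch of that idea, assuming an Ubuntu distro and some startup hook inside WSL (the file path and distro name are just placeholders):

import subprocess

# Run inside WSL at startup: record this instance's current IP address.
ip = subprocess.check_output(["hostname", "-I"]).decode().split()[0]
with open("/tmp/wsl_ip.txt", "w") as f:
    f.write(ip)

# The Windows-side SSH launcher can then read it back over the \\wsl$ share, e.g.:
#   ip = open(r"\\wsl$\Ubuntu\tmp\wsl_ip.txt").read().strip()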
For local development in WSL my solution is to simply run a ssh server from WSL and connect to that from Windows, forwarding any ports I need. I've found the Chrome SSH app [0] to work really well (aside from needing to run Chrome), as it can forward ports and supports tmux w/ mouse control and copy paste nicely. But any decent ssh client will work.
You don’t need any of that if you use VS Code’s “remote development” extensions. It has a few issues with agent forwarding and suchlike, but is getting there.
So, you buy Windows to get better hardware compatibility for your Linux system on top of whatever Windows ecosystem brings you. I did something like that with non-Windows VM's. Although I couldn't find the link, there was an academic prototype you might have liked that used virtualization to reuse Windows drivers in other OS's or VM's.
Seems like there could be a market for that where it's combined with recent enhancements in efficiency targeted toward people like you that want more compatibility. What do you think?
In general I think it is an interesting idea. There has always been an economic issue with hardware manufacturers who were unwilling to expose too much information in an open source driver, and security issues of running a binary blob at ring level 0 in the kernel.
Microsoft puts itself in the middle of that by certifying a driver through a vendor sign up / certification process, and since it doesn't require the vendor to release source, the vendor can keep that information secret. As a result you have more drivers for more hardware created under the "Made for Windows" driver development plan than you do from enthusiastic reverse engineering efforts in the Linux camp.
To the extent that you can capture the value of Microsoft's oversight without having to pay the 'tax' of a Microsoft OS, you add value for the end users.
There's another potential advantage. Thanks to SLAM, the Windows drivers have some strong verification against that interface. If one implements it well, then they get both benefits of Windows hardware in general and extra robustness of their driver verification. As in, drivers might be better than Linux drivers without such interface verification.
The drawback is that whatever sits in the middle, interfacing the Windows driver interface to the non-Windows VMs, might introduce bugs. It must be correct and compatible.
Well, a mix of Windows and Linux drivers, where we use Linux whenever it works well enough and Windows when it doesn't. Might also be able to tie that in with Windows Embedded to trim as much fat out of the driver VM as possible.
That has been possible for quite some years: just use any VM solution locally and SSH into it. At least VMware has decent file system performance, where VirtualBox doesn't.
So how do the drivers work? Is Microsoft implementing and integrating cross platform drivers? Have they somehow made Linux a microkernel that uses windows drivers through a shim?
I wish Microsoft would release their own linux-based OS with a compatibility layer to let me run Windows apps. I'm not overly impressed with the direction Apple is going but I really enjoy a *nix-native environment too much to go back to vanilla Windows. It would change the math a lot for me if it was full-blown linux under the hood.
Disclosure: I work at Microsoft but not on Windows (and my work machine is a MacBook Pro FWIW)
WSL 2 is a lot like that -- it's a VM in the sense that it uses Hyper-V, but it's super lightweight. And in fact, when WSL 2 is enabled, both Windows AND Linux are running in the same type of VM (separate, obviously), so it's near-native performance in a lot of ways. There are still some I/O issues that the team is actively working on, but early benchmarks are quite promising.
I'm nervous about WSL 2. I use WSL at home quite a bit and I'm on the Fast Insider ring. My PC hasn't been able to boot Win10 for over a year with virtualization support enabled in my BIOS. No errors or anything, just hangs at boot. I've left feedback in the hub, but crickets. Don't know what the problem is. It used to work, but at some point an update broke it. Not bleeding-edge hardware, either. About a 4 to 5 year old PC. Intel i7 CPU, think it's an X79 chipset (not at the computer, so can't verify). It's just weird...
I had a similar problem. With Hyper-V my PC BSODed like every other day. I did some investigation on the dumps and I think the problem was in the nvidia driver (not entirely sure, though, but the stacktraces were from that driver). I don't really know whether it was ever resolved, because I decided not to use Hyper-V ever again; VirtualBox is much better.
I never got BSODs, just hangs at start, so I don't have any dumps to look at. I'm also using nVidia (1080 GTX) and it has probably been a few months since I updated drivers, but I've been having the issue I think since at least the 1803 update.
I haven't been on the PC for several weeks, so I don't have a link to my feedback. I'll post it next time I'm on.
This is a total aside, but I really don't understand voting on comments. I've no idea what it could mean - levels of agreement, interest, offensiveness (or scores of other possibilities)? It seems entirely meaningless, therefore pointless, to me.
This is basically it. I upvote things I want to go up and downvote things I want to go down. Off-topic meta discussions about votes should always go down.
Sure, I know what it does, I'm just not sure why anyone would use it. I don't normally want a comment to go up or down (other than the odd egregiously abusive or aggressive item).
I may disagree with a comment, but that doesn't remotely generate in me an opinion re where it should sit in the tree. I don't think other people should or are likely to find a comment agreeable/interesting/relevant/insightful just because I do. Different strokes I guess.
Fair enough. I honestly suspect your signal there is more than lost amongst the noise of readers reflexively voting for what they happen to agree with. Anyway that's probably enough of this particular dead horse.
A comment more near the top will be seen more and therefore get more discussion, so if you want that, vote it up. If you want it to get less discussion, vote it down.
It’s happening at the hypervisor level, and in this case it’s Type 1, so I guess you could but I don’t know how. My ops colleagues are hanging their heads at me right now.
> There are still some I/O issues that the team is actively working on, but early benchmarks are quite promising.
=/
Hoping to run my R code on WSL 2. Glad I got the answer. I wish you guys well on I/O issues.
Is there any performance hit for WSL2 in regard to process forking? I recall that Windows and Unix deal with processes differently (my R code uses the mclapply parallel package for Unix). I'm curious if there will be any performance hit that stands out versus running parallel/concurrency code on a Unix system.
On Reddit, they had a benchmark for WSL1 vs WSL2 and it showed that the single-core perf of WSL2 was sometimes better than Windows (and even bare-metal Linux - how that happened I am not sure). The multi-core was not as good, but still better than WSL1.
I can't find that exact thread right now, but here's another one comparing WSL with Windows:
I bet the Windows hardware drivers for initialising caches, memory buses, power states, cooling etc. set certain systems up for slightly better performance.
Just out of curiosity: did they let you use the MacBook Pro without problems? Can you guys @ Microsoft decide to use whatever OS is more suitable for what you do?
Microsoft has had Macs in use for a very long time, even pre-dating the famous Gates investment. There are some infamous pics of delivery vans unloading tons of Apple boxes in Redmond. I would expect them to be even more liberal now under Nadella, but to be honest, they probably get great prices on Surfaces, which are very nice machines now.
Just saw this. The answer is it depends on the team, but unless there is a hardware/software reason tied to your job for a specific platform, people can choose what they want.
Many of my colleagues use macOS, some use Windows, some Linux. I have a work-issued Mac and a work-issued Surface Book, because it’s important to test compatibility, especially when it comes to CLI stuff, across different platforms. I have Linux VMs and docker containers and WSL configured too.
I really hope this results in PCIe passthrough on Hyper-V becoming reasonable; it was an absolute nightmare to try to configure and I could never get it to work for more than a single VM boot.
In practice it's not really working though. At least not yet. There's a very long thread on virtualbox forums with people trying to get it to work but failing.
I personally have to reboot when I need to use docker or virtualbox. Very annoying.
Yep, this is currently confirmed broken on Windows 10 1903 (Windows Hypervisor Platform extended user-mode APIs). Affects VirtualBox, qemu, etc. We may see a patch, but it's up in the air right now. (<closes eyes and sighs>, I know.)
Hyper-V supports nested virtualization (only Hyper-V nested inside Hyper-V, though), and seeing as VirtualBox supports a Hyper-V virtualization backend, the solution may be to just find and fix all the bugs in that backend code when running under Hyper-V.
This is probably an unpopular opinion but.. there is a lot of value in having multiple competing kernels. In the same vein as browser engines, any monoculture is a bad idea. Not to mention there's been a lot of good work done on the NT kernel itself. Under the hood it's pretty advanced, even with all the user-space cruft on top.
Sadly, much like with the web, things have evolved such that it is virtually impossible to create a competitor due to all the accumulated complexity. The driver problem on the PC platform is pretty much insurmountable at this point, so any competitor would have to begin life, and gain massively in popularity somehow, on a much less flexible hardware platform.
It's a shame and makes me sad for modern computing.
I have thought the same, but maybe TODAY it is not as hard as before:
1- We have 4/5 mainstream OSes today: Win/OSX/iOS/Android + Linux. In the case of iOS/Android, some users already switch between the two.
2- If it has apps, that is almost all people need. Drivers are not THAT significant IMHO.
3- Chromebook is a thing, despite it sounding stupid: an OS that is just a browser.
The web is the 6th truly mainstream "OS".
4- What do people need in drivers? USB (and with USB-C), HDMI, Bluetooth (maybe?) + WiFi. With this alone you get even better than an iPad: monitors, wireless mice, keyboards, external storage, and by extension you cover the rest.
5- Printers? As iOS shows, they can work fully wirelessly with no drivers on the device.
So I think a true desktop OS could launch with less worry about legacy hardware.
Where the problem IS is the apps. It needs to be better than a Chromebook, have very good first-party apps that cover the basics and, hopefully, a VM layer so you could run Linux and cover the rest.
> What do people need in drivers? USB (and with USB-C), HDMI, Bluetooth (maybe?)
Do you honestly think there is a single USB-C driver that covers literally everything that can connect over USB? And anything with an HDMI port uses the same driver!?
No, but I suspect the array of devices is narrower than you'd think?
I think that nailing the software (app + dev experience) is a bigger challenge and priority than worrying about the (external) hardware.
P.S.: One thing I forgot to articulate is the possibility of leveraging Linux as a bridge for drivers (possible?), so the new OS ships a smallish Linux just to get compatibility.
Then you're just making a Linux distribution, or something like Android I suppose. You may as well say extending Chromium would solve the web monoculture issue. That isn't how it works.
> I think that nailing the software (app + dev experience) is a bigger challenge and priority than worrying about the (external) hardware.
Considering that the alternative to “OS in a browser” these days seems to be “every app is a (different) browser” (Electron), the Chromebook concept is not that stupid.
It’s sad that commercial realities are pushing people to throw away 30 years of progress on desktop UIs, but here we are.
I wouldn't be so cynical. As with all things, tech comes and goes in cycles. I think with the limit to Moore's law rapidly approaching for current silicon, there's a good chance that efficiency & simplicity will become front and centre again at some point.
Not sure if this will mean new architectures, operating systems & better tools.. but one can dream!
It is difficult to continue to hope for that while watching people cheer on additional layers of complexity like the garbage fire that is the modern web.
I did. I abused the history API on my website when it was new, and then they went and fixed it. As it turns out, sometimes people care more about the marshmallows than they do about the fire.
Less flippant answer: To some extent, I feel it is on me to try to access the good things on the Internet, figure out how to interact with it reasonably safely and satisfyingly, and not get overly bent out of shape about the inevitable problems.
All things have their downsides. There are no perfect things that are nothing but goodness and light.
As they say: Sunshine all the time makes a desert.
Dozens and dozens of UNIX kernels (with Fuchsia being the exception I believe..? Not sure). Yes I know kernels have far outgrown POSIX, but it's still a form of monoculture. Diversity in all dimensions matters imo.
Unix kernels could use more cohesion, not less, in terms of API, even if that API is implemented mostly in userspace.
View the kernel API like HTML. Browser diversity is good, but they need to be implementing the same interface if the smaller players want to support any hardware that works with Linux
Windows Subsystem for Linux is really close to that, though. If you have not tried it yet, you can download a Windows VM for developers from Microsoft that includes WSL and other dev goodies. It's mostly for evaluation but good enough to get started with.
Iirc, even the Win32 APIs sit above the subsystem layer. There was the OS/2 subsystem back in the day, Win32, and one more I can’t remember from my training 11 years ago.
FWIW, I have given major conference talks years back on my linux laptop with PowerPoint running via Crossover Office (wine), mostly due to my advisor's insistence on using PowerPoint. So that has already been a thing for a long while.
My issue is that with nvidia laptops the HDMI connection almost never works as expected when I have CUDA installed. I've tried proprietary and nouveau drivers. I've driven my monitor with Intel drivers. I've tried so many things, but across multiple laptops I've just never gotten this to work. I just can't have CUDA and use my laptop for presentations. So I always keep a Windows partition just for presentations. If you do have a suggestion I'd love to hear it.
I've never used crossover. Thanks, I'll look into it.
Well, each WSL1 process is a Windows process. If you’re running vim in bash, for example, you’ll see “vim” and “bash” as separate processes in Task Manager. WSL is all about ELF binaries natively, just as Win32 runs PE executables natively.
"windows process" is misleading, WSL uses a separate "linux subsystem" with its own syscalls, process/thread data structures, etc. It also uses its own file system (albeit with all the state stored in NTFS and affected by filter drivers, which is why its I/O is slow)
True, I thought that a few hours after I wrote my last comment but couldn't update it by that point and didn't respond to it. Windows NT is the kernel, Win32 is the Windows subsystem that everyone's used to (including kernel32.dll), and WSL is a subsystem that runs ELF binaries and implements the Linux syscalls. It's not yet obvious to me how much WSL2 is actually even a Windows subsystem, given that it runs Linux as an actually separate OS.
I don't think it's a VM like Docker, no. WSL processes show up in the Task Manager. This is more like Wine, but Line... in a sense. But a little trickier.
I think the OP is probably referring to the fact that running Docker on Windows and macOS is accomplished by running a Linux VM which the Docker containers run in. Not that Docker containers are VMs.
In practice, the vast majority of docker containers are Linux containers, so most uses of docker for windows need a Linux kernel to do anything useful.
As such, the 'native' docker for windows you linked is actually managing a Linux VM (Hyper-V or VBox) for you; the docker server runs on that VM, and the native docker Windows binary simply connects to that server.
They also do a lot of tricks to connect networking and file shares with that VM. The file shares part is almost completely non-working in my experience, with scary comments in the bugs like 'doesn't work if your windows password contains certain characters' (it sets up CIFS shares between the Linux VM and some parts of your Windows file system).
So, not sure what docker for Mac looks like, but Docker For Windows is only 'native' in a very hand-wavy sort of way, unless (maybe) if you're running Windows docker containers.
Docker desktop runs a Hyper-V VM. Prior to that, Docker used a Virtualbox VM on Windows. The current plan seems to be to integrate with WSL2 once that ships [1].
Why? This is a common ask, but what exactly makes a Linux kernel superior to Windows?
There seems to be an assumption that Linux is the ultimate in OSes. But most of the value comes from the large ecosystem and momentum, rather than specific technical advantages.
The biggest practical advantage I have found is that Linux has dramatically better filesystem I/O performance. Like, a C++ project that builds in 20 seconds on Linux takes several minutes to build on the same hardware in Windows.
I don't think this is about technical superiority of one kernel vs. the other. It is about the software environment. Linux is the most actively used Unix derivative at this time. So by running Linux you get a whole software stack. Parts of this stack have been ported to Windows too (e.g. via Cygwin), but you get the most coherent user experience by just installing a Linux distribution, as WSL allows you to.
In the post-SGI, pre-Second-Jobsian-Revolution world of the mid-late 90s, there was a real malaise in workstation computing. The systems were dumbed down, the hardware was homogenized and commoditized into joyless beige, an ocean of "Try AOL Free for 100 days" CDs jostled and slushed amidst the flickering fluorescence of the cube farm, where CRTs and overhead lights would never quite hum along at the same frequency, zapping the air with an eerie, sanitized sick-static. This was the future of computing.
This gloomy world of palpable beige-yellow-purple is the world of Microsoft Windows. As a product, it caters exclusively to that clientele, in that environment. To be blunt, above all other design considerations, Windows was built to accommodate the lackluster office drone -- or, more precisely, their bean-counting overlords who wanted to save on user workstations.
Despite attempts to undo this, the design has proven impossible to build upon. How many legitimate improvements in the cutting edge of either academic or industrial compsci have been built on Windows technology, for any reason other than "MS signs my paycheck so I'm contriving this to appear like I'm happy to be using PowerShell"?
MS was able to paper over the deficiencies for a while through sheer force (pushing .NET as a semi-unified computing environment, Ballmer screaming "DEVELOPERS!", etc.), but Windows was in no way prepared for the revolution brought by virtualization in the mid-aughts, and any shred of a hope that MS would somehow recover was utterly and entirely obliterated by the widespread proliferation of containerization. Good luck getting a real version of that working on Windows.
I've said it before and I'll say it again: in about 5 years, I wouldn't be surprised at all to see the new version of "Windows" plumbed end-to-end atop a nix-like kernel, and driven by a hybrid userland comprised of a spit-polished copy-paste from WINE + a grab bag of snippets from the proprietary MS-internal Win32.
The flexibility of the nix model is the indisputable, indomitable winner here, and it stunned and killed the Goliath in its tracks. This is the ultimate surrender to the open-source model. Windows's top-down, "report to your cube by 8:26am sharp and don't question the men in the fancy suits" approach to computing resulted in a rigid operating system that was unable to keep pace with the technological demands of the many pantheons of loyal corporate drones that comprised their user base. Even strictly-Microsoft shops are forced to give developers MacBooks now, or they're unable to get anyone competent to sign on.
It really couldn't get more poetic than MS desperately integrating Linux into their OS so that people won't switch to the more flexible nix-like systems for their workstations, although I anxiously await MS accidentally linking in a GPL module and thus becoming required to disclose the Windows source code. :)
Meanwhile, per usual, nix-like OSes have been humming for 50-ish years now and show no signs of slowing down. IMO, that's all the objective, borne-out proof one needs to say that for all practical purposes, Windows couldn't stand the test of time.
Please note that this is not about Linux per se, but the overarching design theory and development processes in use in major operating systems, and the massive success it represents for open-source, research-driven systems.
The chapter on "The Windows Way" has been written, and whatever its benefits may be in theory, they don't bear out as sustainable in practice.
It'd be fun to do a macro-scale timeline comparison to Sun. These invincible tech behemoths that cater to their narrow niche become rotting, hollowed-out fossils as the free systems continue to develop and evolve the cutting edge. Microsoft is right on time here, and I fully expect to see them struggle through the next decade or so until Larry Ellison finally puts them out of their misery.
I think I've read this free-beer explanation of yours in about 100 distinct comments. In most of them you explain why some particular programming language that you hate with a passion came to develop mindshare while it was so unlikely considering what a bad design it is.
For some balance, you could consider how Windows got preinstalled on most computers since the 90s and what huge dominance it had in the consumer and also development domains (also thanks to some evil capitalist practices, some might say!). There are free versions of Visual Studio and other IDEs and dev tooling for various programming languages on Windows as well. Also, consider that I can buy Windows 10 Professional for $8.90 with 5 seconds of googling.
Absolutely.
At this point, there isn't much value for Microsoft to hang on to the legacy of Windows - with most of their revenue coming from Azure/Cloud and developer products, it might be time to say goodbye to that legacy and make it a subsystem on a modern Linux-based system.
That might also be a way for them to become more relevant in mobile devices and platforms.
The line item in Microsoft's last annual report which encompasses Windows is "More Personal Computing" and shows $10 billion of operating income on $42 billion of revenue.
Windows revenue grew $925 million. Not to $925 million. By $925 million.
> At this point, there isn't much value for Microsoft to hang on to the legacy of Windows
I would say this move by Microsoft is the exact opposite, because it maintains Windows at the centre of the picture.
Microsoft is basically saying, look you can have all your Linux goodness and best of all you can access all that goodness using your current Windows desktop.
Using WSL2, the lines between Windows/Linux are very blurred.
I'm convinced there's nothing stopping Microsoft from making their own Linux based OS by porting their desktop and GUI models on top of a Linux kernel + filesystem + drivers, not that far from what Apple did with OSX.
Technically it'd be an interesting challenge, but I fear the day we'll also see Linux software asking to be run on that particular Linux for "better compatibility", because that is what the developers used. In that case, thanks but no thanks.
I wonder whether they'll experiment with Windows compatibility in the WSL2 layer as an Extend[1] phase, to possibly utilize it in a future distro, creating an Extinguish[1] situation.
... except that (a) WSLv2's kernel will be running in a VM, and (b) macOS's kernel is much more like Linux's -- fork/exec, networking, filesystem, memory, TTY.
I see WSLv2 as more of a concession that a *nix-style interface is what developers want, and that the NT kernel can't deliver that.
NT supports interfaces for supporting a fork model efficiently, it's just not well documented. In fact, you can write an efficient, Windows-native POSIX environment entirely from user space using existing APIs: https://midipix.org/
The only thing that NT really lacks is a Unix-style TTY subsystem. The interplay in Unix between TTYs, process control, and signals is complex and deep, and none of it lends itself toward abstractions--it's all quintessentially Unix. For almost everything else, NT provides interfaces that can be used to efficiently implement Unix semantics.
Microsoft isn't interested in supporting a modern POSIX environment natively as that would merely accelerate the migration of the software ecosystem away from Windows APIs. IMO, it's why they killed WSL1. WSL1 was de facto a modern, native Unix personality on NT. They may not have managed 100% Linux compatibility, but they could've simply relabeled it Windix or something, used a small team to round things out, and received UNIX V7 certification in a jiffy.
WSL is about managing the shift toward a Linux-based cloud world, capturing as much of that business as they can without unnecessarily risking their existing footprint. WSL2 neatly allows them to maintain Windows API lock-in while also providing substantial value for those needing to develop Linux-based software. WSL1 fell short on both accounts, though standing alone I think it was much cooler technology.
> They may not have managed 100% Linux compatibility, but they could've simply relabeled it Windix or something
My take on it was that they were never able to implement all the syscalls (WSL just couldn’t reliably run some production server software), and filesystem calls were horribly slow (try npm install on a reasonably large JS project if you have any doubt).
You may be right — but I’d still like to dream that one day Microsoft will release a true Linux (or OpenSolaris, BSD, or whatever!) OS of their own, which their Office apps will fully support. That’ll be the day I weigh another option besides macOS.
Accessing Windows' NTFS volume from WSL2 Linux is (and will always be) even slower. I have no doubt they could've substantially improved file access, but management pulled the plug. WSL1 was basically just a proof of concept, after all.
Also, don't forget that Windows and NTFS have never been known for performance. Expecting file access to be as fast as ext4 from a Linux kernel was just the wrong set of expectations. If you want Linux performance you need to use Linux, of course, accessing its own block device directly; and people demanding that are going to get it with WSL2. The cost is that integration will be worse, both in terms of ease of use and performance. AFAIU WSL2 is using 9P now and in the future probably virtio-fs (https://virtio-fs.gitlab.io/) or something similar.
People keep saying how lightweight the WSL2 VM is. Well, that's how all modern VM architectures are now. If you launch a vanilla Linux, FreeBSD, or OpenBSD kernel inside Linux KVM, FreeBSD bhyve, or OpenBSD VMM there's very little hardware emulation, if any[1]; they all have virtio drivers for block storage, network, balloon paging (equivalent of malloc/free), serial console, etc; for both host and guest modes.
[1] I don't think OpenBSD VMM supports any hardware emulation.
> The only thing that NT really lacks is a Unix-style TTY subsystem.
The long, laborious ConHost / NT Console subsystem rewrite/refactor over the last few years has moved Windows a lot closer to the Unix PTY system for internal plumbing (while not breaking the NT Object side of things). The fancy new Microsoft Terminal in alpha testing now takes advantage of that.
> WSL1 fell short on both accounts, though standing alone I think it was much cooler technology.
I feel like this is an interesting case where version numbers hurt more than help, because it sounds like WSL1 and WSL2 rather than a pure 2 > 1 are going to be side-by-side for some time now (moving distros between WSL1 and WSL2 is supposedly a simple powershell operation and can be done in either direction) and hints from Microsoft that there may even still be additional investments in WSL1 depending on user use cases and interest.
I think it's good for the NT Kernel to maintain subsystem diversity, and so I do hope that WSL1 continues to see interesting uses, even if a lot of developers will prefer WSL2 more for day-to-day performance.
Just thinking out loud, because that is just very improbable. But effort-wise, implementing the Win32 layer over Linux is a harder thing than what they did with WSL1, surely, right? But at the same time, their WSL team was, what, a few guys? 10, 20? But the Linux kernel source is open; maybe that's the difference.
My brain threw an exception when I read this title. Awesome that this is a reality though. Now that we have Linux kernels running and accessible on both Windows and Chromebooks, I feel like we can finally say: 2019 actually is the year of Linux on the desktop. It's not a meme anymore, it's finally just a true statement.
Edit: To those questioning if this really counts as Linux on the Desktop: yes, I understand what you're saying. I too am a bit of a blowhard about Linux and prefer to use the real deal, not some kernel-on-a-kernel BS. But I think this is still a huge deal in terms of the accessibility of Linux; think of how many young programmers will be able to dip their toes in the Linux pool on their gaming rigs without having to go through an Ubuntu install process (I write, thinking in the back of my mind that I had to go through that, and so maybe they should too). Yes, there is something of an ideological compromise here, because the Linux they're running is sitting on a binary blob and not truly free... but Linux still exists in its gloriously free form for them to use when they decide that's important to them. I see no harm in them reaping some of the other benefits of Linux without the freedom; it will hopefully become an on-ramp to more Linux adopters and programmers.
Despite the wording, I don't think that is the original definition? The idea is to make Linux viable as a desktop operating system, and whatever comes with that. Not running Linux as an application on another desktop. Which is almost the opposite, since it leverages capabilities outside of Linux.
Maybe more importantly redefining the goal isn't helping Linux much. I remember everyone raving about Android, but now look at the lack of vanilla graphics drivers for embedded platforms.
I was on the Linux desktop 99% of the time between 2006 and 2014 (I now use a Windows workspace with GPU passthrough on my main system), and just today my frustration with Windows was peaking to the extent that I'm ready to go back to a nix desktop.
Maybe people just get fed up with the same grating pain points and need to spread the frustration around. Periodic changes in scenery are healthy. :)
Windows and MacOS have their frustrating points as well, I don't think any of these systems are perfect. I suspect the vast majority of computer using people could get by with a Linux desktop with about the same amount of frustration they suffer already.
At least on Linux I can fix the things that are really making me nuts.
When people on hacker news say "no problems" what I have found is they generally mean "no problems I don't consider trivial or easy to handle."
We are far, far past the point where the general population is going to spend effort to learn about something as simple as using apt-update or even a GUI application update manager, ever. Phone operating systems give the user everything they care about (mostly content consumption, a little creation) without having to read or learn anything. If you ask that of people (to read and learn about an operating system) it's a non-starter for >80% of users.
For general, basic use I would recommend a chromebook or ipad. That's what most people are looking for - something that lets them do the handful of things they want to do with as little overhead as possible.
>I keep seeing this angst over whether Linux will ever be ready on the desktop. Well, I've been using it exclusively for something like 15 years.
>For most purposes, it has been ready for years.[...] Most people, however, could switch to one of many Linux distributions and be just as productive if not moreso.
If "most people" includes non-technical users, I doubt they could switch to Linux without difficulties.
E.g. a typical non-geek user might be my friend that runs Windows. Some examples of showstoppers that makes Linux totally a non-option:
- Intuit Quicken which she's been using for 20 years. Yes, Mint.com was a possible alternative but its early releases (inside of your 15-year time period) didn't have reliable online downloads from financial institutions. Mint's later partnership with the Yodlee API for transaction downloads still didn't make it equal to Quicken. Yes, Quicken is terrible and buggy software, but early Mint was even worse for online banking scenarios.
- Netflix streaming was not easy to run on Linux until recently[0]
- AAA games (including recent ones like Fortnite) don't run easily on Linux. Valve Steam Proton is a recent effort.
- iPhone sync with Apple iTunes - running on Linux requires googling for articles of running a Windows vm or Rhythmbox on Ubuntu which may not work with certain iOS updates
- sewing machine embroidery software all runs on Windows and not Linux or even MacOS. The software also requires a dongle for copy-protection and the hardware drivers for the dongles only exist for Windows. Running Windows as a vm inside of a Linux Desktop and exposing the host USB port to the client vm won't fool the dongle software. If the ultimate solution to "Windows in a virtual machine" shortfalls is to dual-boot Windows and Linux, that advanced configuration adds more complexity and it contradicts the ideal of "run Linux desktop exclusively".
For people to run Linux without issue, the person would need to possess technical skills equivalent to you (e.g. a HN poster) -- or the person has a "guardian angel" as on-call tech support (e.g. a son/daughter/friend) to get them over technical issues (like Netflix) with workarounds.
I don't doubt you've been able to run Linux exclusively and there are more examples like you. Nevertheless, it still required a very atypical usage profile to run a Linux desktop exclusively for the last 15 years.
Even today in 2019, I would not recommend the Linux desktop to any of my non-programmer and non-sysadmin type of friends & family unless I was willing to be their on-call tech support to handle their inevitable Windows compatibility issues.
For Linux to work in a mass-consumer-facing situation, it has to be an "appliance" type of installation and "invisible" such that the user doesn't realize they're running Linux. E.g. as the underlying os in Android smartphones, or the os in smart TVs, or the os in Tesla cars.
I agree with this sentiment. I'm not a hacker, gamer or coder. I'm an architect who enjoys tech. I've used Ubuntu and other distros. The one thing that stops Linux from becoming mainstream for desktop use is software. Software for enterprises and software for consumers.
The tech community doesn't realize that there is more than just office applications and browsers that people use. I can not install BIM (Revit) software on Ubuntu for example. I can't install Lightroom on Ubuntu. I know that there are alternatives and work arounds to software, but consumers only understand what they understand and is easy and mainstream.
The tech community can't expect consumers to spend time looking for alternative software. I feel that this is why the Windows Phone failed, because there was a lack of mainstream software (apps).
The day that BIM (Revit) is available to install on Ubuntu is the day I switch.
>I can not install BIM (Revit) software on Ubuntu for example.
Yes, a lot of Linux desktop enthusiasts only include "web browsing and email" scenarios in their mental models. Therefore, they are not aware of how the Windows os is an unavoidable platform dependency in many critical workflows. This perspective is why "Linux desktop exclusively" appears totally realistic to them.
A similar scenario to yours just happened to me last month.
A land surveyor gave me some 3D laser scan point cloud files.
(Trimble RealWorks files which are ".rwcx" files generated by the Trimble SX10.) The software (Trimble Business Center) to import those files only runs on MS Windows. I tried running it on VMware but the Trimble software required DirectX 11 so it crashed with an unrecoverable error[0]. Well, VMware only supports up to DirectX 10[1]. It's another example of "just run Windows in a vm" on Linux Desktop doesn't always solve the problem.
This also highlights another underappreciated and unseen difficulty with Linux desktops: You often don't know you will have a roadblock with Linux until you encounter that roadblock. It's not easy to predict your future incompatibilities!
Few of those fields have professional software available for Linux i.e. what is actually used in the industry. It's the students, engineers and artists that are invested enough to switch platforms. Most people are just going to use the FOSS software on Windows instead.
I have been watching netflix on linux for the past couple of years using Chrome. Install. Just works.
Gaming is still very game dependent. Think of it like a console. Some "exclusives" just wont run.
If someone is using their computer to surf, write emails, watch netflix, alongside lite gaming, I find linux to be more enjoyable. I don't have to do any command line oriented stuff at all for general use. The initial install is also very simple. On a desktop :)
We will have to agree to disagree. I think it is more an issue of framing perspective than anything.
For most purposes, it has been ready for years. There are a handful of proprietary programs that individual people may need for work that aren't available, and you aren't going to be able to play the latest games on Linux. Most people, however, could switch to one of many Linux distributions and be just as productive if not moreso.
As much as I love Linux (I'm typing this from my IBM Thinkpad running Arch with a KDE desktop), there are far too many warts to call it "ready". Every morning, I plug into my widescreen monitor and watch as application windows randomly decide which monitor they'll appear on. Then begins the fight with my bluetooth mouse. I enable bluetooth, then turn on the mouse. It's recognized and "connected", but it does not work. I have to disconnect and reconnect. I can't in good conscience advise my mother or wife that this is a "normal" experience, therefore it's not "ready".
Now, does using Ubuntu cover some of these glaring warts? Perhaps. But that often opens its own set of problems. Each comes with its own set of workarounds. Where macOS and Windows excel, is their sense of "polish". Most happy path things just work, and work 99% of the time.
All that said, I'm die-hard Linux ALL THE WAY! I just couldn't say that it's ready for most purposes. For the things that I'd use a tablet for? Sure.
Then we disagree. Linux works until it doesn't, which is the problem. It isn't a handful of programs so much as a wide range of capabilities. We can all speculate about the tastes of the average user, but do you think companies wouldn't love running desktop Linux instead of paying millions to Microsoft? That is what everyone did when Linux actually became good enough as a server OS.
I guess we do. As I've said elsewhere, it has literally been years since I've encountered anything I couldn't do on Linux just the same as on Windows. I firmly believe the reason more organisations haven't changed is inertia. To paraphrase a cliché, no one was ever fired for choosing Microsoft.
I don't see how inertia would be the reason if you also believe that it has been good enough for 15 years. I would say desktop Linux simply doesn't offer enough unique value for the effort involved in large scale deployments, unless you are someone like Google. Most things that have improved for users in recent years, like web applications, actually favour Windows, because the primary use case for desktop operating systems is becoming whatever lies beyond web or smartphone applications. That leaves desktop Linux with the lowest common denominator.
I said I've used it for 15 years not that it would've necessarily been a good choice for a non-technical user that long ago. On the other hand, non-technical users usually require significant IT support on Windows as well. That's where inertia comes in. Linux may not offer a significant value proposition over Windows, so Windows stays. That doesn't necessarily mean Windows offers a significant value proposition over Linux either. Inertia.
> That doesn't necessarily mean Windows offers a significant value proposition over Linux either.
But as far as I know that is the case: Microsoft's offering is much stronger when it comes to large scale corporate deployments. Unless you want to make the claim that e.g. RedHat's offering is on par or better, which isn't something I have heard in the wild.
Given that drivers are Linux's biggest problem, isn't there benefit to leveraging Windows drivers and then running Linux on top of that virtualized hardware?
I don't see why you couldn't run x11 in the window manager with the new Linux kernel in Windows.
Windows has less hardware support than Linux. Most single board computers can run mainline Linux, whereas just a handful support Windows IoT.
AMD, ARM (CPUs & Mali GPUs), Intel, VIA, Qualcomm (Adreno) all have support in kernel 5, whereas Microsoft is still stuck subsetting a small group of ARM GPUs and branding Windows 10's OpenGL ES support as DirectX 11. This is a repeat of the troubles Microsoft had with supporting Windows Phone, but now the userbase is significantly smaller despite a wider array of hardware to support.
There are things that don't work well though, classic example are laptops that dynamically switch between discrete and integrated graphics. You'll probably run everything on the dedicated GPU which hurts battery life.
Still, my old desktop scanner that the manufacturer stopped publishing drivers for during the Windows Vista era? Yeah, Linux runs it like a boss. No looking up drivers or config parameters on the internet, it just works.
> classic example are laptops that dynamically switch between discrete and integrated graphics
YMMV, but as far as I know that's more or less a solved problem by default (for X anyway) with DRI3.
This particular complaint echoes folks (I was one) who booted Ubuntu desktop a decade ago and couldn't get wifi to work, and proceeded to complain about shoddy driver support (to present day, clearly), using only that single outdated* example as an argument. Of course this is compounded by a ~months to ~years delay in most desktops getting those improvements thanks to the glacial pace at which the mainstream desktop distros update their repos.
Was there a point when Optimus/Bumblebee/Prime was a shitshow? Yes. Is that still reality? No.
What this ignores is that Linux driver support is generally fantastic, works out of the box in a way that desktop architects at MS dream about and is infinitely more current in practice since you go to one place to update all your software, including driver software, something MS hasn't been able to get right in a decade of trying.
Regardless, mobile battery life's still worse on Linux. And as much as some things are super convenient compared to the Windows/Apple world, the truism a friend told me as I wrestled with Ubuntu ten years ago remains true today: Linux is for folks that enjoy configuring Linux.
* I have to use a combo of DKMS and an AUR package to get WiFi on my one year old IdeaPad, so outdated may be the wrong word there. Better to say that realtek and broadcom chips have gotten hit hard by Intel's move into consumer networking.
Worth pointing out that 'year of the Linux desktop' probably predates that.
> What this ignores is that Linux driver support is generally fantastic, works out of the box in a way that desktop architects at MS dream about and is infinitely more current in practice since you go to one place to update all your software, including driver software, something MS hasn't been able to get right in a decade of trying.
Generally, yes, it's pretty decent on first go and there's a nice default happy path.
Unfortunately in my own experience, that path isn't particularly wide, and there's a huge number of gaps.
Version compatibility is a major issue, imo. One driver or package that works great on one kernel version is completely broken on the next.
A few hours ago I tried to install the official AMD drivers for Ubuntu. It's not until the install script has already gone and screwed up my system that I get told that they don't support 19.04.
I just don't have that issue on Windows as a rule. I'm not claiming Windows is perfect by any means; it's got its own set of issues.
Version incompatibility is expected, because the Linux kernel changes a lot. You need the correct driver for your kernel. The amdgpu driver is part of the Linux distribution; your best option is to install it from the Ubuntu apt repository, not from the manufacturer's website. Which now explicitly and clearly says the driver is for Ubuntu 18.04 only -- did you not see that?
> Version incompatibility is expected, because Linux kernel changes a lot.
Expected by whom, though?
I could understand it if it was major kernel versions or something like that, but it seems that a whole bunch of things are really tightly linked.
> Which now explicitly and clearly says the driver is for Ubuntu 18.04 only - did you not see that?
I honestly didn't. I went back and checked - yep, it does say for 18.04[1].
I have to say that I wouldn't have automatically assumed it was ONLY for 18.04 without it being more explicit about that. If the official drivers are available within the repo from now on, then it'd be great for AMD to actually say that. (I realise this isn't Ubuntu's fault.)
Of course, not expected by a basic user. But if you follow Linux, it is quite well known that the Linux project intentionally does not promise a stable internal API in the kernel; they "run a tight ship" where, if a program needs to use the kernel API, it has to be maintained in the Linux tree. Given this modus operandi of the Linux project, breakage of old drivers with a new kernel version is expected.
Advice to Linux users: one would best get their graphics driver and a matching Linux kernel from the same source, either the Linux project or the OS distribution. Mixing versions downloaded from the AMD website with a random kernel is supported by nobody and is testing your luck. That is the Windows model and it kind of works on Linux only with nvidia drivers for their hardware, although it brings a lot of headaches too.
I agree that the installer should have warned you about the incompatibility at the beginning of the install, not at the end. That sucks.
With graphics, it is usually best to run the newest drivers with Linux, that means the newest kernel possible. Except for the older cards, which are not supported by AMD anymore (which sucks), where one can only use old drivers with appropriate old kernel.
Those are pretty old cards though (I've had amdgpu support for my six year old 7970 since kernel 4.9, and I think they've extended it back to a generation or two older architectures now), and you can use the open source radeon drivers with any kernel.
I think the folks using Catalyst for better 3D acceleration have probably moved on to cards supported by amdgpu by now.
Version compatibility is not an issue for in-kernel drivers, only for the few remaining external ones. On Windows you have this issue much more often if you are trying to use an older device, in particular if it's one that came out before Vista. On Linux, once a driver is in the kernel, it's continuously adjusted to driver API changes and will keep working. You can still run a current kernel on a 386 if you want to.
I'm running a quite current Dell on an essentially unpatched kernel (just includes Gentoo's default patches) with no additional modules involved and everything I tested up until now works, even fancy things like Dell's mini-dock.
> On Windows you have this issue much more often if you are trying to use an older device, in particular if it's one that came out before Vista,
I think that's a bit of a difference though. Vista came out roughly 13 years ago, and we're talking about things breaking a year or less later.
Heck, for more obscure drivers[1], it seems necessary to recompile for every kernel patch.
Perhaps that's the fault of that driver's developer for not following the correct way to build kernel drivers, or there's something unique about this particular device - I don't know.
> > On Windows you have this issue much more often if you are trying to use an older device, in particular if it's one that came out before Vista,
> I think that's a bit of a difference though. Vista came out nearly ~13 years ago, and we're talking things breaking a year or less later.
I'm not. I'm talking about the fact that drivers for devices older than me that have been merged into the kernel keep on working today, while on Windows, for some subsystems (like graphics or sound), you can't expect things to work after "just" 15 years. I admit that this is a long time, but it's still a huge difference.
And yes, on Linux you are expected to recompile drivers for every new kernel version, that's intentional (https://github.com/torvalds/linux/blob/master/Documentation/...). Since the driver API is reasonably stable, the code doesn't need to be adjusted for every version, and if the driver has landed in the kernel, this is done while changing the API.
I do not enjoy configuring Linux, but I hate fixing Windows issues. I just reboot, uninstall, reinstall, and at the end I have not learned anything and I do not even know if the problem will occur again. When I fix a problem in Linux, it is a journey that makes me discover unknown territories. At the end, I have improved my experience and knowledge.
With Linux, I am in control. With Windows, I am a puppet of Microsoft's will.
The last time I was shopping for a laptop (a year ago), Arch's wiki said it was all broken for the models that interested me. Here's an example, if you have better information maybe update the wiki.
I spent a lot of time trying to get dynamic switching working and after countless hours I gave up.
Optimus is definitely not a "solved problem", unless you know some method I didn't find in my dozens of hours of googling how to get it to work on a system76 laptop (Linux preinstalled) and a 2012 MacBook.
Yeah, the "happy path" here is applicable if you don't attempt to install janky/poorly maintained proprietary out of tree drivers. The reason these drivers are out of tree is usually due to either serious hardware flaws, incompetent/inept vendors, or a combination of both. AMD has (in large part) fixed this by reusing the kernel shim from AMDGPU (open source & mainlined in kernel) for their proprietary driver (AMDGPU-Pro).
Nvidia meanwhile has stated they will not support Wayland, and has sandbagged the integration of their Linux Kernel patches for their single board computers (like the ones that are used in Tesla's cars). They don't give a fuck if their clients are stuck on broken, insecure BSPs, and frankly they operate as a malicious vendor: https://www.theregister.co.uk/2018/01/03/nvidia_server_gpus/
Yes, Nvidia provides terrible support for Linux, but it doesn't matter whose fault it is. That is the basis of many, many complaints about drivers for Linux. Nvidia ships in a great many laptops. If Nvidia sucks on linux, linux has a problem with drivers, full stop.
I'm not sure how the bubble you live in came to be but for ordinary desktop/laptop hardware that is as false as can be.
And there is a very simple reason for that. All desktop PCs and laptops are sold with windows, if no support exist the hardware as a whole will not exist.
Meanwhile in linux land I still can't use my three monitors because displayport MST doesn't work with the open source AMD driver. Support has existed for quite a while, but it seems none of the five people on the internet that have actually tried it has gotten it to work. Just one of the many driver issues I currently have on a couple of machines with linux.
> YMMV, but as far as I know that's more or less a solved problem by default (for X anyway) with DRI3.
It is not false. Complaining about an obscure GPU driver feature not working in Linux kind of proves the parent's point. Most usual hardware works out of the box on Linux now, except for nvidia cards, but even those can be made to work with their binary driver. If the AMD driver does not support some obscure feature, that is on AMD. They work on it; complain there, but naturally some things have higher priority than driving multiple monitors from a single port.
I'm not blaming linux for manufacturer not supporting linux well enough. But I'm stating drivers are an immense issue for linux - regardless of who is to blame.
Maybe I shouldn't have started with an "obscure" example. Maybe that the driver crashes if displays are awakened from sleep? Mind you -- only waking the screens from sleep, not the entire system (that doesn't work either, but I don't know which driver is to blame for that yet).
Or that Ubuntu LTS is just incapable of turning off most machines I've installed it on (machines that had already been around for a while when the LTS version came out)?
For what it's worth it is my experience that displayport MST has never worked reliably on any OS or hardware whatsoever. I gave up on it and am thankful I can now just buy a thunderbolt dock with multiple video outputs.
If you're gonna reference the wiki, cite the right article[1].
Under the 'GPU Offloading' heading (the feature most folks want in dual discrete/integrated laptops):
> Note: This setting is no longer necessary when using the default intel/modesetting driver from the official repos, as they have DRI3 enabled by default and will therefore automatically make these assignments. Explicitly setting them again does no harm, though.
I had to do nothing out of the box to have a working PRIME setup on my work laptop over a year ago, and I'd never used a laptop with discrete graphics prior. My only driver issue was with DisplayLink (proprietary), where I had to pin xorg because it wasn't compatible with 1.19.
Most people want to use the proprietary drivers, because nouveau's performance is prohibitively low for gaming. And that only supports switching GPUs with a reboot.
It does work, but it's not very nice, which is what we are upset about.
eGPUs in a laptop format is a bit silly, you sacrifice the mobility of the laptop to get a very cut down discrete mobile GPU. AMD has started to eat this market alive due to the performance per watt advantage: https://redd.it/bc5hkg
Consumers are much more likely to bring a sub-5lb HP x360 everywhere than a bulky 8+lb laptop.
Your comment confuses me. Are you talking about external GPUs not making sense? Or dedicated GPUs in laptops?
Anyways, I'm pretty happy with the weight of my Dell XPS 15 which has an Nvidia card, but I regret buying it because of the lackluster Linux support.
I haven't seen any laptops with dedicated AMD cards at all lately, and until recently I had no idea that AMD's integrated graphics were so competitive, so that's why I didn't go that route. I plan to buy AMD next time I'm in the market.
One more: OEMs like Dell and HP aren't allowed (by Microsoft) to sell PCs that dual-boot between Windows and something else, such as GNU/Linux, so the only people that get to try Linux are those that have it pre-installed as the only OS, or those willing and able to install it themselves. Source: read about it somewhere or other; the main topic was the history of BeOS.
Why? As far as I am concerned these are all legacy problems. Drivers mostly work, people are aware of Linux and it can be e.g. run off an external drive. The problem is desktop Linux just isn't good enough. From a macro perspective it takes significant effort to manage, significant effort to develop for and provides significantly less value to users, developers and organizations alike. And exceptions doesn't make that less true. There is no conspiracy against Linux, or at least not an effective one. If there is anything hurting Linux it is its mainstream proponents like Google, who like to take but not give back agency. If desktop Linux was good enough, or more precisely great at what people need, they would be using it.
Drivers are sometimes a bit slow to come to Linux, but it's been years since I've had a problem with any hardware more than about 1 year old. The community is large and talented and has a diverse range of interests. Some truly old and obscure stuff is supported. Most new hardware is supported with a reasonable time after release.
You pinpointed exactly the rub I was experiencing. Reminds me of "Rules as Written" vs "Rules as Intended" dichotomy I see mentioned on the Dungeons and Dragons stack exchange.
If this really is the year of the linux desktop, it's not what I had imagined.
While people nitpick over what "linux desktop" means, don't lose sight of the fact that general purpose computing is slowly dying and being replaced by iOS and Android. Roughly as many iPhones are sold per year as desktops and laptops combined.
One contributing factor to that statistic is that the phone lifecycle is a lot shorter than the desktop/laptop lifecycle. Also, desktops and laptops are more likely to be shared between family members. The point is people still have access to and use traditional computers.
While this must be true for a sizeable part of the population, more and more people also don’t ever touch a desktop/laptop or only use the one at work.
Unsurprisingly, the latter case gets truer for low computer-literacy adults, for whom investing in a decent smartphone has a better ROI than buying a meh phone with a cheap laptop.
Anecdotally we bought our aging parents each an ipad two years ago.
They were decent but not that proficient at Windows, but got comfortable very fast at viewing/saving photos, printing, email/messaging and browsing on the ipad.
From there they gave up on their budget “my carrier gave it for free” phone to move to the iPhone XR and started doing all of the above on their phones most of the time. The laptop is now a third-rank device they use to check their savings once a month.
In your anecdote, your parents weren't really using their PC before they got the iPad. I think this is true of a lot of people who are smartphone-only today. Android and iOS are expanding to a market that PCs never really got to.
They were using their PC, but begrudgingly, getting annoyed by the updates, virus warnings, us pestering them to switch browsers, etc.
For most of their lives, PCs were the only way to do some tasks, including photo management (we were sharing tons and tons of photos with them, and they also filled SD card after SD card in the summer), and it’s only recently (basically the iPad Pro) that, looking at us, they thought they could give it a try.
> While this must be true for a sizeable part of the population, more and more people also don’t ever touch a desktop/laptop or only use the one at work.
I have a hard time believing this, do you have any studies to back this up?
Might be different in the US given popularity of Apple software, but in the rest of the world, Microsoft Office suite is a fundamental tool in almost every office. I don't think it'll run on a Chromebook.
(And I mean the desktop version. O365 is dumbed down half-way to Google's Office Suite level - which is cool for an occasional document or spreadsheet, but is lacking both features and efficiency for professional use.)
Depends on the use case. I work for a MSP and I do almost all my office work in GSuite on the web. The only thing I fire up LibreOffice for is dealing with CSV files which are a pain in GSuite.
99% of my job requires nothing more than a web browser and a terminal with an SSH client. The remainder is mostly Wireshark.
Indeed it does. At her second-to-last job, my wife could probably do most of the spreadsheet-related stuff in GSuite, except for that one spreadsheet that brought desktop Excel to its knees, and would totally explode the browser if you ever tried it.
My experience from watching other people and occasional summer jobs is mixed. Some of the stuff you technically could manage on GSuite. Others you wouldn't, not because the files were too big/complex, but because the office computers were underpowered.
We need a proper keyboard, and a bigger screen, at a bigger distance. I can't look at my phone too long. My eyes start to tear up and the pain is horrible. A tablet won't be much better if you hold it in your hand - same distance as the phone. I can understand that many people use a tablet for work, but not for 6-8 hours a day.
And just as the Mac Mini is a laptop without a screen and keyboard, the latest MacBooks are more and more iPads with a good keyboard. Well, whether the keyboards are good is questionable these days...
Which isn't fully POSIX, breaks down with every Android release that clamps down security, and most of it is just plain ISO C, ISO C++ and OpenGL, hardly Linux specific.
First, none of them have anything to do with Linux, given that I was replying about the "Linux desktop" here.
Second, iOS being based on UNIX hardly matters, no one is doing shell scripts and writing UNIX daemons on it.
Thirdly, no one was talking about macOS in that comment.
Finally, even if you want to bring macOS into the picture, while macOS is a proper UNIX, what matters for Apple community developers lives in C++, Objective-C and Swift libraries, hardly POSIX related.
I asked whether apple’s user friendly (integrated) experience, iPod halo effect, consumer interest in the new iPhone, and their shift to low cost Mac mini would be a threat. He laughed it off. I asked “what about those who have gotten tired of waiting for the new Windows to come out and bought a Mac?” He asked if that was a personal story & told me to come back; I’d make enough money to buy a real computer. He was right. I bought an iMac as a full time employee and was one of the top bug filers for Win7 compat issues.
It took years for Microsoft to take iPhone seriously. They never predicted the “byo phone” era and figured IT pros would insist on enterprise features. Years after iphones were out the Windows phone OS code names were bears that ate blackberries.
Installing Linux in VirtualBox is not hard. If anything, it's extremely easy. Microsoft does not bring anything substantial; you could use Linux in VirtualBox since forever and they just bundle it in their OS. Windows is just getting worse and worse. I was amazed when they decided to develop a Linux syscall layer, but now it's just an extremely lazy approach. Hey guys, we don't have enough money to build a proper Linux layer, let's just ship VirtualBox with Windows. LoL. I just don't understand all that hype. There's nothing innovative in their approach.
Can you tell me more or provide a link where I can read about it? I know that WSL1 indeed provided that level of integration, where you can launch a Linux binary from a Windows program or vice versa, because they are actually executing on the same Windows kernel, and the files are actually on the same NTFS file system. That is truly amazing integration.
How is their WSL2 integration different from something like shared folders?
From what I've heard the change from the layer approach to the virtual linux was mostly related to file system performance. Does it have to be innovative if it offers better integration than VirtualBox? Or is it reasonable that they use well-known and working tech?
Also, while I think it is useful, I imagine much of the hype here is also from the weirdness of it all. Would you agree that it is noteworthy that Microsoft is publishing a Linux kernel for Windows? I think it is.
If there's not enough file system performance, they should improve their file system layer. It'll benefit all Windows applications as a side-effect. Believe it or not, but people actually use Git with Windows :) If Linux is able to pull much better file system performance, I don't believe that it's just impossible for Windows to achieve similar levels.
I think that Microsoft has sent many patches to the Linux kernel before. It might look weird, but it's nothing really new. Their Azure business uses Linux.
Yep, except in this case it's just one goalpost and everyone wins. Now we can enjoy using software such as MS Office and the userland from Linux distros on the same machine seamlessly.
This is the "embrace" phase. It also kind of encompasses "extend" by adding hardware support via Windows drivers. This alone is not enough to get to "extinguish" though, so allow me to make a prediction.
The next step is fully locked down hardware that requires a signed OS. Linux will not run on it, but MS will pretend to support you by allowing Linux apps to run on their kernel. Then, once no hardware will allow you to run Linux natively, they will deprecate it and get people to migrate to better supported APIs of their own.
People are just walking right into this as if none of Microsoft's history ever happened.
The only OK scenario is that MS becomes a larger version of Redhat and migrates users the other direction. Is that happening? No. It's all about bringing your toys to their house.
The problem with this line of thinking is the assumption that the operating system kernel matters. It does not anymore. What matters now is whether people are buying their cloud solution. No one cares about desktop users, no one cares about selling a server OS. The real money is in big/medium companies buying their cloud stuff with a nice monthly subscription. Good luck making people pay you every month for an OS; they don't want to upgrade and pay for a new one even after 10 years, which is why so many are still running Win XP.
In that line of thinking you also want the costs of running your own cloud solution to be as low as possible, so it even kind of makes sense to phase out Windows, because all competitors are running their clouds on Linux-based systems, where they get all the upsides of collaborating on Linux. Whereas MS, by trying to run all their stuff on Windows, pours money into something they could have for free -- or could at least invest a lot less in Linux than what they spend maintaining Windows.
> No one cares about desktop users, no one cares about selling server OS.
I don't see this in reality though. Any kind of corporate office IT is still firmly in the hands of Microsoft and Windows and I don't see any change for that on the horizon. Quite the contrary as IT departments have to double down on applying the Microsoft way with Windows 10 Enterprise and its changed licensing scheme. Where you've been able to dodge having a machine dedicated to license management (Windows Server, of course), you now are forced into a different licensing setup with Windows 10 Enterprise. Why Windows 10? Because Microsoft forbids you to use anything older on new hardware. Why Enterprise? Because you can't reliably make Windows 10 Pro stop phoning home about the documents you are opening. The only offer here is Windows 10 Enterprise.
All this futzing around with open source and Linux subsystems is just for the developer facing part of Windows. All the other areas, those that developers rarely encounter unless they talk to IT, are still shaped by the same Microsoft of 10 or 15 years ago.
Microsoft has seen that there was a huge brain drain towards apps running on Linux servers, developed on Macs (webdev, containers, deep learning, ...).
The new strategy is to get developers back on Windows via Linux. Once it's the new norm to run your apps in Linux containers that run inside Windows Server 2022, developed on Windows 10 machines, they are going to edge Linux out of the equation again. You won't be able to switch your containers away from Windows after that.
Believing that it doesn't matter how your end user devices fit into corporate IT or ignoring what steps Microsoft has taken to make Windows Server the default in the cloud is really dangerous.
That kind of locked down hardware already exists, and microsoft never needed to make any kind of argument about how you can technically run linux in a VM.
In other words, I'm not worried about WSL as a vector.
Exactly this. I still remember M$ playing rough in the browser game. Bad actors never change. You can purge "bad apples" again and again, change masks, pretend to be fluffy and lovable, but you're still the same old evil co. We remember. We see you snooping. We see you seething with hate for FOSS underneath your "let's open-source .NET Core" mask.
And I, personally, will never forgive M$ for the Elopcalypse, the burning platform memo and the death of Nokia.
MS is actually walking back from lockdown. New Qualcomm ARM laptops allow you to disable secure boot exactly like on old x86 ones.
They don't need to do this anymore, pushing Windows is not their priority, they want to get people on Azure, Office 365 and Xbox live. Everything is about services these days.
The problem is that the kernel is sitting on top of a binary blob, and that blob controls access to the real machine.
We shouldn't be so worried about the kernel; we should be concerned about what the binary between the kernel and the hardware is doing or not doing, and what degree of control we have over that layer.
A cloud provider and your personal machine are different things with different considerations. The advantage of using Linux on a cloud provider is primarily to prevent vendor lock-in and be able to move to a different provider or a physical server under your control as needed.
Using Linux on a personal machine is primarily so you can trust your machine to serve you and no one else. This is defeated if Windows or backdoored firmware is running below Linux.
Note that the no lock-in benefit also exists on the desktop and because of that it makes sense to switch to Linux on a machine that requires proprietary firmware as an intermediate step to moving to a better machine like those from https://puri.sm/
Cloud providers have no profit motive in breaking your trust. I fully believe an OS-vendor for personal computers would, especially if there's almost no alternatives.
Running Linux in a VM under Windows is not a new capability, nor is this convenience particularly helpful to Linux's long-term support of modern desktop hardware if it results in would-be new Linux-on-metal users never bothering and instead using Windows as a driver layer for Linux VMs.
Chrome and Windows using the Linux kernel as a POSIX support layer is only going to diminish the Linux desktop community and the quality of its modern hardware support.
Yeah, you need to google for a tutorial and install something like Rufus or unetbootin, and you need a whole mental model of what ISO files are, what a boot disk is, how to fix the boot order in the BIOS if needed, etc. Very far from trivial for most people.
Microsoft: put a good POSIX layer into Windows. The times are mature now, and this allows your platform to benefit from a lot of open source system software easily, without ports and compatibility layers. At the same time, now that developers are starting to be annoyed big time by Apple's nonsensical Mac handling, this gives a lot of devs not developing primarily for the Windows architecture the ability to evaluate Windows as a development machine.
Spot on on quite a few counts. It is a way to regain developer mindshare, drive migration to Windows 10 (a major business target since many companies are still running Windows 7, which prevents adoption of modern enterprise features — even bits of Office services) and onboard more developers to Azure, since a substantial portion of online companies run Linux.
As a long-time Mac user working at Microsoft (with Azure), this is the perfect storm. I can do everything I need on WSL 1 (except some networking stuff, obviously) and WSL 2 is going to let me get rid of Docker Desktop’s virtualization for Linux containers.
The pure geek in me really wishes they'd bring back SFU. Windows has a really impressive mechanism for implementing subsystems (I can't remember for sure, but I believe win32 is actually implemented on top of this layer).
I'm sure there's far more complexity involved than I could imagine, but I've always loved the idea of having a bunch of different subsystems running without the need for virtualization.
I'm sure that I'll have a nightmare tonight where I'm stuck in a Scooby Doo episode, and they unmask the ntoskrnl.exe binary to reveal this Linux kernel underneath....
I've only imagined a Microsoft version of Linux (or BSD or ...). I figured it was the only way to get a polished, consistent experience much like I have gotten (mostly, but not perfectly) from Apple and macOS. That's the reason I left Linux on the desktop in 2006 for macOS: I was tired of the inconsistent experience, constantly tweaking, and sleep/suspend/hibernate haphazardly working (or not).
However, the one problem I have with switching to Windows, even with a flawless WSL (2) or a full Linux/BSD/whatever OS under the hood, is the whole Apple ecosystem integration: seamless syncing between all my devices and accounts will be hard to let go of unless something from MSFT / Windows and company can provide that same convenience. From what I have researched, it's close, but still not there.
It's too convenient. I know people will say they have problems with it (iCloud/macOS/iOS syncing/integration), but my experience, and that of my family and everyone around me, hasn't been like that. So I have not witnessed nor experienced these issues people bring up in threads on HackerNews, Reddit, etc. (and I can't seem to find exact examples with links at the moment).
I really admire this work. I hope MSFT continues down the path they have been. I've always had a soft spot for them. I'm rooting for them.
I admire what Microsoft have done with WSL and now WSL2, it's really pretty cool. Satya has done an excellent job revitalizing Microsoft from the Ballmer days.
However, WTF. Why has the "Microsoft" directory got a leading capital? I mean, every.other.directory.in.linux.is.lower.case!!!!!! Look around, folks! Every other vendor in that tree has their name lowercase, please respect the local customs!
Sure, but if I worked at Microsoft there might be guidelines on the use of the company name. I doubt it’s worth reading into as some nefarious situation.
Question for those with more technological knowledge than I have: will this allow AF_PACKET support on windows so I can run monitor mode stuff? CUDA acceleration?
If I could avoid spending time getting nvidia drivers working on my laptop, that would be a massive help. And yeah, I know, I'm supposed to buy AMD stuff so I can use open source etc. But all the best hardware is nvidia, and so top hardware ships with nvidia. I've got to use this thing for my work as well, so it's not really an option, and I don't have the technical chops to quickly debug my drivers or wm or whatever.
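(For context on the AF_PACKET part of the question above: AF_PACKET is the Linux-only raw socket family that packet capture and monitor-mode tools sit on top of. Below is a minimal Python sketch of the kind of call that would have to work inside WSL2 -- assuming root privileges, a kernel built with raw packet support, and a virtual interface named eth0, none of which is guaranteed under WSL2.)

    # Minimal sketch: open an AF_PACKET raw socket and read one frame.
    # Works on real Linux as root; whether WSL2's virtual NIC ever exposes
    # this is exactly the open question above.
    import socket

    ETH_P_ALL = 0x0003  # capture frames of every protocol

    with socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.ntohs(ETH_P_ALL)) as s:
        s.bind(("eth0", 0))  # "eth0" is an assumed interface name
        frame, addr = s.recvfrom(65535)
        print(f"captured {len(frame)} bytes on {addr[0]}")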
These are two of their highest requested features at the moment [1] [2], and if I recall correctly they specifically remarked on passing through hardware as one of their main priorities going forward with WSL2. Unfortunately it requires some vendor support so it's not an easy task to solve right away, but I would stay optimistic.
I am indeed very optimistic. As much as I love linux, not having to get optimus GPUs set up on it would be a massive relief. I've voted on both before; here's to hoping!
Honestly, I would kill to see real shifts and development in the operating systems space and computing space in general. I would love to be in a world where the industry incrementally adopts newer kinds of operating systems into its workflow, a world where different kinds of ideas (for example Plan 9, Inferno, Lisp machines, Oberon) are regularly experimented with in the marketplace. Instead, we are stuck with the same kinds of systems that work in only one way, and we keep working on standardizing them; CLI programs are still considered professional.
This is effort spent in order to stay lazy.
Having said that, it is technologically worth studying.
I'm still as sure as ever that Microsoft is no friend of free (as in freedom, not price) software and can't help but shake my head at this latest publicity stunt. I wouldn't trust their software any further than I could throw it.
That open letter had nothing to do with open source (i.e. libre) software as we understand the term today. It was simply asking hobbyists to pay for Altair BASIC, which was not distributed under an open source license.
I don't see how that's anything but reasonable, unless you take the position that all software ought to be open source regardless of the creator's wishes.
Bill wrote his letter 21 years before this definition materialized and it actually served to inspire the free software/open source concept altogether.
> There is a viable alternative to the problems raised by Bill Gates in his irate letter to computer hobbyists concerning "ripping off" software. When software is free, or so inexpensive that it's easier to pay for it than to duplicate it, then it won't be "stolen".
The Microsoft of today is only interested in one thing: earning money. Like any other corporation. If contributing to free software earns money (Azure, developer mindshare, Store), they will do it. And the Linux Foundation knows how to defend itself IP-wise.
It is about earning money. RedHat does it. Microsoft does it.
They still don't have a terminal application that deserves the name. The crap they built into the Ubuntu app is still the same torturous parody that's used in the DOS cmd interpreter. Even Cygwin has a much better terminal than that.
Too many free / non-free / Linux / OSS ideologues.
Does this let users do more? Does this give users more options? This is a good, practical feature. I'm excited to, hopefully soon, fully ditch Linux on desktop for something that actually works.
This system even installs new Microsoft software and tracks all your actions without any respect for you turning it off. Finally you can forget about the pain of deciding what is going on in your computer. Microsoft just knows the best so it indeed "just works(TM)". /s
I already use WSL1 as my main setup for web dev at work. If Windows comes to a point where doing anything doesn't immediately make the fan spin to maximum RPM, I'll be very happy to call Windows my digital home.
These words are not made up by journalists, these are the exact words Microsoft used internally.
They do this, every. Single. Time.
Everyone keeps falling for it, talking proudly about how MS changed so much and how they are now somehow totally turned around. Yes.. ok, whatever you need to believe. Just be warned, they will utterly destroy everything they come into contact with.
Or should I say: denial is the most predictable of all human responses. But, rest assured, this will be the umpteenth time we have destroyed it, and we have become exceedingly efficient at it.
Please could someone point me to a practical guide for a layman on how to run Linux apps on Windows 10? I want to give users compact installation instructions for shell and Node.js apps (1) mostly written for Linux/Mac OS, but I couldn't figure out what Windows version (Professional?) is required for WSL/WSL2, and how to install it.
(1) Node.js of course runs natively on Windows; however, my apps make particular assumptions wrt file paths, shared-reading/locking semantics, and spawning external shell scripts.
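(To make footnote (1) concrete: the apps in question are Node.js, but the same class of POSIX assumptions can be sketched in a few lines of Python. The file name and script below are made up for illustration; the point is that hard-coded Unix paths, fcntl-style advisory locks, and spawning bash all work under WSL but break on native Windows.)

    # Hypothetical illustration of POSIX-only assumptions (not the actual app):
    import fcntl          # Unix-only module; the import alone fails on native Windows
    import subprocess
    from pathlib import Path

    LOCK_FILE = Path("/tmp/myapp.lock")   # hard-coded Unix path

    with open(LOCK_FILE, "w") as fh:
        fcntl.flock(fh, fcntl.LOCK_EX)    # POSIX advisory locking semantics
        # spawn an external shell script, assuming bash is on PATH
        subprocess.run(["bash", "./migrate.sh"], check=True)
        fcntl.flock(fh, fcntl.LOCK_UN)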
WSL can be installed on any edition of Windows 10, from Home up.
Users first need to enable the WSL optional feature in their Windows install, then they can install their favorite distro's userland from the Windows Store.
Additionally, if you don't wish to use the Microsoft Store you can download the distro packages individually. [0]
You then open/unzip the '.appx' package with any archiver such as 7Zip and place the files in a folder of your choosing; simply launching the distro executable ('ubuntu1804.exe' for example) to install and register the distro.
You can then use that same executable to launch WSL for that distro in its own window, which makes it easy to add the icon as a shortcut to your taskbar for quick access like you would any Windows program, without having to open command line first.
Added benefit is that you can store all your distros as folders (with individual rootfs and config files) in a parent location of your choosing such as a fast external array, without having to use the MS Store or an MS Account.
2. Stop racketeering Android vendors for their patented "inventions"
Also, what happened with NTFS-3G[1] project? Seems dead for many years, while still not supporting some of the NTFS features. They should at least upload it to GitHub.
As a dev and Linux/OSX user I'm still struggling to see who exactly this is for. All of the tools I want to use are very well supported on OSX and I don't have much use for Microsoft software like Office these days with Google Sheets et cetera. Is this being geared more towards corporate customers who have chosen to standardise all of their IT on Microsoft products, but who need easier access to dev tools/environments?
This is for me. I prefer Windows on the desktop, have a bunch of software that I use for my day to day that only exists on Windows, use a Linux VM for a portion of my dev work, employer is locked in on Windows, can't dual boot because of IT, etc...
The current WSL is pretty good, but version 2 is going to be a massive upgrade.
I also refuse to give Apple money for their overpriced garbage products.
I recently switched to Windows after 10 years of Macs, because I didn't feel like forking out €3k for a decent 15" laptop. I switched to a Razer (€1700) which can actually game, and with WSL I can actually do Rails development. After a month of using this machine, my carpal tunnel pains dropped off significantly (in part because I ditched the Magic Mouse for a Logitech MX) and my fingers don't swell or hurt from bashing a non-existent keyboard on the MBP. So to me, when it comes to computers, Apple is dead.
FWIW, when I switched from Mac to Windows, I was able to still do Rails development in Windows itself (I guess it ultimately depends on your Rails app complexity and the gems you rely on) and deploy my solutions to Linux servers. This was before WSL came out.
> As a dev and Linux/OSX user I'm still struggling to see who exactly this is for
It is for Windows users.
> All of the tools I want to use are very well supported on OSX and I don't have much use for Microsoft software like Office these days with Google Sheets et cetera.
Sure, OS X is a good solution for people or institutions who can afford those computers, who don't want to or can't use Linux, and don't mind paying that much.
> Is this being geared more towards corporate customers who have chosen to standardise all of their IT on Microsoft products, but who need easier access to dev tools/environments?
Yes, definitely, or just individual developers who want to use Windows. I don't know any devs who do this, but Kenneth Reitz was tweeting a lot about how much he was enjoying working on Windows and thought people's complaints about it were overblown.
That Kenneth Reitz comment reminded me of a drama in which he was accused of siphoning off money and buying a new Mac with funds meant for some OSS development.
This is great for any developer who prefers the Windows desktop experience to linux/macOS. I'm primarily on a mac myself, but I've used the original WSL and it works quite well.
I'm not sure what Windows offers that's better in terms of third-party tooling, which is what matters more than the OS itself.
I understand some people need a Windows-only app for work, but otherwise what does Windows offer in third-party apps that the Mac doesn't, for the developers WSL targets? I just can't think of one, but I can think of quite a few the other way around.
Certainly aimed at me (non-corp), as I won't go near anything from Google. And while I've never had problems with them (worked there once), I'm amazingly pro-Microsoft lately.
Now we only need a Windows-native Wayland server, which integrates seamlessly into the Windows desktop and of course just uses the Windows graphics stack.
This is truly amazing. I have always developed on Windows and recently got a new contract where whole development environment is based on OSX/Linux. It has saved me hours and hours of work (and $$$) since I can run their environment in subsystem, while keeping Windows interface. Thank you, Nadella.
What would be the motivation for Apple to do such a thing? MacOS is already a Unix, with a BSD-based userland. Also, they are doing active kernel development - as in architecture development - to enhance the system security, so that even root cannot compromise the system. One part of this is moving kernel extensions into user processes thanks to the fact that Mach is a microkernel. They couldn't do that with the current Linux architecture.
I see it as another publicity stunt by the Big 5 tech companies to save their reputation amidst years of privacy violations. From buying GitHub, to "supporting" open-source software. Yeah right, more like marketing to save your reputation.
I'll keep moving further and further away from Big Tech.
Will Microsoft send themselves more "aggregated" user data on how their new Linux kernel is being used? Will this have to be another rule for my firewall to block?
It is possible. First, sometimes it is more beneficial to just pass on the shiny corp gadget. They are new, they look cool, everybody talks about them, but you often find that you can save attention span and protect privacy by ignoring them (e.g. facebook). Second, this stance makes you look for alternatives. At first you give up a lot and have to work harder to find workarounds. But if you do, you have to go deeper into the technology and think about how those things work. So it gives back so much in terms of brain development.
I think this will benefit windows users because now they have easy access to Linux but also probably to Linux users who now can write Linux programs that can access resources on a Windows machine.
Conceptually it amounts to having England and France, whose people speak their own languages, and now there is a European Union language translator program that allows them to talk to each other in their own language. No?
If I remember right, Microsoft is already a big contributor to the Linux kernel. Do not forget, Azure has tons of Linux machines hosted. I think they know exactly how to handle this.
And I think currently they are just patching together their own Linux like any distro does.
For starters, it's not your typical VirtualBox VM running off your host OS. The Linux kernel in WSL2 runs on top of Hyper-V alongside Windows, which is itself also in a VM.
Microsoft is leveraging the same technology they use to run Windows on Xbox alongside the game OS, so it's really like an OS with 2 kernels
I get that it's on a hypervisor but from a user standpoint how is that beneficial or any different from just running a normal VM in a hypervisor, is my question? Like I could've already run Hyper-V and installed Linux, so (how) is WSL2 different/better?
I cannot answer your question. I can just tell you that they brought back Dave Cutler from retirement for this concept, found in Xbox and now Windows 10. This is definitely not your everyday VM.
And if you do not know Dave Cutler... Well, he is a VMS veteran (a system which was also virtualized from the ground up) who designed the Windows NT kernel. He is essentially a legend in the field of operating systems and engineering. There is a nice documentary about him.
I’ve used both, and prefer the Surface Laptop to the X1 because the X1 screen and keyboard were garbage (blurry pixels and squishy keys).
I work at Microsoft and was issued a Lenovo, then got a Surface Pro 4 and finally a Laptop. Everyone in my team who got the Book instead is a little sad because it’s a huge lumbering beast compared to the Laptop and tends to be top-heavy compared to the X1s and Yogas we had. Those of us with Pros (tablets) and Laptops love the portability, even if you only get one USB port.
Also, all the Surface trackpads are MacBook-grade (although the software handling might need some tweaking for you to get exactly the same feeling - but I’m happy with mine now).
Thick ones, able to house active cooling, and therefore fast CPUs.
Where you can upgrade RAM and SSD with a screwdriver, without a heat gun.
A brand doesn’t matter.
This makes me nervous. This is a win-win now, but who can say they won't just do what they do and destroy desktop Linux as we know it when theirs gets the traction necessary? If I were a nefarious assembly of MS execs, my long-term plan would be to use Windows' UI friendliness and compatibility to attract Linux users (embrace) and, when there would be a critical mass of users, extend MS Linux to break compatibility with true Linux (extend and extinguish).
What about the memorable MS exec quote "Linux is cancer"? For MS, Linux is cancer. For the consumer, MS is the cancer. Don't use their products. What's a company got to do to be shunned by the market?
It's a win for the Linux user, because it increases the reach of Linux software. Think the uninitiated co-worker being able to run your scripts or something. It's a win for MS, because it makes their platform more powerful.
Of course, to repeat myself, I don't think Windows is a good thing.
Who cares about the year of the Linux desktop when the world is still turning into 1984? There was a reason we wanted Linux to dominate and it wasn't so it would end like this.
Arguably. It's hard to say Linux won on the technical merits of its networking compared to FreeBSD or OpenBSD, or of its storage compared to FreeBSD+ZFS, or to say it won generally, given how many devs chose Mac OS X.
I suggest Linux won for being: the most popular out of the things which cost $0.
BSD lost for not being popular, commercial Unix, Windows and MacOS lost for costing money.
People used to say that BSD lost because of the ambiguous copyright status of the early 90s, allowing Linux to glide in unopposed as the "free unix-like os for PCs".
It'd also be fair to say *BSD development is more centralized ("cathedral-like", to borrow from ESR's 1990s work) and that may have some repercussions for development speed.
I didn't mean to say Linux is the best OS overall (I'm a *BSD user after all). Just that Linux became big among developers for practical reasons rather than ideological (though ideologically-driven contributors certainly helped).
Also, the first versions of the kernel were released at the right time.
It is decent enough, free, open source and was around at the right time. The internet was just becoming a thing, the PC ecosystem had "won", and it was pretty open in that anyone could manufacture a PC. There was also an AT&T lawsuit happening in the 90s, IIRC, against the BSD code. If Linus had created his kernel a little later or a little earlier, it may never have received the interest it did.
You know, it's funny. The browser world becomes a Chrome mono-culture and all the SV devs throw a fit, rightly, about the lack of competition. Then they turn around and cheer about the idea of Windows becoming just another chromium... er, sorry, Linux distribution.
Those are not comparable things at all. For starters, Linux is under the GPL. It also wouldn't give Microsoft the level of control over the web that the Chrome monoculture gives to Google.
But to extinguish, you need monopoly power, which MS is used to swinging, but isn't quite up to anymore, despite still owning a clear majority of desktops. Apparently owning desktops doesn't give you ownership of the people who use them, anymore.
So, lately, MS is fully occupied extinguishing its own previous products. XP, check. 7? Maybe. 8? Not entirely. By the evidence they might be thought to be trying to extinguish 10.
Edge. A couple of years ago I was really pissed at how they fucked up the merger between Hotmail and Skype; I lost so many "friends" because they killed MSN / Hotmail.
Microsoft first licensed Internet Explorer in 1994. Its most-used engine, Trident, was released in 1997. IE/Edge was discontinued in 2019. That's one hell of an "Embrace, Extend, Extinguish" cycle.
No, not everyone is going to abandon Linux; I'm going to define Linux in this context as the traditional Linux desktop where people run some sort of X11 or Wayland window manager or desktop environment on top of a Linux kernel running on bare hardware. However, it would be interesting to see what impact the popularity of WSL will have on Linux desktop environments such as KDE, GNOME and its derivatives MATE and Cinnamon, and others such as XFCE. For example, macOS has long been popular with those wanting a polished *nix environment with support for commercial software packages and certain types of drivers, to the point that some high-profile developers such as Miguel de Icaza adopted the Mac. Some may argue that the Linux desktop may have evolved differently had macOS not been an option. Even so, the existence of macOS has not led to the extinction of the Linux desktop. Now, what makes WSL different from macOS is every major PC manufacturer ships their PCs with Windows 10 by default, and WSL can be installed easily on any PC running Windows 10. You can run Linux alongside Microsoft Office and Adobe Creative Suite without having to use WINE. The barrier of entry for using WSL is much lower for a lot of people than the barrier of entry for purchasing a Mac.
This leads to some interesting questions. If a significant segment of Linux users abandon FOSS desktops such as KDE and GNOME for the Windows ecosystem, what will this mean for projects such as Firefox and LibreOffice? What will it mean for KDE, GNOME, and other FOSS desktop environments and window managers?
Now, there will be a contingent of Linux users who won't migrate to WSL for a few reasons. There are some Linux users who won't use Windows, whether it's due to certain features such as telemetry and ads or due to some other aspect of Windows they don't like. There are some Linux users who need to run Linux directly on their hardware, so WSL wouldn't fit their needs. There are also Linux users who care about using a stack of software that is fully free and open source and thus wouldn't switch to WSL.
But will there be enough remaining users in order to justify further development of the FOSS Linux desktop in the eyes of major projects like LibreOffice and Firefox? We will see.
What you just said is such a realistic analysis, and terrifying... I will keep on running Linux on bare metal hardware no matter what. I like to know what is ACTUALLY happening inside an OS, not being told by a company what is SUPPOSEDLY happening.
They already did it the moment they started migrating to OS X to develop UNIX based software.
Microsoft just realized that those devs actually don't care about Linux as such, they just want POSIX-like features; otherwise they would be helping the OEMs that still try to sell Linux-based systems.
It can be pretty easily applied to GPLv2 in a commercial agreement: "You get the source under the GPL and all the rights, but if you publish anything, you're dead to us, and you need our product." This is what grsecurity does with the Linux kernel, but the extinguish part is not in their interest.
It was -- and may still be, where it can be made to work -- official corporate policy and strategy, at highest levels. Complaining about it didn't make it go away. Reduction of monopoly power limited opportunities to employ it. They have never renounced it, or admitted that it was wrong to have done it.
This is a topic about Microsoft, not Google, but: Mail, AMP, Android (Linux), Google Talk / Hangouts (XMPP), Chrome (websites by Google and external parties that only work in Chrome), and I am probably missing other cases where standard tech has been extended in ways that only Google can support.
Yes, everyone knows where it came from, but the FOSS communities' continued screeching and pot-banging with that phrase against Microsoft is tired, played out, and nearly pointless considering what Google is actively doing to the software world right now.
Thankfully, at least in this case, the same is not true for companies. When the highest level of leadership changes, it's quite likely that the company itself will change as well.
With competent leadership, Microsoft has found a new business model which is working well for them without trying to control the direction of the entire industry. We've got other big players, now, that are far more concerning to me.
To be fair, plenty of people care more about the "Linux" than they do about the "freedom" part. Case in point: macOS. Basically everything that makes it "macOS" is proprietary, but it gained popularity because it was mostly Linux-compatible.
Unix compatibility is worth quite a bit on its own, Linux wouldn't have gotten half as far without it. And Unix is historically proprietary and closed-source.
> And Unix is historically proprietary and closed-source
"Unix" the brand name, maybe, but "Unix" the source code less so. BSD was based on Unix. The story of why Linux became popular instead of BSD is that BSD was mired in a copyright fight with AT&T over that relationship at the time.
"It would take another two years and the great Internet explosion of 1993–1994 before the true importance of Linux and the open-source BSD distributions became evident to the rest of the Unix world. Unfortunately for the BSDers, an AT&T lawsuit against BSDI (the startup company that had backed the Jolitz port) consumed much of that time and motivated some key Berkeley developers to switch to Linux.
> Code copying and theft of trade secrets was alleged. The actual infringing code was not identified for nearly two years. The lawsuit could have dragged on for much longer but for the fact that Novell bought USL from AT&T and sought a settlement. In the end, three files were removed from the 18,000 that made up the distribution, and a number of minor changes were made to other files. In addition, the University agreed to add USL copyrights to about 70 files, with the stipulation that those files continued to be freely redistributed. -- Marshall Kirk McKusick
The settlement set an important precedent by freeing an entire working Unix from proprietary control, but its effects on BSD itself were dire. Matters were not helped when, in 1992–1994, the Computer Science Research Group at Berkeley shut down; afterwards, factional warfare within the BSD community split it into three competing development efforts. As a result, the BSD lineage lagged behind Linux at a crucial time and lost to it the lead position in the Unix community."
macOS is not "mostly Linux-compatible". It has unixy tools from GNU and other places, but it has little to do with Linux. The ecosystem and native development are different. Some tech is badly emulated, e.g. Docker containers. If anything, the use of macOS shows that people found a way to work with shiny unixy desktop software without Linux and without software freedom.
Why would MSFT throw away decades of development that have gone into the NT kernel? NT is an advanced and, to my eye, very elegant kernel. Win32 is much less so, but NT isn't Win32.
It's a theory and nothing more but I hope very much Windows 11 will be named W-11 and be just a really accessible Linux that looks and acts like Windows but will have the Linux kernel underneath the fluff. Maybe CMD.EXE could even be re-worked into a shell for compatibility with BATs, and Microsoft could work with Wine to make a perfect version that could work with any Windows binary. Far-fetched and barely plausible but a boy can dream.
10 years ago I would have thought it unthinkable a Windows machine would be sending data behind the scenes about me to a remote server.
This privacy policy[1] is insane:
> Together, diagnostics and feedback are how you and your Windows 10 device tell Microsoft what's really going on. As you use Windows, we collect diagnostic information, and to make sure we're listening to you, our customer, we've also built ways for you to send us feedback anytime, and at specific times, like when Windows 10 asks you a question about how something is working for you.
> There are two levels of diagnostic data: Basic and Full. [...] This data is transmitted to Microsoft and stored with one or more unique identifiers that can help us recognize an individual user on an individual device and understand the device's service issues and use patterns.
> Specific data items collected in Windows diagnostics are subject to change to give Microsoft flexibility to collect the data needed for the purposes described. For example, to ensure Microsoft can troubleshoot the latest performance issue impacting users’ computing experience or update a Windows 10 device that is new to the market, Microsoft may need to collect data items that were not collected previously. For detailed info about the data that Microsoft collects at the Basic level, see Basic level Windows diagnostic events and fields. For more info about the data that Microsoft collects at the Full level, see Diagnostic data for full level.
> We use the Basic level of diagnostic data to improve Windows. We use the Full level of diagnostic data to improve Windows and related products and services.
> Some of the data described above may not be collected from your device even if your Diagnostic data setting is set to Full. Microsoft minimizes the volume of data we collect from all devices by collecting some of the data at the Full level from only a small percentage of devices (sample). By running the Diagnostic Data Viewer tool (available on newer versions of Windows), you can see an icon which indicates whether your device is part of a sample and also which specific data is collected from your device. Instructions for how to download the Diagnostic Data Viewer tool can be found at Start > Settings > Privacy > Diagnostics & feedback.
> You can view diagnostic data for your device in real time by using the Diagnostic Data Viewer. Note that you will only be able to view data that is available while the Diagnostic Data Viewer is running. The Diagnostic Data Viewer does not allow you to view your diagnostic data history.
If you click the feedback panel on the right edge, you get sent to another Microsoft privacy policy[2]:
> We also use the data to operate our business, which includes analyzing our performance, meeting our legal obligations, developing our workforce, and doing research. In carrying out these purposes, we combine data we collect from different contexts (for example, from your use of two Microsoft products) or obtain from third parties to give you a more seamless, consistent, and personalized experience, to make informed business decisions, and for other legitimate purposes.
> We share your personal data with your consent or to complete any transaction or provide any product you have requested or authorized. We also share data with Microsoft-controlled affiliates and subsidiaries; with vendors working on our behalf; when required by law or to respond to legal process; to protect our customers; to protect lives; to maintain the security of our products; and to protect the rights and property of Microsoft and its customers.
> You can also make choices about the collection and use of your data by Microsoft. You can control your personal data that Microsoft has obtained, and exercise your data protection rights, by contacting Microsoft or using various tools we provide. In some cases, your ability to access or control your personal data will be limited, as required or permitted by applicable law.
It's not really that insane. Microsoft just provides a very detailed privacy policy that covers every single possible scenario and clearly explains it to the user.
In any generic platform privacy policy, the same lines you quoted from their privacy statement would be summed up as "We collect [insert here] as required for operational and security reasons", "We use your data to perform business functions" and "Your data may be shared with third-party service providers such as payment processors". It's nothing scary and is what's required for Microsoft to operate as an entity.
I'd also be careful how you interpret it. Microsoft's privacy statement applies to literally every single product and service they offer and acts as a catch-all, and from there they have independent statements that cover individual products in more detail.
In this case, the diagnostics and feedback page describes all of the scenarios in which they collect and use the information, and therefore the global statement isn't relevant.
> one or more unique identifiers that can help us recognize an individual user on an individual device and understand the device's service issues and use patterns.
They have a file on each user, it is straight out of 1984.
The only reason I'm sticking to my MacBook is the Linux-based kernel, which makes it easier for me to use the command line, install NodeJS, build an application, and install Redis Server on my local machine.
Though everything can be done in Windows, it's not easy to do so.
I specifically remember trying to install Redis on Windows and having to give up at the end of the day. I ended up creating a server on AWS and installing Redis there.
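For what it's worth, with WSL2 this particular pain point largely goes away: you can install redis-server inside the Linux VM and, as I understand it, WSL2 forwards localhost ports, so a Windows-side process can talk to it directly. A minimal sketch in Python, assuming redis-server is running inside WSL2 on the default port 6379 and the redis package is installed on the Windows side:

    # Hypothetical check from the Windows side: WSL2's localhost forwarding
    # should make a redis-server started inside the Linux VM reachable here.
    # Assumes `pip install redis` and redis-server on the default port.
    import redis

    r = redis.Redis(host="localhost", port=6379, db=0)
    r.set("greeting", "hello from Windows, stored in WSL2")
    print(r.get("greeting"))  # b'hello from Windows, stored in WSL2'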
Also, people are complaining that Microsoft once said "Linux is cancer". A lot has changed since then. Ubuntu was released after that, which IMO helped end users adopt Linux.
MacBook userspace is derived from NetBSD. The kernel is so far removed from BSD UNIX it's not worth even mentioning.
The link you commented on points to the kernel used in WSL2, which gives you a full Linux under Windows. Unlike the Mac, where you'll probably use Homebrew to install Unix packages, this is a full, unadulterated Linux machine. Even WSL1 allowed you to install almost anything from Ubuntu, although some things (very, very few!) didn't run, most importantly (for us) some debuggers.