eBPF on Windows (github.com/microsoft)
294 points by praseodym on May 10, 2021 | hide | past | favorite | 165 comments



A fun detail of this work is that it uses a formal-methods-based verifier (designed outside of Microsoft) that accepts a wider range of programs than does the Linux verifier, which is itself kind of nightmare fuel.

https://vbpf.github.io/assets/prevail-paper.pdf

The verifier in this paper also has some biting limitations; for instance, you can't resize a packet in it, because they don't account for pointer invalidation. I wonder whether they've since implemented these verifier features, since they'd be problematic for compatibility otherwise.

Additionally, the PREVAIL paper explicitly doesn't verify program termination, which is kind of a dealbreaker for kernel BPF.


The ebpf-for-windows maintainers have been working with the PREVAIL verifier maintainers for some time now on improving the implementation for everyone, since we agree that addressing such limitations is critical. For example, the PREVAIL maintainers added program termination verification back in January 2021. Check it out at https://github.com/vbpf/ebpf-verifier


This is all super exciting. A weird aside: one thing that'd be really nice to have is a userland verifier that we could build into our CI system; testing verifiability involves a lot of really contrived steps in our current process.


The PREVAIL implementation seems to be more robust than Linux's verifier; are there any plans to port it to Linux?

From the counter examples directory, this implementation seems very promising.


Re: termination, can't you use Ethereum-style "gas" termination? I always found termination verification strange, because gas is usually a more practical alternative.
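For anyone unfamiliar with the idea: here's a minimal sketch of gas-style termination in a toy interpreter. The instruction set and names are invented for illustration; this is not eBPF or the EVM.

```python
# Toy "gas"-style termination: charge one unit of gas per instruction
# and abort when the budget is exhausted. Even a program that loops
# forever is guaranteed to stop within gas_limit steps.

class OutOfGas(Exception):
    pass

def run(program, gas_limit):
    """Execute a list of (op, arg) pairs under a hard instruction budget."""
    pc, acc, gas = 0, 0, gas_limit
    while pc < len(program):
        if gas == 0:
            raise OutOfGas(f"exceeded {gas_limit} instructions")
        gas -= 1
        op, arg = program[pc]
        if op == "add":
            acc += arg
            pc += 1
        elif op == "jmp":          # unconditional jump: loops are expressible...
            pc = arg
        elif op == "halt":
            break
    return acc

# ...but a looping program is still safely bounded by the gas budget.
looping = [("add", 1), ("jmp", 0)]
try:
    run(looping, gas_limit=100)
except OutOfGas:
    print("terminated by gas exhaustion")
```

No static analysis is needed; the trade-off is the per-instruction counting overhead on every execution, which is the concern raised elsewhere in this thread.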


The ZFS file system supports running Lua programs in Kernel mode. It supports setting a limit on the number of instructions run and the amount of memory used:

https://www.delphix.com/blog/delphix-engineering/zfs-channel...

So it’s been done before. I think traditional BPF programs were really short and ran in performance critical contexts. Disallowing backwards branches and limiting size was preferred to slowing down these code paths with instruction counting.


How would you maintain the gas count? Per instruction or per basic block?


>Additionally, the PREVAIL paper explicitly doesn't verify program termination, which is kind of a dealbreaker for kernel BPF.

I'd be more alarmed if someone had solved the Halting Problem and I hadn't heard about it, to be honest.


Microsoft did termination proofs for Windows drivers in 2006. A termination proof does not require solving the halting problem.

Termination proofs for systems code (PLDI 2006): https://dl.acm.org/doi/10.1145/1133255.1134029


As mentioned in a sibling reply, there goes my week. Thanks for the reference! Time to learn something I apparently did not know!


I think you can reason it like this.

Assuming certain constraints (ones you can define bounded analysis rules for), you can prove that things terminate. These rules are usually far more advanced than we initially suspect, but they're still not fully unbounded. (And we can often come up with more rules to make things practical, even if unwieldy.)

Deciding termination for an arbitrarily large program in a Turing-complete system CAN require nothing less than executing the program itself, so we CANNOT guarantee being able to verify termination for EVERY program. (This is the halting problem.)

I spent a lot of time on my thesis learning about type inference systems, and since many such systems for uncooperative languages (JS, Python, etc.) need to do abstract interpretation, they run into the same kind of issues; this is why practical systems use JIT compilation instead of AOT, even if there are exceptions (Shed Skin for Python is quite impressive, for example).


I thought that for any general system containing arithmetic (per the proof based on reduction), halting cannot be proved. Termination... is that different from halting?


Halting is indeed the same as termination. The halting problem DOES allow you to prove that some programs terminate. The problem is that there will be other programs that do terminate, but you won't be able to prove that they terminate.

So there is no way to know, for EVERY possible program, whether that program will terminate or not. But there are still some programs that you can know for certain that those programs will terminate.


From my understanding, eBPF is sufficiently constrained that the halting problem is solvable for all representable programs (in eBPF).


BPF verifiers don't accept arbitrary programs.


Awww shit.

I knew this day would come. Be back in like a week. Gotta get my neurons warmed up and translating Theoretic CompSci/discrete math notation again.

If I don't come back send a search party. I'll probably be stuck somewhere around pumping lemmas screaming "This is arbitrary bullshit!"


It’s not magic, just constraints. There are some obvious tricks (which have since been relaxed).

No unbounded loops for one. I think no backward jumps was previously a rule? I think the ability to tail call another bpf program was added, so not sure if you can accidentally loop forever between individually terminating programs.
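The "no backward jumps" rule mentioned above is easy to check statically: if every jump target is strictly forward, the program counter increases monotonically and the program must terminate within its own length. Here's a toy sketch of that check (invented instruction names; this is nothing like the real Linux or PREVAIL verifier, which do much finer-grained analysis):

```python
# Toy static check: accept a program only if every jump goes strictly
# forward. Such a program executes each instruction at most once, so it
# terminates in at most len(program) steps -- no runtime counting needed.

def only_forward_jumps(program):
    """program: list of (op, arg); 'jmp'/'jcc' args are absolute targets."""
    for pc, (op, arg) in enumerate(program):
        if op in ("jmp", "jcc") and arg <= pc:
            return False               # backward (or self) jump: reject
    return True

assert only_forward_jumps([("add", 1), ("jmp", 3), ("add", 2), ("halt", 0)])
assert not only_forward_jumps([("add", 1), ("jmp", 0)])   # loop: rejected
```

This is the verify-once-run-fast trade-off: the check is done at load time, so the hot path pays no instrumentation cost, at the price of rejecting some programs that would in fact terminate.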


You can't; each program is verified independently, and there's a small limit to the number of tail calls you can do.

Being able to run an unbounded loop in eBPF would be a big deal (doesn't mean it can't be done; in fact: it almost surely can be) --- a serious vulnerability.


Here is a recent comment from Dropbox engineers about the state of tracing tooling on Linux compared to Windows: https://dropbox.tech/infrastructure/boosting-dropbox-upload-...

Is their assessment correct? If so, how come we got DTrace in 2019 and now eBPF ported to Windows? Are they trying to consolidate all tooling onto one platform?


ETW (which is what netsh trace in the post uses underneath) has been around for 20+ years in Windows and is extremely powerful. However, Microsoft has always neglected the usability side of it. It's a bitch to set up and hard to interpret. Documentation is there, but scattered and hard to find.

Most people here have probably seen Bruce Dawson's performance analysis and debugging posts. WPA/WPR are probably the only user-friendly (ish) ETW applications and even those are not that easy to use.

So yeah, Windows has had powerful tracing and inspection tools for a while, but few people know how to use them well.

As far as why they are doing DTrace and eBPF, I think both of those fill some holes that ETW doesn't. Mainly dynamic tracing and instrumentation, but I am sure there are other advantages I am not thinking of as well.

They could have come up with another hopelessly complicated Microsoftie framework to do the same things, but it's probably a good thing that they didn't.


ETW is the topic of the post titled "The Worst API Ever Made", and the title is well deserved. If ETW tooling is lacking, it's because the API is so terrible nobody wants to write tooling for it.

https://caseymuratori.com/blog_0025


There are some good articles in here (recently added) - https://devblogs.microsoft.com/performance-diagnostics/ - but @randomascii's stuff was priceless (I'm still using UIforETW, as it's the friendliest little UI for setting up xperf). I'm still struggling with WPA though (from loading/waiting to load symbols, to sometimes just getting lost in it - but often it's helpful).

Visual Studio's own Performance Profiler (Alt+F2, Ctrl+Alt+F2) also uses ETW - just more streamlined, better UI - but once it collects a bit too much data, it just can't handle it.


>This was an eye-opening experience for us. We saw how far behind Linux is on the tracing tooling side.

Yet, in a typical Microsoft way, if you want to use that Message Analyzer tool now:

>Microsoft Message Analyzer (MMA) is being retired and its download packages removed from microsoft.com sites on November 25 2019. There is currently no Microsoft replacement for Microsoft Message Analyzer in development at this time.


Wow, I'm really stoked to see this!

This could be a game-changer for the infosec community in particular - now, if you want to get into internals, such as tracing file system and registry calls, you've got to write drivers. And drivers are very tricky to write, and it's very easy to miss corner cases - which can result in the dreaded BSOD. Plus, drivers need to go through a verification and signing process by Microsoft.

Having access to that capability from user-mode, without having to write drivers... that would be amazing.


Everything you described is already available on Windows in userland via filters (FileSystemWatcher/ReadDirectoryChangesW, RegNotifyChangeKeyValue, et al). Obviously you can only monitor whatever lives in your security context.

eBPF looks cool since it is a VM that could have access to more kernel structures (network?), but solving the issues you described is largely a solved problem on the platform.


Nope, with FileSystemWatcher (and the Windows APIs it uses under the hood) you get minimal information - only using a driver can you get contextual information, such as the handle of the thread, process and user that made the call.

And RegNotifyChangeKeyValue is only useful for watching a single, specific value - if you monitor a tree, it doesn't even tell you what changed, only that something matching the filter did. And as with file system changes, if you want to know which thread/process/user made the change, you need to use a driver.


Sysinternal's Process Monitor does that from a process perspective: https://docs.microsoft.com/en-us/sysinternals/downloads/proc...

There is also System Monitor which logs the events to EventLog: https://docs.microsoft.com/en-us/sysinternals/downloads/sysm...

Are those not enough?


Process Monitor is a great utility, but I need programmatic access to this information.

SysMon is a great tool, but the license prevents distribution (such as bundling with an installer) - users must download it from Microsoft.


If all you want is user-mode tracing, there's ETW. Where you run into problems is that it doesn't let you block on calls, etc.


I do need to block on calls. I also need the process and user that made the change, and ETW doesn't provide those (to get them myself would require blocking).


It is really great to see that eBPF is causing an industry wide change instead of just a Linux one!


Great news, I'm looking forward to analyzing performance on Windows with BPF! Given that PerfView and WPA also got flame graphs it'll start to feel like home. :-)


From the architecture diagram, it looks like network eBPF only.

Does anyone know if it can profile file/disk io type activities?


They make it pretty clear from the post that its support is currently limited, and that they plan on adding more hooks in the future.


I wonder how long until I can run cilium on my mixed node kubernetes clusters!


I admittedly have only an extremely cursory knowledge of these sort of technologies, but how does eBPF compare to NDIS filters and WFP filters? Biggest reason I could imagine for this is easier portability of existing eBPF applications.


You can do a lot more in NDIS and probably more in WFP than you can in eBPF, if only because eBPF has strict limits on the kinds of loops you can express (they have to be verifiably bounded). Packet-processing BPF programs are tightly bound to Linux kernel APIs, many of which won't be ported here, so I think write-once-run-anywhere is unlikely to be an attribute of this.

Importantly: eBPF has at this point not much to do with packets; it's a generic kernel and userland instrumentation layer. Most new BPF code is written to monitor local program runtimes, not to look at packets.

If Windows adopts XDP, we might get to a point where


The packet-processing BPF programs are less tightly bound to Linux kernel APIs than you might think. Even in Linux, there has been motivation to make the APIs more generic to support different kernel hooks for packets, in particular XDP which doesn't operate on the standard internal packet buffer representation (skbuff).


Microsoft doesn't support XDP, do they? (And you can only use XDP on Linux in certain circumstances.) The clsbpf stuff is all pretty heavily tied to skbs.

Also, even in an XDP program, you're still likely to use a bunch of perf stuff, which is again pretty Linux-specific.


…where?


That's exactly what I thought. WFP is powerful, and NDIS is bonkers (it's what's used by Internet Download Manager and Kaspersky for their features), so the only use case is maybe YARA-style eBPF rulesets.


I wonder what kind of performance this gets compared with running eBPF on Linux, since eBPF was written with performance in mind from the start.


Hence eBPF has been referred to in the past as a Spectre accelerator.


They're doing native code generation, like Linux.


What?


BPF, or Berkeley Packet Filter, was written to be a faster packet filter for tcpdump. People saw that it was pretty neat and started using it for non-tcpdump-like stuff, and it became extended BPF (eBPF). I would guess that running eBPF on Windows would be a lot slower, but it would be interesting to see a performance comparison.


Point of order: BPF wasn't written simply for tcpdump; it's part of a line of research on using PL runtimes to configure and operate networking stacks; so, right after McCanne's BPF paper, you get MPF, which is proposed to do all of demux for Mach.


Neat, did not know that.


A long post I wrote about this stuff, taking the history back to the Xerox Alto:

https://fly.io/blog/bpf-xdp-packet-filters-and-udp/


Is there any analogue of seccomp in windows that can be used with BPF?


Windows already has a native system call filter, doesn't it?


I don't think so? I actually looked for such functionality recently and couldn't find anything. Kaspersky uses a hypervisor to hook syscalls[0] in order to provide such functionality. There's also DTrace for Windows[1], but that requires being enabled through bcdedit which is a bit... meh.

[0]: https://github.com/iPower/KasperskyHook

[1]: https://docs.microsoft.com/en-us/windows-hardware/drivers/de...


I'm thinking of the win32k.sys filter, which, my Windows-literate friends inform me, only blocks a subset (a gnarly subset, but a subset still) of the total kernel attack surface; it's not a general-purpose filter.


And that is, kids, how MS Windows piece-by-piece was slowly transformed into a Linux distro.


"Eschew flamebait. Avoid unrelated controversies and generic tangents." https://news.ycombinator.com/newsguidelines.html

Comments like this one take threads in predictable, uninteresting directions, and—what's worse—they often get upvoted, accumulating mass at the top of the thread and choking out the specific interesting stuff.

"Windows v Linux" is a classic example of a generic black hole, sucking passing spaceships into a state from which no light can emerge: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que....

To be fair, this is a co-creation between the comment and the upvotes, and the upvotes are more to blame. But the idea of that guideline is to refrain from introducing black holes in the first place, so the spaceship can poke around something interesting in the vicinity without collapsing, screaming, into an irreversible fate.

Related explanation here if anyone wants more: https://news.ycombinator.com/item?id=26894739. Note the point about diffs—that's key. Diffs are what's interesting!


Oops, sorry about that!

My intention was to highlight how MS, which fought *nix tooth and nail, is now adopting more and more ideas from *nix, basically moving toward being the best enterprise Linux.


You mean the company that used to have the best UNIX for PCs during the early 80's (Xenix), and a POSIX subsystem for NT/2000 for getting government contracts?


There's a lot of interesting stuff in this post, including a different architecture for verifying eBPF programs, so it's frustrating to see a bullshitting comment at the top of the thread. I don't like Windows either, but the virality of the eBPF (and dtrace) model is an interesting story that tells us stuff about what operating systems are going to look like 10 years from now, and we need to do better than comments like this.


I recently spun up a Windows EC2 server instance in AWS, running the latest Windows, and was not impressed. The entire system is not well set up for operation via SSH.

SSH is not even enabled by default on EC2. You have to Remote Desktop in and run a bunch of commands to enable it, and even then I could never figure out how to get authorized_keys to work.

Once you have SSH running, you can run commands via command prompt or powershell. Powershell is apparently pretty modern and well designed, but it's quite confusing if you're used to Unix.

A few applications I tried to install came as MSI's and they could not be installed via the command line, requiring the graphical interface.

It felt like a bad way to run a server. You want fast and lightweight remote access on a server, and you want scriptability. Windows was simply worse in this area.

Even if Windows was 100% compatible with Linux apps (including the modern CUDA deep learning stack...), I don't know why one would ever want to run Windows on a server. It lost the mindshare war, and I really see why. It really is that bad of an experience.

The main reason I've heard people use Windows server is because they are forced to because of app compatibility (no tech startup would find themselves in this position), or the preference of maintaining a server with clicking in the GUI (again, not a place most of us here find ourselves).

Anyways, I'd love to hear from someone, anyone, who likes Windows server more than Linux.

The point of this comment is also to state the opinion that Windows is currently a very bad Linux distro.


I won't comment on the eBPF or WSL or even the Windows as linux debate, but I used to do some Windows admin.

Some clarifications are in order.

MSIs CAN be installed on servers with no GUI required. Or, much more commonly, via PowerShell automation - completely remotely, and I'm not referring to Remote Desktop either.

I know it's fun to complain about Windows, but really, if you're not using PowerShell and the right tools, such as a Windows domain, you're like a Debian admin who refuses to use bash scripts / apt / SSH plus Chef or similar management. That's the kind of comparison you're making. What would you say about a Linux admin who refused to use SSH or any scripting? You'd wonder why they were complaining.

Most of Windows Server comes alive when you join the machine to a domain and it gets access to actual infrastructure rather than some lone isolated server. This is to the point where you don't even need to log into the machine via remote desktop or ssh at all. In fact you manage the server from a completely different Windows machine with an admin account within the domain. You can then run the powershell script on your management computer and things happen on the target computer(s). I haven't even mentioned the multiple other tools you get as part of the machines being joined to a domain. Or you can deploy the script to the machine and execute it there. Still no remote desktop required.


I can't imagine how anybody manages any number of Windows machines w/o a domain. Even just an NT 4.0-style "downlevel" domain (hosted with Samba, if need be) made life so much easier "back in the day". Once Active Directory came out I never looked back. Group Policy makes so many Windows administration tasks reproducible and automated.

I keep meaning, year after year, to take a look at Samba and see how its Active Directory emulation functionality has progressed. I never end up getting around to it. Presumably, since so much of Active Directory is really just leveraging LDAP and a funky proprietary schema, it has progressed significantly.

I've had a hard time figuring out how to position Desired State Configuration. It feels like an appeal to Linux sysadmins used to Puppet, Chef, et al. I can already do everything it can do with Group Policy, and I'd want an Active Directory domain for single-sign-on anyway. I have a hard time coming up with a use case for learning all new tooling when I've already got AD and Group Policy.


Group policy "automation" is painful and inconsistent. Parts are only applied on user login. Parts only on reboot. Parts instantly. Hard to impossible to know which is which. And no way to log off users or reboot machines via group policy, so you need remote logins anyways.


I'd argue it's just being familiar with the tools. I'm biased, no doubt. Windows sysadmin work has paid the lion's share of my bills since the late 90s.

I've been using Group Policy heavily since 1999 (during the Windows 2000 beta), so I'm used to knowing what applies when. Being familiar with which client-side extensions (CSEs) do which processing helps. The documentation for Microsoft's own functionality is reasonably good. Third-party software is a mixed bag.

Remotely rebooting machines is easy. There has been a remote shutdown API in Windows since at least NT 4.0 (and probably going all the way back to the beginning). Since Windows 2003 there has been a command line tool included to actually use the API. Likewise, there are command line methods to remotely log off users, refresh Group Policy, etc.


Re: rebooting is easy

Full agreement. We were remotely powering on/off hundreds of machines with a single command line. Then just a web page because we needed non-IT people doing it on regular basis.

We used to change wallpapers of certain groups of machines on a daily or even hourly basis due to special events. Do people really think we remotely logged into each one? Or manually remoted each one? Hmmm.

Next someone will be saying it's impossible to get the event logs from each machine... truly bizarre. (Yes you can, and we did, even from client machines if a machine had developed "quirks".)

Again, these complaints are just strange. It's as if they weren't using their infrastructure.

None of this is any kind of wizardry. I'd expect any linux or windows admin to know the tools required.


Depends, event forwarding is quite weird and broken. Default settings delay and buffer events for quite some time so that an attacker can easily suppress unsent event buffers. Crashes are also usually missed because buffered events have not been sent in time. That can be configured down, but all the provided defaults are useless.

Also, encryption and authentication for log forwarding is bound to AD Kerberos credentials that frequently expire, leaving you without logs again.

Any old syslog client and server is far superior in all the aspects mentioned.


Reading your comments it seems like you have had a bad time. I hope you moved over to linux or similar instead. Otherwise there's a lot of optimisation required in your AD with clean up of multiple areas required. Likely issues with network infrastructure as well.

Also, event forwarding isn’t the only way.


I didn't claim that rebooting a Windows box is impossible or hard. It is just that group policies are no complete solution for anything, they are just a small and imho weird part.

Also, for a lot of things there is just no group policy you can set somewhere, so you are still in need of some "execute me that script that does the needful" in a lot of cases.


We have different experiences. Group Policy is a tool that I'm confident, in virtually any engagement, I can use to automate nearly all the goings-on of a Windows Server environment. Third-party software may not play well, but between the GPP CSEs and scripting, as necessary, I can get the job done. That's been my experience.

I'm not sure what to say re: "execute me that script that does the needful". That's going to hold true in any environment, Windows or Linux or whatever. If you've got third-party software that uses unique configuration persistence mechanisms, or doesn't work-and-play with the OS-provided management interfaces (Service Control Manager on Windows, systemd on Linux, etc) then you're going to have to script one-offs. Most real sysadmin work, in my opinion, involves leveraging or creating code and infrastructure to handle automating assemblages of software. Scripting is often how that's done. Configuration management tools add some formalism to the process, but it's still "glue and tape"-- albeit fancy.


I think Group Policies are a very incomplete, partial solution. As you said, you usually have to combine it with a few more things to get the job done, and that is how any sysadmin job works. But: You need not only a one-off-script to make things really work.

You need a script that is idempotent, so you can execute it on all your machines and maybe do a retry or two if some are unreachable or an error happens. So the script should not only change the things it is supposed to change (e.g. append a configuration line somewhere) but also check whether that action is necessary or already done (e.g. that line has already been appended). Then you maybe need to restart the respective service to reread its configuration. But of course you wouldn't want to restart it unnecessarily, since that might be disruptive or expensive, so you have to check whether the configuration actually changed before restarting. Then handle automatic application for machines that are currently offline, have just been installed, or need a refresh. Add in some templating and parameters, different groups, logging success/failure, and you are at quite a complex script.
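The core idempotency pattern being described (append-if-missing, restart-only-if-changed) can be sketched in a few lines. The file path, config line, and restart hook below are hypothetical placeholders, not any real product's configuration:

```python
# Sketch of the idempotency pattern: only append the line if it's
# missing, and only restart the service if the file actually changed,
# so repeated runs (retries, scheduled convergence) are safe.
import os
import tempfile
from pathlib import Path

def ensure_line(path, line):
    """Append `line` to `path` if absent. Returns True iff the file changed."""
    p = Path(path)
    existing = p.read_text().splitlines() if p.exists() else []
    if line in existing:
        return False                       # already configured: do nothing
    p.write_text("\n".join(existing + [line]) + "\n")
    return True

def apply_config(path, line, restart):
    # Restart only when the configuration changed: a no-op run must not
    # be disruptive, or retries across a fleet become dangerous.
    if ensure_line(path, line):
        restart()

# Example: running twice appends once and restarts once.
fd, conf = tempfile.mkstemp(suffix=".conf")
os.close(fd)
restarts = []
apply_config(conf, "max_connections = 100", lambda: restarts.append("restarted"))
apply_config(conf, "max_connections = 100", lambda: restarts.append("restarted"))
print(len(restarts))   # prints 1
os.remove(conf)
```

Configuration management tools (Puppet, Chef, DSC, etc.) essentially generalize this check-then-act shape across resources, which is the "don't reinvent the wheel" point being made here.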

Or rather, a script that no-one should write, because it is exactly what configuration management is supposed to do. If you are writing all that by hand for each one-off, you are doing it wrong because you are reinventing the wheel, and usually badly. If you aren't doing all that, you are missing important parts. So imho configuration management like puppet, salt, chef, … is table stakes for any kind of sysadmin work. I'm not familiar with Windows DSC, but I find it strange that you dismiss it as pandering to the Linux crowd.


I like Windows Server more than Linux. I administer both of them, although I am more familiar with Windows Server which definitely plays a big part in my preference. But I think that's going to be the biggest part of anyone's preference.

Everything you complain about Windows Server not supporting is actually supported, but it was your unfamiliarity that was ultimately the issue. "Linux guy who is unfamiliar with Windows Server prefers Linux" isn't particularly shocking news.


"When in Rome, do as the Romans do."

I don't complain that Linux is hard to administer with PowerShell. I learned SSH and Bash.

The reverse does not seem to be true, Linux admins generally expect everything to work exactly the same in Windows as in Linux.

A negative effect of this is that Windows has many recent hires working on some of their teams with mostly Linux experience, and they're copying Linuxisms into Windows bug-for-bug. Even if it makes no sense in Windows.

A recent example that boiled my blood is that the new Windows Terminal emulates the incorrect "Clear-Host" behaviour of ancient Linux terminals. It doesn't clear the virtual terminal any more, just the current viewport. Before, it was a convenient way to reset the terminal's display state without resetting variables. NOW, it just deletes some of the display state, so if you scroll back you get scrambled garbage.

This is not a small thing: it's significantly impacting my workflow. I've seen garbage output, overwritten output, scrolling back accidentally corrupts the output completely, you name it. This is not a coding error, this is correctly copying a limitation of some terrible terminal from the 1960s nobody should give a shit about in 2021.

Worse: If you look back at the history of the debate around fixing this, as far back as the early 1990s there were Linux people advocating for fixing this! It's obviously broken.

Then: When the same debate came up in the Windows Terminal issue tracker, people making the same rational, logical arguments were shot down with: "We have to copy the standard."

Hint: There is no "standard". This is not an RFC. It's just what Linux does! Linux does this just because this is what some random MIT or Berkeley student wrote in a hurry in the 1960s! If you make the argument that the majority of systems work some way or another, and that's a de facto standard... then Windows should win, because it was used for the vast majority of computing for many decades. Linux has taken over only recently, and only in some areas (Android and web servers).


> A recent example that boiled my blood is that the new Windows Terminal emulates the incorrect "Clear-Host" behaviour of ancient Linux terminals.

Hi! I run the team who made this decision, and I flatly reject the characterization that these changes were made by "recent hires" who are "copying Linuxisms into Windows bug-for-bug."

The discussion you're referring to spans multiple threads, so I'm not certain the exact one you're referring to. I can, however, give some of my input:

1. We worked pretty hard to make sure that PowerShell's Clear-Host and CMD's "CLS", which operate on the entire scrollback buffer in the traditional console, are properly translated into a full buffer clear over SSH/in Terminal/etc.

2. Terminal fully supports CSI 3 J, a non-standard extension to the Erase in Display control sequence that clears the scrollback buffer. The handful of tools I am using that support "clear" actually seem to emit both "clear viewport" and "clear buffer." If you're using a version of clear that has been configured NOT to emit ED 3 (clear scrollback history) by default, that's simply not my team's fault. I've heard that you can use "clear -x" to effect your desired behavior.

It turns out that there being no true standard here plays to our advantage: we can support requests for all manner of clear, and trust that an application can be made to do the right thing. If you're using an application that can't be made to do the right thing per at least thirty other terminal emulators' designs--spanning more than just Windows and Linux, mind you--I apologize.
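For concreteness, the sequences under discussion can be emitted directly; this is a minimal sketch (how CSI 3 J behaves depends on the terminal's support for the extension):

```python
# The control sequences discussed above, as a terminal receives them:
#   CSI H  ("\x1b[H")  - move the cursor to the home position
#   CSI 2J ("\x1b[2J") - erase the visible screen
#   CSI 3J ("\x1b[3J") - non-standard but widely supported extension that
#                        also erases the scrollback buffer
# ncurses `clear` emits all three on terminals advertising the E3
# capability; `clear -x` omits the 3 J so scrollback survives.
import sys

def clear(scrollback=True):
    seq = "\x1b[H\x1b[2J"
    if scrollback:
        seq += "\x1b[3J"
    sys.stdout.write(seq)
    sys.stdout.flush()
```

So an application (or shell function) that wants Clear-Host-style "wipe everything" behavior in a supporting terminal just needs to include the 3 J variant.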

If you'd like to reopen that discussion here instead of on GitHub, you are more than welcome to.


> Hi! I run the team who made this decision, and I flatly reject the characterization that these changes were made by "recent hires" who are "copying Linuxisms into Windows bug-for-bug."

Hi! I see Microsoft people on "docs.microsoft.com" use forward slashes now in examples that only work on Windows. So, there's that.

If I open the latest version of Windows Terminal, running PowerShell Core (not some Bash thing with some VTY codes or whatever) and I run "Clear-Host", I can scroll back and see content that wasn't cleared.

THIS BREAKS MY WORKFLOW. I used Clear-Host so I could run a command with pages of output, and then know that scrolling back would go back to that output only, not something that happened three hours ago but superficially looks identical.

If I do this with PowerShell 5.1 with the old Terminal, it works as expected. It works like it has for the last 40 years of DOS and Windows history.

PS: It's broken even worse in Visual Studio Code.

You guys outright abandoned your user base to pander to Linux users. On Windows. Linux users on Windows. That enormous, huge majority of your paying user base that is apparently more important than the hundreds of millions of people like me.

So yeah, you should reopen the issue and rethink your priorities...


>You guys outright abandoned your user base to pander to Linux users. On Windows. Linux users on Windows.

IMO they're going after developers using Macs. Linux (and Docker) runs better on Windows than it does on Mac OS. They're doing _something_ right.


So you're saying that Linux developers on Macs are now Microsoft's target market?

Not Windows developers on Windows?


Basically it feels like going after the SV crowd coding on Macs at Starbucks.


It feels like that Blizzard developer Wyatt Cheng that got on stage and asked the PC gamer crowd: "Do you guys not have phones?" https://www.youtube.com/watch?v=n5QRgpjfarY

The Windows Terminal guys are similar. "Do you have any plans at all to make this work for users of Windows?" "No."


I feel your pain; what used to be Visual Studio wizards are now CLI tools.

If you're lucky, you only need to install VSCode and there is an extension to call said CLI.

Who needs VS designers or Blend for Windows UI development? Just recompile and run WinUI applications.

I really don't get it.


Those recent hires are also affecting how Visual Studio gets developed.

It is starting to become tiresome to get some CLI stuff instead of proper VS wizards.


Microsoft is getting really lazy with their new "PowerShell for everything" mantra. There are many examples in Windows Server and even the Azure Portal. It's great to have the ability to do things with both a GUI and a CLI, but arbitrarily limiting things to one or the other (usually the CLI) gets incredibly annoying.


Let's limit the GUI then :)


Ha, that's actually what I'm complaining about. So many things that can only be done via CLI and there's no way to even know they're possible without crawling through documentation or (more often) finding somebody's blog post or a StackOverflow answer.


PowerShell CLI is totally discoverable tho.
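For example (output omitted; the point is that commands, their help, and object members are all enumerable from the shell itself):

  Get-Command -Verb Get -Noun *Service*
  Get-Help Get-Service -Examples
  Get-Service | Get-Member -MemberType Property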


I'm probably pretty similar in experience to you. I "grew up" with Linux (and Xenix) in the early 90s, not being able to afford to buy Windows Server. I transitioned over to Windows Server around NT 3.51 and now handle both.

All the tooling for headless / remotely-manageable Windows is "in there" in 2008 and later. Prior versions required more jumping thru hoops to manage via command line, but Resource Kit tools helped.


My server administration journey actually began with a pirated copy of Windows 2000 Advanced Server. And I don't feel bad for it, because having started very young with Windows Server has directly led to many tens/hundreds of thousands of dollars in licensing fees paid to Microsoft at my direction :)


You missed a lot of tedious reboots by starting your experience with the Windows NT family w/ Windows 2000. Windows NT 4.0 was a breath of fresh air, as far as the UI went, compared to NT 3.51, but Windows NT didn't really come into its own, in my opinion, until Windows 2000.

The tedious "re-apply the service pack and reboot" or "you thought about changing a setting, so therefore reboot" scenarios were nearly removed with Windows 2000. Having plug 'n play and hot-plug hardware work well was great, too.


I do mostly Windows, and given the option would rather keep doing so, however you just reminded me of the "fun" of writing ISAPI extensions on Windows 2000.

For those unaware of it, IIS used to execute fully on kernel level, so ISAPI extensions were comparable to device drivers, which meant any programming error would just kill the kernel, thus requiring a reboot.

So tracking down memory corruption issues in ISAPI extensions was bound to involve countless reboots during the work day.


Goodness, who decided that was a good idea?!


The ISAPI modules don't run in kernel mode, but they're in-process with IIS (back in W2K) and it's a bit fraught.

I assumed that kernel-mode HTTP service was an answer by Microsoft to the "Tux" web server: https://en.m.wikipedia.org/wiki/TUX_web_server


> Everything you complain about Windows Server not supporting is actually supported, but it was your unfamiliarity that was ultimately the issue.

Funny, I've seen people saying exactly that when newbies bounce off some aspect of Linux, and in both cases it's a usability failure.

Discoverability of this stuff on Windows is terrible. Things like powershell remoting exist but getting them working between two random machines is opaque. Yes, some of it gets easier in a domain, and that's one of the big Windows wins (since the NIS/YP era), but domains are an extremely premium feature.
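For the record, the workgroup recipe is short, just opaque to discover (a sketch from memory, so double-check the details; "server01" is a placeholder):

  # on the target machine
  Enable-PSRemoting -Force
  # on the client, needed outside a domain
  Set-Item WSMan:\localhost\Client\TrustedHosts -Value 'server01'
  Enter-PSSession -ComputerName server01 -Credential (Get-Credential)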


As if UNIX was any better with its archaic configuration files and cryptic commands to avoid typing one extra character.

Most of the stuff I learned about UNIX required actually buying books, not much different from Windows.

Drop someone on a proper UNIX with just man pages and see how far they get joining an NFS server or logging in via Yellow Pages.


It's trivial without a domain, TBH.

If on a workgroup, you just have to have the same user name on the different machines and it works like in a domain.

> Discoverability of this stuff on Windows is terrible

Meh... matter of taste and adequate Windows social skills, not a fact.


"If on a workgroup, you just have to have the same user name on the different machines and it works like in a domain"

... and password. ... and no it doesn't really because an AD has a few other extras, such as Kerberos and an LDAP database with some odd ideas to chat to. You can of course spin up a Samba AD DC or two for a cheaper alternative.


I don't think comparing discoverability of features in Desktop Linux to Windows Server is fair at all.

And really, if you want to talk about discoverability between Linux in a server role and a Windows Server... Windows Server wins by an absolute landslide. But I don't think discoverability is all that important when you're talking about server administration.


This. RTFM


Windows OS is a thing of the past, when people were supposed to do most of their work from a desktop. As soon as computing became based on remote services, Windows lost its main reason to exist. It is Windows that needs to adapt to the new times, not people working on other OSs.


It may be the OS of the past, but most people still use it on their computers, because they are still doing things from the desktop. And I suppose they're happy not having to adapt to the technology du jour again and again.


MSIs, if put together well, easily install silently from the command line.

msiexec /qn /i msi-name.msi
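And if you need to diagnose a failing install, a verbose log helps (standard msiexec switches; the log path is just an example):

  msiexec /qn /l*v C:\logs\msi-name.log /i msi-name.msi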

Edit (now that I have more time):

All the tooling to make Windows command-line manageable is there "out of the box".

In a former "life" I staged a Windows Server 2008 R2 install that was remotely manageable with SSH (using a third-party SSH server) "out of the box". Similar to a "kickstart" w/ the Anaconda installer, you're talking about chaining scripts to run after the installation to enable desired functionality. It wasn't much different than automating a Linux install.

If you enable the serial console you can do some nice command-line management of Windows VMs through your hypervisor's virtual serial port functionality too. I enjoy having the serial console open because I've been able to diagnose "hung" Windows machines (that is, not responding to GUI logon attempts) thru the console. It's very handy.


and /norestart too


It's about 5 lines of PowerShell to install and start OpenSSH. Maybe a few more if you want to automatically inject public keys. Well within the possibilities of a user_data script that runs on the first boot of an AWS instance.
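Roughly this, if memory serves (the capability name is the one current docs use; double-check on your build):

  Add-WindowsCapability -Online -Name OpenSSH.Server~~~~0.0.1.0
  Start-Service sshd
  Set-Service -Name sshd -StartupType Automatic
  New-NetFirewallRule -Name sshd -DisplayName 'OpenSSH Server (sshd)' -Enabled True -Direction Inbound -Protocol TCP -Action Allow -LocalPort 22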

My main pain with OpenSSH on Windows was that it's an older 7.7 port that gets installed. However, 8.1 is finally bundled as part of the recent KB5001391 and fixes some annoying bugs.


I seem to recall authorized_keys works as expected; it goes into %HOME%\.ssh\authorized_keys (I'm not sure/can't recall if there are some similar permission requirements). Anyway, that's eg: c:\users\mcgyver\.ssh\authorized_keys

And ssh needs a restart.

That said, windows remote management is traditionally via WMI, not powershell. But PS has come a long way, and ssh is a sane transport. And it can function as a tunnel for WMI, RDP and PS (ssh is easier to use for key based auth, disabling password auth).


If you're a member of the administrators group, by default it uses something out of C:\ProgramData instead.

https://docs.microsoft.com/en-us/windows-server/administrati...
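If memory serves, that behavior comes from this stanza in the default sshd_config that ships with the port:

  Match Group administrators
         AuthorizedKeysFile __PROGRAMDATA__/ssh/administrators_authorized_keys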


But of course, you're not running as Administrator, right? That's why we have runas... :)


In my experience, even MSIs can be run from the command line, but the amount of effort required to look this up and write a clean script is definitely way more than the equivalent Linux sysadmin task because Windows is just not expecting users to work this way at all.

My gripe with installing things on Linux is that every project assumes install means “build from source”, and it’s often annoyingly hard to find the exact yum/apt-get name for something (do I need a PPA? is the name thing, libthing, libthing-dev, libthing2, or something else?) But this is a much smaller issue than anything I run into with Windows Server.


> Powershell is apparently pretty modern and well designed, but it's quite confusing if you're used to Unix.

bash is pretty confusing even if you're used to unix

-- Greg's


Bash is simply outdated. It grew organically and that's what you got. Now you have a zillion scripts and the situation is even worse than with COBOL.

PowerShell was made by Unix people after 2+ decades of bash experience. It solved hundreds of things.

Yet people complain it didn't solve a few things, or that it takes a bit longer to load (and no, verbosity isn't a flaw: RTFM). You can't ever please everyone, I guess.


Writing powershell scripts on vscode with the ps plugin is quite pleasant. Would recommend everyone to try it.


Were the PowerShell designers from the unix world? I didn't know that.

I don't mind bash's warts, history is what it is, and I don't sell PowerShell... but someone saying PowerShell is confusing has serious blinders on. PowerShell is really nice, and only if you fancy sed/grep-ing everything will you consider bash superior.


> Sure, although, you know, my team – we all have deep UNIX backgrounds, before getting into NT, I was a UNIX development manager at Digital [Equipment Corporation], … worked on Ultrix, System 5.

https://www.networkworld.com/article/3110744/linuxcon-qa-wit...


oh wow, I had no idea

thanks for the link


I don't think I want to compare them - I dislike them both! One is too basic and cryptic, the other one is overly complex for a shell, and in the end, I want to use neither.

I'm not sure what the ideal shell would look like though. Maybe an actual (simple and expressive) programming language with a way to 'lift' statements ?


Oh, my bad then, and the quest for an ideal shell / text user interface is a nice one. Between lispy REPLs and notebooks, I guess there's a better solution out there that is both concise and fun yet not brittle.


The situation gets worse every year with the multitude of new command-line utilities and scripts being created as part of new frameworks, languages and tools. And then everything gets mixed together to form some unstable, ever-changing, inconsistent programming language.


sed/grep/awk have nothing to do with bash tho, and you can run them equally easily on Windows (cinst sed grep awk) if you like outdated regex.


yes, they're not bash, but they're an integral part of bash scripts' pipe processing


They are "integral" only because the shell is lacking those functions.

In some places the tools are not there (in a docker image, for example).

Some people complain about the size of the .NET runtime, but if you install all the tools bash "needs" for comfy usage, you can just as easily install .NET.


I don't think that's right. unix users care a lot about streams of bytes as an exchange format; powershell tried the object-stream way, to avoid manually parsing stuff every time and also to share some common bits (table formatting for free, map/filter for free too)

unix gives you partial stuff for that, like comm, but it's just brittle
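To illustrate the brittleness, a contrived sketch of the byte-stream approach (the service table here is made up, but any column-based parse has the same failure mode):

```shell
# "Fields" are just whitespace-separated columns, so the filter silently breaks
# the moment a column is added, renamed, or a value grows an embedded space.
printf 'NAME  STATUS\nspooler  Running\nsshd  Stopped\n' \
  | awk '$2 == "Running" { print $1 }'
```

The object-stream equivalent would be something like `Get-Service | Where-Object Status -eq Running`, where Status is a typed property rather than column two.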


I don't use it in production but yeah, I've read that automation for Windows servers is mainly geared toward management with PowerShell. It's been a while since I've used it and it's definitely a lot different than bash, but it seemed pretty elegant when I used it last. I like object-oriented programming, so having an object-oriented shell was pretty neat.


But PowerShell feels so clunky and slow to me (disclosure: I'm used to Linux). If you want object-oriented automation tools, Python works much better on Linux. Granted, it's not a shell, but if we're talking automation, then it doesn't really matter.

I think the big issue for Windows is that most "admins" aren't in the habit of automating things, or at most they'll write a basic bat file. They still expect to click around their GUIs, so they'll be fairly reticent to install the Windows Server Core version [0]. "To be able to intervene in case something happens". The main issue with this is that they often aren't aware of possibilities offered only via PowerShell[1]. There's also the issue that when they look things over in the GUI, they won't see the configuration that's only visible through PowerShell.

---

[0] Windows Server Core comes without a GUI, but it can be managed remotely with the usual tools. However, not all server roles work on it. Remote Desktop Gateway is one such example, even though it doesn't have any "desktop" functionality.

[1] For example, setting up split-view DNS. This is possible since Windows 2016, but only via PowerShell, and it's impossible to know from the GUI that it's activated. Also, this configuration doesn't replicate through Active Directory.
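For the curious, the PowerShell-only configuration looks something like this (parameter shapes from memory, so double-check against the cmdlet help; names and addresses are placeholders):

  Add-DnsServerQueryResolutionPolicy -Name "SplitBrainPolicy" -Action ALLOW -ServerInterfaceIP "eq,10.0.0.56" -ZoneScope "internal,1" -ZoneName "contoso.com"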


I don't have much experience with Linux. All our customers (health care, mostly US) have Windows servers. Some of them have some Linux also, but that seems to be a minority. We need to be able to ship software they can run. Most of our development is windows-based.


I think mostly they're forced to use Windows Server because of proprietary software like Outlook, Active Directory, IIS, etc.

It was maybe easier to find people to support Windows servers... Just click here, here and here to install.


All of the modern GUI is just a skin over powershell. Most server functionality/configuration isn't exposed through the GUI.


I kinda like Windows Server, mostly for the stability - both of the runtime and of functionality. Compared to Linux it requires less attention; I can concentrate more on the software and less on the system: after two years of not touching it I can still find configuration options in more or less the same places. With Linux I have to re-learn quite a lot after not paying attention for a while. I miss a good ssh/command line, but not badly - I don't really need advanced shell scripting.


On the contrary, that is how Microsoft fixes the big mistake of not having kept the POSIX subsystem around and improved it throughout the last 25 years.


I thought they agreed to not compete in the Unix world when they sold Xenix? Is it because SCO is dead that they can jump right into Linux?


They implement Linux functionality just enough to suck at it, and then the Windows users that try it think it is Linux that sucks and not Windows.


Let me tell you something interesting:

.Net Core programs run much faster on Linux.

Does that sound like something they would let happen if they weren't serious about their Linux efforts?

That said: all is not good. For years there seems to be a fight going on in the wheelhouse.

One month it is: Microsoft, the reliable, reasonable vendor in a world full of Oracle and Google.

Next month it is: let's increase the cost for this small company by $10000 just because we can.

Next month: something admirable.

Next month: try to push Edge using some sleazy tactic learned from Google's Chrome push, combined with resetting the defaults.

It must be frustrating for the folks who try so hard to drag the rest of Microsoft kicking and screaming into the future, every time some old guy gets up from his wheelchair and turns the wheel all the way to port while the helmsman isn't paying attention :-]


>Does that sound like something they would let happend if they wasn't serious about their Linux efforts?

Hhhm, I think the most remarkable part is that it runs at all, really! .Net Core on Linux by itself shows plenty of commitment.

The performance of it? Considering NT was developed by only a single company, has to maintain a lot of stable/legacy driver APIs, and seems to have been on the back burner for a few years, as a Linux user I'd find it very humbling if our kernel still came out behind :)

It's interesting to see where this is all converging. It'll be easier to run Linux tech on Windows, and all the MS legacy still runs.

But at the same time maybe they're familiarising their loyal userbase with the outside ecosystem a little more than they should. If new Windows users are encouraged to learn Linux ways, at some point you're making it easier for people to transition


Found the Micro$oft fanboy.


The Linux functionality is intended to allow Windows developers to develop for Linux and deploy to Linux.


Compiling on WSL2 would have terrible performance. WSL2 is just there to try to entice companies to say their developers shouldn't need to move to Linux or Mac. No one is seriously developing in WSL.


It's not really targeted at compiling either. Think line of business apps on web technology like .NET Core, PHP, Python, etc...

With WSL you can setup some docker containers or do some orchestration with Kubernetes and push it off to the cloud with minimal effort.

If you really want a full Linux environment you can spin up a VM in Hyper-V. WSL just makes it easier to do things where a VM is a hassle.


Docker containers don't run in WSL.


You still have to install Docker Desktop and enable the integration. You can't simply install docker in your WSL2 distro and use it.


Not accurate. I don't have docker desktop and docker works fine for me in WSL2



They do in WSL2, as I understand it.


Try it out... install Ubuntu using WSL2. Install docker using apt-get. See that it doesn't work without the Docker Desktop install for Windows and a special integration. Try running podman or other container-based solutions and see that they don't work.


Having followed the instructions from https://docs.docker.com/engine/install/ubuntu/, I was able to get this working fine. This is without Docker Desktop installed. It probably would have also worked with the version from distro repository.

  zed@ZED-PC:~$ sudo service docker start
   * Starting Docker: docker                      [ OK ]
  zed@ZED-PC:~$ sudo docker run hello-world
  Unable to find image 'hello-world:latest' locally
  latest: Pulling from library/hello-world
  b8dfde127a29: Pull complete
  Digest: sha256:f2266cbfc127c960fd30e76b7c792dc23b588c0db76233517e1891a4e357d519
  Status: Downloaded newer image for hello-world:latest
  
  Hello from Docker!
  This message shows that your installation appears to be working correctly.

The full output for 'docker run' was much longer, so I've snipped it down to size


Are you even on Windows? This is with WSL 2; as you can see, systemd doesn't work as expected in WSL 2. This is a known issue. If systemd is working for you, then you are doing something magic:

  sudo systemctl start docker
  [sudo] password for u3332:
  System has not been booted with systemd as init system (PID 1). Can't operate.
  Failed to connect to bus: Host is down
https://stackoverflow.com/questions/55579342/why-systemd-is-...


From that link (permalink to answer: https://stackoverflow.com/a/61887923) :

> Nowadays you can try:

> sudo service docker start

> when using WSL2, if you are running on windows version 2004 or higher (I assume).

Which is what I did in the listing above.


  sudo service docker start
  [sudo] password for u3323:
  docker: unrecognized service


Did you install docker from their own PPA linked in the instructions above, and are you on Ubuntu 20.04 with WSL2? Those are the only steps I took.

e:

  ver
on 'cmd' also shows

  Microsoft Windows [Version 10.0.19042.928]
if that helps

e:

Everything except the service start, which I had to find in that StackOverflow thread. We've hit the max comment depth, so I couldn't respond to you directly.


Are you referring to the install instructions for Ubuntu? Nowhere do they mention running the command:

  sudo service docker start


If you run the Docker daemon manually, it works just fine. I'm running a container on it right now.

It's not a problem with Docker on WSL 2, but a problem with the way WSL uses its own init system instead of systemd, while some Ubuntu packages are packaged for a systemd system.


> Compiling on WSL2 would have terrible performance.

define terrible. Compared to what ?

> No one is seriously developing in WSL.

A very strong statement. Do you have any proof perhaps ?


> define terrible. Compared to what ?

Terrible = significantly worse than a native Linux distro that doesn't use a file system integration layer.

> A very strong statement. Do you have any proof perhaps ?

Just experience working in the industry.


> worse than a native Linux distro that doesn't use a file system integration layer

What filesystem integration layer?



Yes, having your compiler on Linux access its files over 9P (the Plan 9 file protocol WSL 2 uses for cross-OS file access) would indeed have terrible performance. That's why Microsoft tells you not to do that.

Just run everything on the Linux kernel and compilation will be the same as on Linux. You do have a choice of where you put your files.


Why would they need WSL2 to do that? AD and Office were and still are plenty enough to keep MS on corporate PCs.


[flagged]


No, this would just be embrace; they have not extended the eBPF capabilities. It would be "extend" if there were one-way compatibility, which doesn't seem to be the case.


If they patent ways to analyze more complex eBPF programs that would be extend in a very literal sense of the term.


I don't get what you're trying to convey here. Yes, there are hypothetical situations we could talk about?


Sounds like this time around they are going with embrace, import, extinguish.


[flagged]


This seems like the exact opposite of EEE. Care to explain your thinking?


If you're thinking in EEE terms, this is Embrace. Extend and Extinguish come later, once you've got a preponderance of developers on your platform.


Linux is cited as an example in the link: https://en.wikipedia.org/wiki/Embrace,_extend,_and_extinguis...


One assumes the idea is to make windows have feature parity with linux, then try to leverage network effects for features windows has but linux doesn't, such that the "extinguish" step will follow the "extend" step.


That's called "being better than the competition" and not really what EEE was meant to describe.


I'm not sure there's any crisply definable bright line separating them. Generally embrace/extend strategies do involve trying to make extensions that some people will consider valuable.


Pretty cool, but let's not fool ourselves. Recall history, and why Microsoft is doing this. This is a long-term strategy.

https://en.wikipedia.org/wiki/Embrace,_extend,_and_extinguis...


Has Microsoft been doing a lot of embracing, extending, and extinguishing lately? I guess Typescript kind of embraces and extends Javascript, but I think every frontend engineer will be doing a happy dance if they manage to extinguish it. Not quite 1990s Microsoft's approach.


Microsoft is really being bold about adopting open tech to improve its bottom line (Azure now runs more Linux VMs than Windows ones), and now with eBPF. Truly a new and different Microsoft.


On top of Hyper-V, a Windows tech stack.



