Any sufficiently advanced uninstaller is indistinguishable from malware (microsoft.com)
887 points by mycall on Sept 13, 2023 | 510 comments



Here's the CodeProject link the code came from.

https://www.codeproject.com/Articles/17052/Self-Deleting-Exe...

> Whether they follow the licensing terms for that code I do not know.

I'm guessing they didn't ship the binary with a link pointing back to this page?

There's also another CodeProject example that uses a .bat file, which is fairly similar to the recommendation in the post. I guess that's the better example.

https://www.codeproject.com/Articles/4027/Writing-a-self-des...


At least the author seems to agree with Raymond Chan on the similarities between his approach and malware...

> shellcode is the technical term (in security circles) for binary machine code that is typically used in exploits as the payload. Here's a quick and dirty way of generating the shellcode from the obj file generated when you compile your source files. In our case, we are interested in whipping the shellcode up for the remote_thread routine. Here's what you've got to do:

The whole article has the vibes of some questionable DIY blog along the lines of "Your house is infested by vermin? Here is an easy way to get rid of them using a small, homebuilt neutron bomb!"


Unfortunately CodeProject is full of code like this.


Nit: Raymond Chen, not Chan


Ah, I'm sorry. That happens when you write messages on the go... seems too late to edit the message though unfortunately.


Funnily enough, real malware does this correctly. Usually by just ShellExecing "ping -n 3 127.0.0.1 >nul" followed by "del" - no temporary file needed.
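
For illustration, a minimal Win32 sketch of that trick (the function name and exact command line are illustrative, not taken from any particular sample):

    #include <windows.h>

    /* Spawn a cmd.exe that pings localhost three times (roughly a two-second
       delay) and then deletes our own executable. By the time "del" runs,
       this process has exited and the file is no longer locked. */
    void self_delete_and_exit(void)
    {
        char exe[MAX_PATH], cmd[MAX_PATH * 2];
        GetModuleFileNameA(NULL, exe, MAX_PATH);
        wsprintfA(cmd, "cmd.exe /c ping -n 3 127.0.0.1 >nul & del \"%s\"", exe);

        STARTUPINFOA si = { sizeof(si) };
        PROCESS_INFORMATION pi;
        /* CREATE_NO_WINDOW: the child gets an invisible console and holds
           no handle on our executable. */
        if (CreateProcessA(NULL, cmd, NULL, NULL, FALSE, CREATE_NO_WINDOW,
                           NULL, NULL, &si, &pi)) {
            CloseHandle(pi.hThread);
            CloseHandle(pi.hProcess);
        }
        ExitProcess(0);  /* unmap the exe so the pending del can succeed */
    }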


The author says the binary looks like malware because it self-deletes, sleeps and touches this uninstaller thing. But the script he proposes, which would be triggered by the same thing, does the same. I am ignoring the injection part since he guesses at it (likely correctly) and also because lots of things inject into processes without being malware (monitoring stuff like AV etc.). Additionally, binaries which terminate by running some script via a script host... couldn't this just as well be malware? (Stage-1 malware downloads a script, runs it via a script host?)

My question(s): How is the proposed solution better than the original thing? Isn't this a case of using bad heuristics to determine maliciousness?

In the end, he goes a bit further, and sees it's non-malicious. So, with a more elaborate rule or heuristic, wouldn't it be clear it's not malicious?


The .js script isn't injecting code into another program in order to delete itself; it is deleting itself directly.

It can do that because, I'm guessing, the file isn't open; the run-time isn't executing instructions from that file. The file was read, the content compiled into memory, and the file closed.

The script is deleting its source code, not itself. What actually deletes the script itself is the garbage collector in the run-time. Once the fso.DeleteFile call and the for loop are executed, they are no longer reachable and so they are garbage. If there is a way for the "var fso" and "var path" variables to go out of scope, they become garbage also.

A binary executable is mapped into memory while the process is running it. In Windows, an open file cannot be deleted.

But, even on Windows, a prog.exe could delete the prog.c source code it was compiled from, right? Same thing, sort of.


Being pedantic, you can delete open files on Windows if you open them with FILE_SHARE_DELETE.
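
A minimal sketch of that, assuming a file named victim.txt exists:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Open the file, but explicitly permit deletion by other handles. */
        HANDLE h = CreateFileA("victim.txt", GENERIC_READ,
                               FILE_SHARE_READ | FILE_SHARE_DELETE,
                               NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
        if (h == INVALID_HANDLE_VALUE) return 1;

        /* Succeeds despite the open handle: the file is marked delete-pending
           and actually goes away when the last handle closes. */
        if (DeleteFileA("victim.txt"))
            puts("deleted while still open");

        CloseHandle(h);
        return 0;
    }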


The script doesn't inject. But a lot of malware downloads a script and runs it, so you'd hit another rule.


The way I understand it, the uninstaller program that wants to delete itself doesn't have to download the script from anywhere; it generates the script out to a file.
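
Presumably something like this (a rough sketch: the 20-tries/500 ms retry mirrors the article, everything else is illustrative, and a real version must escape backslashes in the path before embedding it in the JS string literal):

    #include <windows.h>
    #include <stdio.h>

    /* Write a throwaway cleanup.js to %TEMP% and hand it to wscript.exe.
       The script retries the delete because the uninstaller that launched
       it is still running for a moment. */
    void launch_cleanup_script(const char *exe_path_escaped)
    {
        char script[MAX_PATH];
        GetTempPathA(MAX_PATH, script);
        lstrcatA(script, "cleanup.js");

        FILE *f = fopen(script, "w");
        if (!f) return;
        fprintf(f,
            "var fso = new ActiveXObject('Scripting.FileSystemObject');\n"
            "for (var i = 0; i < 20; i++) {\n"
            "  try { fso.DeleteFile('%s'); break; }\n"
            "  catch (e) { WScript.Sleep(500); }\n"
            "}\n"
            "fso.DeleteFile(WScript.ScriptFullName);\n", exe_path_escaped);
        fclose(f);

        char cmd[MAX_PATH + 16];
        wsprintfA(cmd, "wscript \"%s\"", script);
        STARTUPINFOA si = { sizeof(si) };
        PROCESS_INFORMATION pi;
        if (CreateProcessA(NULL, cmd, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi)) {
            CloseHandle(pi.hThread);
            CloseHandle(pi.hProcess);
        }
        /* The caller should now exit promptly so the script's delete succeeds. */
    }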


This is true; a good heuristic would perhaps see the difference. But malicious scripts can also be generated rather than downloaded (or, more commonly, decrypted from some seemingly random data), so it can be hard to tell, especially given that threat actors have easy access to security products while the opposite is not always true.


Is the behavior that a running .js script is fully loaded into memory and the file doesn't need to exist documented, supported behavior?

What if, hypothetically, the system was suspended in the middle of script execution, and the resume function was designed to reload the script from disk?

It just feels like a different hack to me.

Also - trying 20 times and pausing 500 ms seems wasteful. What are the chances that it's going to succeed a subsequent try if it fails the first try? Why not catch the error message and only retry for errors that have a plausible chance of succeeding if you retry?


There is never a good reason to inject code into another process - particularly a system process. At the point at which you believe this is necessary you are several layers of hackiness deep and should go find a beverage and think over what your actual goal is.

As a metaphor: you find the instructions to sweep your floor cumbersome so you reprogram your neighbor's Roomba to come clean your floor. Sure, it may well go back to their house and no harm done, but it's hacky, socially unacceptable, and no matter how hard your broom is to use it's not OK.


Security products have valid reasons, though maybe not _good_ ones. Forcing PLT and GOT entries to be bound rather than lazily loaded, forcing certain segments to be read-only and hooking a bunch of stuff is necessary for them, and that can only be done by suspending processes at startup and then injecting into and modifying them. It's a band-aid on a bad system, hence it's a valid but maybe not good reason (better to prevent than this cure of theirs..)


> Is the behavior that a running .js script is fully loaded into memory and the file doesn't need to exist documented, supported behavior?

If Raymond Chen says to rely on it then yes.


Even though I admit this guy is genius level, it's not really good to rely on one person's judgement for anything. That's a bad practice in general. Zero trust and all :)


Even if it doesn't, I think you could do something similar by just spawning a shell (command prompt) that executes a small script trying to delete the file. You just have to take care to make sure the process is detached from the original one and then let the spawning process terminate to release the lock on the executable. PowerShell could also work, but I know it is pretty restricted in a lot of environments. These completely avoid any intermediate file.

I think the retry is necessary because if you launch "wscript cleanup.js" from the process that wants to be cleaned up, you then need to wait for the spawning process to finish executing. I agree that if it fails after 20 tries, you should probably spawn an alert or something letting the user know that the uninstall failed. There are also many random processes that might take a reference on the file, like antivirus on Windows, so just spamming retries helps wait that stuff out (this problem does not exist on Linux, since Linux lets you unlink an open file's path immediately; the inode lives on until the last reference is gone, which has the downside of not keeping specific paths around, just inodes).


Agree, I'm shocked at how ugly the recommended alternative is. This does not make MS look good.


There are plenty of other solutions; that was merely one straightforward option that could be shown in a few lines of code and which doesn't require *injecting code into a binary you don't own*.


Sure, but there isn't even an offhand remark about how hacky this kind of polling is. It's presented as if it's a completely normal way to do things in reliable software.


No worse than the 10 minute delay rule in DllCanUnloadNow.

DllCanUnloadNow returns an indication that a DLL can be unloaded. A DLL cannot be unloaded if any threads are executing its code. But a DLL can only change to the unloadable state by executing some code, and that code has to return after it has set the indication. Only after it returns is the DLL actually unloadable! So a delay is needed for that thread to vacate the DLL.

https://groups.google.com/g/microsoft.public.vc.atl/c/AQvHCW... [2001]
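
For context, the COM boilerplate in question looks roughly like this (a sketch; the counter names are mine):

    #include <windows.h>

    static volatile LONG g_object_count;  /* live COM objects in this DLL */
    static volatile LONG g_lock_count;    /* outstanding LockServer() locks */

    /* The race: the thread that releases the last object is still executing
       code inside the DLL when the counts reach zero, so COM cannot safely
       FreeLibrary the moment this starts returning S_OK - hence the delay. */
    STDAPI DllCanUnloadNow(void)
    {
        return (g_object_count == 0 && g_lock_count == 0) ? S_OK : S_FALSE;
    }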


So in the present example from Raymond Chen you need the loop for a similar reason.

The binary .exe program which the script is trying to delete is the one which created the script and launched it. So that means the .exe is still running at that point and cannot yet be deleted. Launching the script indicates "I'm about to die", not "I'm already dead". The script cannot delete the .exe until the .exe terminates. Without some event to indicate that, you poll.

The script knows it can delete itself, so it tries that only once.

If a handle could be attached to the process, then the script could do a WaitForSingleObject on it; that would be the prim and proper way.

It doesn't seem worth doing; the chances are low that the process cannot terminate within 20 seconds of launching the reaper script.
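
That variant would look something like this (a sketch, assuming the uninstaller passes its own PID and path to a small native reaper on the command line):

    #include <windows.h>
    #include <stdlib.h>

    /* reaper.exe <pid> <path-to-exe>: block until the uninstaller exits,
       instead of polling, then delete its binary. */
    int main(int argc, char **argv)
    {
        if (argc < 3) return 1;
        DWORD pid = (DWORD)strtoul(argv[1], NULL, 10);
        HANDLE h = OpenProcess(SYNCHRONIZE, FALSE, pid);
        if (h) {
            WaitForSingleObject(h, 20000);  /* wait up to 20s for the exit */
            CloseHandle(h);
        }
        return DeleteFileA(argv[2]) ? 0 : 1;  /* mapping gone; delete works */
    }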


I am not shocked, but it's hacky like the comments suggested. Ultimately it's done because of the lack of a better alternative. I'd still follow Chen's advice even though it's hacky.


Once you move past comparing hashes against known malware (by definition useless against novel malware), and the slightly more complex matching of specific binary strings, detecting malware with "shitty heuristics" is basically all we've got.

Companies that buy AV/EDR products expect them to detect unwanted behaviour while allowing any sort of weird, hacky, abuses of the system that they rely upon for their business.

It's never been entirely clear to me why Windows provides such a rich interface for one process to inject and start executing code in the address space of another, but IMHO I want to know when this is happening, even when it's done by a "legitimate" uninstaller.


The first point is true, I admit. There are very complex and good ways to identify stuff, but those perform so badly they cannot be used in practice.

AV/EDR products do try to prevent a lot of stuff. They can 'generically' block things like injection etc. by 'hooking all the things' and injecting into everything (yes, yuck :D and still kind of heuristic-based, I admit!) to make certain sections read-only or remove executable mappings etc. (GOT/PLT/stack/...)

Linux, or more specifically ELF files, also has an easy injection vector: a dynamic table entry for debugging purposes which can be trivially overwritten, for example. "ibc.so" :(). I'm not sure anyone uses that entry validly... especially since there are better/less awkward debugging interfaces than injecting a debugger DLL into something :') at least in x86/64 Linux land. (ELFShell sure was fun tho!)


If you're talking about LD_PRELOAD, I used it for an integration test suite of low level system components.
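
For anyone unfamiliar: an LD_PRELOAD interposer is just a shared object whose symbols win over libc's. A minimal sketch (the faked function is arbitrary):

    /* fake_uid.c: gcc -shared -fPIC -o fake_uid.so fake_uid.c
       Run as:     LD_PRELOAD=./fake_uid.so ./program_under_test */
    #include <sys/types.h>
    #include <unistd.h>

    /* The dynamic linker resolves getuid() to this copy first; the real
       libc version is still reachable via dlsym(RTLD_NEXT, "getuid"). */
    uid_t getuid(void)
    {
        return 0;  /* pretend to be root for the program under test */
    }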


I'm not sure how the implementation of LD_PRELOAD works, but it is a linker directive if I am correct. The thing I am on about is the DT_DEBUG dynamic table entry. It's an entry meant to allow a debugging DLL to be loaded. You can overwrite it and point it to a malicious DLL (with a bit of difficulty) to get injection going. It's then hardcoded into the binary by your modification. Admittedly, maybe LD_PRELOAD is easier, if that's allowed on a system.


LD_PRELOAD is handled entirely by ld.so, which is itself referenced inside your ELF by the linker. It not only handles that but also all dynamic linking inside your program.

By writing a custom dynamic linker you can easily intercept all dynamic linking done at runtime and provide whatever you want to the program.

gcc -Wl,-dynamic-linker,/path/to/my/linker myprogram.c -o myprogram


>How is the proposed solution better than the original thing?

I'm only assuming here, but maybe because it won't crash explorer and it's just a few lines of self-documenting code?


Haha, well, fair enough, the crash is bad indeed, good point! This isn't intended behavior though, and presumably it doesn't crash in most cases where this technique is implemented in uninstallers. (A bit of a guess, I admit!)


The fact it injects into another process means they can't know if it'll crash or not. You're just one Explorer update away from things changing enough for the hack to crash it.

I guess they could do this more robustly. I.e., pause the entire explorer process, save all its state, remotely allocate new memory to inject their code, remotely create a new thread, run only that thread using the injected code, restore all the process's state and finally start it running where it left off. A script would be easier though.


The problem isn’t that the injected thread is racing explorer - indeed, pausing the entirety of explorer to run your uninstaller would probably be strictly more dangerous than what they’re doing - the problem is that the injected thread is using function pointers that do not exist in explorer.exe. Most likely, the reason is that the uninstaller itself has been “detoured” by yet another program to patch calls to certain functions, and it’s copying the detoured addresses instead of the addresses to the real functions.

Both detouring and remote thread injection are supported on Windows, but fall into the category of gray-hat techniques; there are some legitimate uses but quite a lot of illegitimate uses, and using these techniques correctly (without crashing anything!) can be a real challenge.
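
For reference, the core of the remote-thread technique is only a handful of calls (a bare sketch; as the parent says, the hard part is everything around it: the payload itself, its pointers, and error handling):

    #include <windows.h>

    /* Copy a payload into another process and run it on a new thread.
       This is exactly the call sequence that AV heuristics key on. */
    BOOL inject_payload(HANDLE process, const void *code, SIZE_T size)
    {
        void *remote = VirtualAllocEx(process, NULL, size,
                                      MEM_COMMIT | MEM_RESERVE,
                                      PAGE_EXECUTE_READWRITE);
        if (!remote) return FALSE;

        if (!WriteProcessMemory(process, remote, code, size, NULL))
            return FALSE;

        HANDLE thread = CreateRemoteThread(process, NULL, 0,
                                           (LPTHREAD_START_ROUTINE)remote,
                                           NULL, 0, NULL);
        if (!thread) return FALSE;
        WaitForSingleObject(thread, INFINITE);  /* let the payload finish */
        CloseHandle(thread);
        return TRUE;
    }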


Agree completely.

I would have assumed (naively?) that they could just copy their uninstaller into a temp folder and run it from there, and just rely on the OS to nuke it in due time, but as a consumer I appreciate the thoroughness of an uninstaller that leaves no trace.


Honestly, I think your idea about the tmp dir is better than the no-trace option, especially given the hacky nature of self-deleting stuff. If you have so many uninstallers around that the disk fills up, tmp-dir cleaning would fix it; that's a common first thing to clean when cleaning the filesystem. I like your idea :)


The technique could have been made a little more robust by calling GetProcAddress to get the function pointers in Explorer's context, assuming GetProcAddress wasn't itself detoured.
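
Roughly like so (a sketch, assuming the payload receives a struct of function pointers; kernel32.dll is mapped at the same base address in every process of a boot session, so addresses resolved locally are also valid inside Explorer):

    #include <windows.h>

    typedef struct {
        FARPROC pSleep;        /* functions the injected payload will call */
        FARPROC pDeleteFileA;
    } PAYLOAD_FUNCS;

    /* Resolve fresh addresses from kernel32 instead of copying our own,
       possibly detoured, pointers into the remote payload. This still
       assumes GetProcAddress itself has not been detoured. */
    void resolve_for_injection(PAYLOAD_FUNCS *f)
    {
        HMODULE k32 = GetModuleHandleA("kernel32.dll");
        f->pSleep       = GetProcAddress(k32, "Sleep");
        f->pDeleteFileA = GetProcAddress(k32, "DeleteFileA");
    }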


Cybersecurity is pretty much all bad heuristics with the belief that if you use enough of them they average out to an ok determination of maliciousness. It works alright, sometimes.


It's more about how it does it: injecting executable code directly into the stack so that some other code unwittingly transfers control to it. Stack-smashing is a lot more malware-ISH than a few lines of shell script.


They inject something into Explorer. I would assume that to be some DLL that is injected?

> Neither code injection nor detouring is officially supported. I can’t tell who did the detouring. Maybe somebody added a detour to the uninstaller, unaware that the uninstaller is going to inject a call to the detour into Explorer. Or maybe the detour was injected by anti-malware software. Or maybe the detour was injected by Windows’ own application compatibility layer. Whatever the reason, the result was a crash in Explorer.

I'd think the anti-malware guess here would be correct, and that (DLL) injection was stopped, and thus some crash happened. Thanks for your reply.

The DLL would execute something from its stack so it can be somewhat dynamic (perhaps some path or something is generated before the injection - really little to go on here...) and not need to make a heap allocation within Explorer.exe. (This is perhaps a bit too much to assume, idk.)

Thanks for your insights!


>I can’t tell who did the detouring

I was thinking: hmm, and so the game of whack-a-mole continues.

If a new OS could be designed from scratch, there must be a way to prevent this sort of stuff.


I am making an OS from scratch. But admittedly, I am so far from this stuff I will never ever have anything even closely related to such a problem, hah. It's so much work :'( (fun tho!)


Maybe it’s less likely to be flagged or interfered with by antivirus? Antivirus uses all kinds of shitty heuristics, seeing that my Go executables built with -ldflags="-H windowsgui" are flagged as malware by Windows Defender and co. all the time. It’s maddening.


That's fair enough. Perhaps in AV land they can more easily make some kind of signature that whitelists this self-deletion JavaScript method because the script is more readable. Though I'd expect a good AV to be at least worried if some executable on my system runs a script file via a script host.


Why do Windows programs need special installers/uninstallers? Why isn't this handled by Windows itself?


Windows has had an installer as an OS component since the late 90s (called Windows Installer). As a sysadmin I'd prefer apps use it. Many application developers do not. It's maddening. (Doubly so when Microsoft themselves don't use it-- newer versions of Office, Teams, etc. Microsoft suffers from too much NIH.)

I get unattended installs and uninstalls "for free" when well-behaved applications use Windows Installer. Patching is included, too. Customizing installations is fairly straightforward.

On the developer side it has historically used a very quirky proprietary file format (MSI) with a fairly steep learning curve and a ton of "tribal knowledge" required to make it work for all but the most trivial cases. (Though, to be fair, most installs are the trivial case-- copy some files, throw some stuff into the registry, make some shortcuts.)

Worse, it allows for arbitrary code execution ("Custom Actions"), at which point all bets are off re: unattended installs, removal, etc. Some Windows Installer packages are just "wrapped" EXEs (Google Chrome, for example).

I've packaged a ton of 3rd party software as Windows Installer packages. It's an ugly system with lots of warts and legacy crap, but if you need to load an application on a large number of Windows PCs reliably unattended it's decently fit for purpose.

There is reasonable free and libre tooling to generate MSI packages from plain text source (the WiX toolkit) and it can be used in a CI pipeline.


Can confirm. I would be considered by most to have been a Windows Installer expert at one point. Installshield / Wix / Whatever else.

It is intentionally obtuse at times (the MSIFileHash table uses rearranged MD5 hashes, for example), and many features made sense for the late-'90s/early-2000s era, when bandwidth was low, connectivity was limited, and lots of stuff was distributed on CDs. The look on people's faces when you explain advertisement to them the first time... How their app can get stuck in a repair loop because of a piece of unrelated software...

It was deprecated by the newer AppX/MSIX/AppV formats, which use sandboxes, binary chunks/streaming and no executable code to install stuff.

For my own desktop computing, I prefer MSI packages because I prefer having control post-install to tinker with things if I feel like it. Also, I have the skillset to modify the installer to my whims if I so choose.


> It was deprecated by the newer AppX/MSIX/AppV formats, which use sandboxes, binary chunks/streaming and no executable code to install stuff.

I can offer a little perspective on MSIX, having devoted months of my life to it in a past job.

MSIX is nearly unusable outside the Store. It will work in a tightly controlled environment, but when you try to deploy it to a wide variety of users you will run into 1) unhelpful errors that basically can't be diagnosed, 2) enterprise environments that cannot/will not allow MSIX installs. I get the impression that the MSIX team is uninterested in solving either of those issues.

It's not a coincidence that virtually no first-party teams use MSIX to install their product outside the Store. Office tried for a while and eventually gave up.

Despite all that, there are still a few people at MS who will tell you that MSIX is the future. I don't really understand it and I assume there's a weird political reason for this.


MSIX can be made to work in that context. We've done it, although it required writing our own installer EXE stub that invokes the package management API rather than using Microsoft's own "App Installer" app, and doing lots of remote diagnosis to solve the mysterious bugs you were hitting. I would indeed not recommend anyone try to use it with Microsoft's own tooling.

Still, when you finally make it work you get a lot of benefits. MSI is clearly dead-end tech, which is why so many MSIs are just installer EXEs wrapped in the file format. It doesn't have any clear path to modern essentials like online updates, keeping the OS clean, sandboxing and so on. If you were on the Windows team, what would you say the future was?

For enterprise environments it's actually somewhat the opposite: MSIX packages can be installed without admin privileges due to their declarative nature, and it's very easy for admins to provision MSIXs to Active Directory groups because they don't have to do any repackaging work. Yes, some admins have hacked Windows to stop it working because when MS launched the Store they didn't provide any way for admins to opt out, but these days they have the knobs they need. Also, because they're just zips you can always just unzip them into your home directory to get at the app inside. It won't auto update that way, but as long as EXEs can run from the home dir it can work.

Products like Office and Visual Studio have entire teams devoted to nothing but their installers, which is clearly going too far in the opposite direction. Most products won't want to do that.


Orca ftw


If you go way down the rabbit hole, you end up at modifying OpenMCDF.


> with a fairly steep learning curve and a ton of "tribal knowledge"

Yes, people prefer to debug their own code rather than spend a shitload of time understanding WiX/MSI.

Microsoft deciding early on not to produce low-cost tools for Windows Installer also didn't help with adoption.


The joke is, Microsoft devs even now use NSIS for things like VSCode rather than deal with MSIs lol

But there is the modern implementation of AppX bundles, which was later extended to create MSIX, which allows app distribution without the Windows Store. There are still drawbacks to using MSIX, usually because you want to touch Windows in ways you can't from inside the sandbox.


To my understanding, MSIX today supports the full MSI catalog and will do entirely unsandboxed (unattended [0]) installs if you want it to. But you need to understand all the same complexity of MSI to build installers in it. The biggest remaining difference between MSIX and MSI is that an MSI is a strange nesting doll of ancient binary database formats that is tough to build (which is why WiX exists, and is part of why WiX is so complex), whereas MSIX is "just" an ordinary ZIP bundle of your files plus XML manifests. With the final punchline being that those XML manifests have yet another dialect from WiX's ancient MSI-database-influenced XML, and of course it also isn't as simple as deleting the WiX compiler and just zipping a WiX project.

In my experience, you can pretty easily write the nice sandboxed MSIX manifests by hand; it's not too bad. But for general MSIX doing weird MSI things you still want better, more expensive tools to build them (and of course Microsoft themselves still don't exactly provide that, and will point you to plenty of expensive third-party installer studios for options, many of which are the exact same ones people have been overpaying for decades).

[0] The one complaint I'm aware of is that you can't do custom installer UI and "attended" installs with user choices. There's one MSIX UI and it is barebones but acceptable. That's all you get.


> Doubly so when Microsoft themselves don't use it

Often, as you mentioned, Windows Installer packages are wrapped by an executable (in WiX this is called a "bundle" because you may also choose to add redistributables like the C++ runtime).

However, what you see in installations like SQL Server, Office and Visual Studio is that the installers are bundles as well - of a large amount of MSIs that need to be installed and uninstalled in a specific order. A single Microsoft Installer package is transactional and can be rolled back on failure, but bundles are not as well defined and left open to the implementation of the bundle. Windows Installer does not reach beyond the single msi package.


As soon as Office 2007 didn't use MSI the format was doomed.

I assume the "Here" in NIH refers to an individual team, not MS as a whole.

Teams is entirely NIH https://github.com/Squirrel/Squirrel.Windows for updates to the Electron app.

I would use winget, but MS made it weirdly hard to run as a script on multiple computers; it installs per user, because... who knows.

So still using chocolatey


To be fair, Squirrel came from GitHub and early Electron before Microsoft bought GitHub, so it wasn't Microsoft's NIH that built Squirrel originally.


True, I accidentally used NIH with the opposite meaning; I meant it was not invented at Microsoft.


I had a friend who worked for a company that specialized in Web browser bars and MSIs. In other words, they were a shop putting all kinds of malware into these things. It was a viable business model for a company of something like 50 people.

The whole story and set of ideas behind installing programs on Windows is a stupid joke. It's designed by project managers who have no idea what they are doing and no imagination, and executed by the cheapest, least motivated programmers from South Asia Microsoft can possibly find.

A lot of people who do useful stuff for Windows try to stay away from as much of Microsoft's stuff as possible, and integrate it as little as possible with anything else coming from the system because it's just maddening how horrible everything there is.


A lot of weird things in Windows are reflections of the gestalt of the '90s and early 2000s. People went all in on all sorts of OOP-derived weirdness, like CORBA and COM.

"Plain-Text files for configuration? what do you think we are? savages? no, we need a hierarchical registry! every serious program must use some opaque binary format to store stuff!" seem to be the general animus at that time. Nowadays, even if you really hated the idea of a text files for configuration in your home direction, people would probably do something more straight-forward like using a SQLite db.


Agreed re: some of the Windows "strangeness". I think there was some amount of needlessly "Enterprise" architecting going on at MSFT back in the day.

There were also very practical solutions incorporated to accommodate the constraints of the hardware of the time that come off looking like needless complexity today, too. (There's also, arguably, some laziness in the old binary file formats that were nothing more than dumps of in-memory structures. That's a common story across a ton of old software-- not just MSFT.)

Rob Mensching, who worked at Microsoft on Windows Installer pre-release, has a nice blog post about internals of MSI.[0] He goes into some of the overwrought architecture in MSI, as well as quirks to overcome performance limitations in a world of floppy disk-based distribution and much smaller memory capacities. It's a good read.

[0] https://robmensching.com/blog/posts/2003/11/25/inside-the-ms...


WiX is part of the problem. It's basically making money for the developers who offer consultancy for it.

Therefore the documentation is poor, like the absolute worst I've ever seen. Opening issues for doc problems never results in anything. Pointing out UX issues is usually shot down. Finally, until this year you needed .NET 2 installed to build it, which does not play well with Windows Docker.


I don't think any major desktop OS handles this well.

I suspect the final form for software installation is probably where iOS and Android are going in the EU, where there's a single means of installing software to the device so that everything can be sandboxed properly, but the acquisition/update process can be pointed to a URL/Store that the user has pre-approved.

macOS comes pretty close to what I'd ideally want in an OS with regards to installation - independent packages that are certified/notarised, but I'd like to see the OS allow for user-specified authorities beyond just Apple. That being said, I'm not sure I'd ever use them as it's part of what I'm paying Apple for, I'm really thinking more of Linux there.

A kind of flatpak/snap approach, but that has signing of the package and centralised management of the permissions for the sandbox at an OS level would be ideal in my view. That way it's still free-as-in-speech as the user can specify which notarisation authority to use (or none at all).

I really don't understand why separate programs are handling the removal of their parent program in 2023; that's registry-spaghetti messy.


Everyone is pointing at Windows, but there is still installer software on macOS. Normally crusty old corpoware like Citrix that needs to extend its tentacles into the whole system.

On Unix/Linux land the prevalence of pipe curl to bash type installers is not much different.

I normally keep both types away from my computers.


> On Unix/Linux land the prevalence of pipe curl to bash type installers is not much different.

This is a problem, but only if you install software on Linux by manually going to the project page and copy-pasting whatever curl command they have there. I think the difference is that you're mostly encouraged to go the package-manager route, whereas on Windows downloading .exes directly (a la the curl example) is the norm.


It seems to be increasingly the case that package managers just don't have some software - or have a version that's years out of date. Perhaps the number of different ones available has become self-defeating.

Directly sudoing a curl-ed script is like running a binary on Windows with admin permissions and with Defender turned off, which makes it somewhat more scary to me.

On Windows I use Chocolatey when I can, and if I can't (or it looks dodgy anyway) I'll either just not install it or try it in a sandbox. Things that aren't choco-able are generally commercial software obtained from the vendor's download page, we theoretically trust those things somewhat. YMMV.


> Directly sudoing a curl-ed script is like running a binary on Windows with admin permissions and with Defender turned off,

Most people would just say yes to any prompt they get, those wise enough not to aren't running random curl scripts either.

As for Defender being any kind of protection, I have my doubts.

> it seems to be increasingly the case that package managers just don't have some software - or have a version that's years out of date.

This is entirely distro-dependent; some are very up to date and have most things you'd want, especially if you include the likes of the AUR in that. But then there's usually a Flatpak or an AppImage that you can use in the odd case that they don't.


Actually no, the problem with curl | bash is that it can be detected on the server, so if the server is compromised, it can serve you malware and you will never know about it. It is safe(r) to curl > file, inspect the file, then execute it under bash.


The result of inspecting such a file is usually a series of disgusted shudders: "this will do WHAT to my machine?"


Sometimes a smile at the clarity and simplicity of the author's shell code. Sometimes.


A rare delight but it does happen


The only installers I've seen are the .pkg installer bundles, which leave behind a manifest for automated uninstalling.


> On Unix/Linux land the prevalence of pipe curl to bash type installers is not much different.

True, but saying so is likely to earn you downvotes from those committed to this unhygienic practice ...


You are basically describing what Windows has as AppX/MSIX. The decentralized notarization authorities are the code-signing certificate providers.


I had not seen this, but it absolutely does (on the surface) seem like a solution to this problem. Thanks!

I’d need to educate myself a bit more in terms of whether there are third-party authorities beyond Microsoft for the packages.

Found this introductory video for anyone else interested:

https://www.youtube.com/watch?v=phrD081sMWc

Note: I didn’t intend the Surface pun above, but it happened and we can all be glad that it did.


Yes there are a few certificate authorities. For example DigiCert, SSL.com and others. You can also create your own e.g. for enterprise deployments. Or you could even set up a public CA if you wanted to, the process is standardized.

Microsoft will sign for you if you distribute via their store; otherwise you pay per year for certificates and can distribute outside the store.

There are problems with the system (cost, bugs, usability problems) but it is decentralized.


> macOS comes pretty close to what I'd ideally want in an OS with regards to installation - independent packages that are certified/notarised, but I'd like to see the OS allow for user-specified authorities beyond just Apple.

It's easy to run unsigned binaries/app packages on macOS: right click on the .app, hold down Option, then click Open and confirm the warning.


That is not a user-specified authority.


I would also like this option. I see why Apple finds it undesirable though. Software installation safeguards are a game of whack-a-mole with (e.g.) support scammers who ask grandma/Lee-in-accounting/Cindy-next-door to naively click through all the warnings.

The closest Apple comes to this capability is achieved via device Supervision and MDM, which might be comfortable for some of us here in this forum but obviously isn’t practical beyond more technical circles.

Baddies keep ruining all the fun for the rest of us.


And being the only authority also happens to be conveniently aligned to their financial incentives.


> Baddies keep ruining all the fun for the rest of us.

IMHO the blame rather lies with our politicians who are unwilling to take the steps necessary to cut the baddies off from the Internet. Let's see just how fast India, Pakistan, Turkey and other scammer hotspots clean up their act when the US+EU threaten to cut them off from the Internet and SS7 unless the scam callcenters are closed down for good... the amount of corruption regularly exposed by scambaiters on Youtube is insane. Billions of dollars of damages each year [1] from that bullshit and our politicians don't. fucking. care.

[1] https://www.vibesofindia.com/fraudsters-in-india-cost-americ...


I’m more than a little skeptical that scams would be less of a problem if specific countries cracked down on large operations. For one thing it’s not clear how you’d ever get the whole world on board. Pressuring India is hard enough, try Myanmar, a place that doesn’t get along with the West at all and is already a hotspot for phone scams targeting Chinese speakers. And if centralized, relatively open operations overseas were no longer possible, it would likely become more like other types of fraud run by local gangs. So I’m all for pressuring India to crack down on scammers, but I don’t see how that would reduce the desire to tighten software controls on PCs.


> For one thing it’s not clear how you’d ever get the whole world on board.

You don't need the whole world. The Western world is enough - no Internet and phone service (both easily enforced by requiring providers to reject ASNs / phone country codes) means a lot of lost business for an affected country.

> Pressuring India is hard enough, try Myanmar, a place that doesn’t get along with the West at all and is already a hotspot for phone scams targeting Chinese speakers.

Honestly, that's China's problem to solve.

> So I’m all for pressuring India to crack down on scammers, but I don’t see how that would reduce the desire to tighten software controls on PCs.

When software vendors don't have to gate more and more features behind more and more obnoxious bullshit simply to whack-a-mole scammers, they won't.


They probably don't do it because it's a bad solution.


Is it? I prefer to tackle problems at the source, and it's crystal clear that overseas scammers are exploiting corrupt local law enforcement in conjunction with easy access to targets via the Internet and shady telephone providers.


There is no Pareto optimal unicorn that provides both a democratized marketplace of software with low barriers to entry and an ironclad guarantee of security against compromise of personal user information. These two are fundamentally at odds. If anyone can produce and distribute software easily on a given platform, then so can people with malicious intent.


Or just run `sudo spctl --master-disable` one time; and it will change the allowed app sources to the invisible "Anywhere" option.


> I suspect the final form for software installation is probably where iOS and Android are going in the EU, where there's a single means of installing software to the device so that everything can be sandboxed properly, but the acquisition/update process can be pointed to a URL/Store that the user has pre-approved.

Basically how Linux distributions have worked since the beginning. Though at the start the installation source was not remote but a CD-ROM, things haven't really changed.

You have a repository of packages (which can be on a local source such as a CD or a remote source such as an HTTP/FTP server) that have some sort of signature (on Linux usually the package is signed with GPG) with some keys that the user trusts (the default ones are installed on the system), and a software tool that allows you to install, uninstall and update the packages.

Android/iOS arrived later, but they didn't invent anything.


Android/iOS didn't invent this, no, however you're missing the sandbox part. Most Linux package managers don't sandbox anything.


iOS is the gold standard IMO. Apps are sandboxed, can only interact with the outside world via APIs (that the user needs to approve), one click uninstall and it’s all gone without a trace (at least in theory). Love it.


I think Android does it better, with third-party store and sideload support. It seems that iOS delegates some of its security to their own App Store (for example, disallowing dynamic code generation like JIT).


How could Windows handle it by itself?

If it provides a framework for installers/uninstallers, it'll be fighting the inertia of decades of legacy software, programmer habits, and old tutorials.

If it tracks file ownership by program, it might accidentally delete user files. How would it differentiate between a VSCode extension that should be uninstalled, and a binary compiled with VSCode for a user project? A false positive could be catastrophic.

If it restricts what programs can do to accurately track file ownership, you end up with Android. Which is fantastic for security, but is a royal pain in the ass for all parties:

- The app developers have to jump through hoops for the simplest actions, and rewrite most of their code in the new style.

- The operating system has to implement a ton of scaffolding, like permissions-restricted file pickers and ways to hand off "intents" between applications.

- The user is faced with confusing dialogs, and software with seemingly arbitrary limitations.

In the age of shared runtimes, auto-updaters, extension marketplaces, and JIT compilers, managing installed applications is harder than ever.

Edit: the answer above applies only to Windows, because of its baggage. Linuxes are in a much better position, for example, though their solution is still not perfect.


The same way any linux distro does?

Define a separate directory for program installations, that user processes cannot write to. Only program that can do so is the package manager, which other programs can call to install packages. Uninstall removes everything related to a program from this directory.

> In the age of shared runtimes, auto-updaters, extension marketplaces, and JIT compilers, managing installed applications is harder than ever.

The only reason these make things hard is that Windows lacks any facility to deal with them. Solutions going forward: outright ban having your own auto-updater; to auto-update, you register your program and where to update it from with the package manager. Shared runtimes are trivial for package managers to handle; it's just a package that many other ones depend on. Extensions can be handled as packages.


I agree with you; now, for completeness, I should mention that Linux package formats usually allow packagers to provide arbitrary pre- and post-install shell scripts run as root.

(which means that if you don't trust a provider, not only is it not safe to run the program, it's also unsafe to install it)


>if you don't trust a provider, not only it's not safe to run the program, but it's also unsafe to install it

Isn't it the same for Windows right now? `.msi` and `.exe` can execute arbitrary code, right?


The only difference is that you usually trust the repo in Linux, but that’s a pretty significant “only thing,” in the sense that the repo is already the source of your whole system, so it better be trustworthy!


The "elegant" way of distributing 3rd party software for Linux is to ask the user to add your APT/RPM/[...] repo to their system. And most Linux distro maintainers anyway don't vouch for software in the main repos, beyond basic install-ability. The Debian project for example definitely doesn't do in-depth security analysis of every package in the repos: they just check the license, re-package it, and keep an eye on security updates in upstream.


Yes, absolutely.


Right. You should generally never install a proprietary software package provided by the vendor in RPM, DEB, or similar. What keeps the use of those hooks safe is purely social convention and review internal to the Linux distribution, and vendors routinely use those hooks to do unacceptable things.

If you must install proprietary software on your Linux system, either package it yourself or use something like Flatpak or Snap (or even AppImage).

Hopefully in the future vendors will increasingly move to providing well-sandboxed Flatpak packages by default.


The packages are cryptographically signed, you have the option to abort the install of an untrusted package before it does something malicious.


> packages are cryptographically signed

Packages are cryptographically signed by the packager; by the way, on Debian you add the key when you install a new repository. The signature tells you "This package has been built by X and has not been tampered with in the meantime", not "X and this package are not malicious, I promise".

> you have the option to abort the install of an untrusted package before it does something malicious

How do you do this in practice?

If I run apt install p or dpkg -i p.deb, the thing is installed. APT asks you for confirmation if it has to install additional dependencies, but that's it.

I have no guarantee, for any package, that I can install it without worrying that something bad will happen during its installation.

Of course you should not install untrusted packages, but still. That would not be the case if the package format didn't have a way to specify arbitrary install scripts.


> The same way any linux distro does?

I'm going to assume you are talking about rpm and deb packages since they are still currently the dominant installation packages on Linux.

> Define a separate directory for program installations, that user processes cannot write to. Only program that can do so is the package manager, which other programs can call to install packages.

Windows does this. Programs are installed in directories under "C:\Program Files" which is only writable with elevated system rights.

> Uninstall removes everything related to a program from this directory.

rpm and debs don't install all the files needed for a program in a single directory. They are scattered all over the file system, and in many of these directories commingled with files from other programs. Windows comes closer than Linux in this regard, since it does create the directory under "C:\Program Files" which, while it unfortunately doesn't always contain all the required files, usually contains the vast majority.


This is exactly how AppX/MSIX packages work, with C:\Program Files\WindowsApps (by default) being pretty substantially locked down. They even use filesystem/registry virtualization by default to isolate packages even further from each other. They also have solutions for framework packages and extensions though I haven't tried those out and suspect they have annoying practical limitations around edge cases.

Of course, a decade later almost nobody uses those, because they botched the rollout by limiting AppX to the Microsoft Store and an entirely new, poorly documented and very restrictive set of Windows APIs and app frameworks. They've made huge progress on all of those problems with MSIX, to the point that it's a reasonably good and easy-to-use choice for most apps, with some neat benefits like updates only downloading the changes between versions. Of course, if your app pushes the boundaries of the sandbox or capabilities, or runs into a bug, it becomes a huge pain.


I don't think MSIX is a good choice for most apps. With a decent-sized user base, you will have a lot of people who run into undiagnosable errors with MSIX or can't use it because they're in locked-down enterprise environments.

I think Affinity Photo's experience with MSIX is instructive; hundreds of negative results on their forum, eventually they had to back down and provide a non-MSIX installer (and at that point do ya really want to maintain 2 separate Windows installers?)

https://forum.affinity.serif.com/index.php?/topic/170529-ext... https://forum.affinity.serif.com/index.php?/search/&q=msix&t...


That was the first option, "provides a framework for installers/uninstallers".

But what would you do with the millions of existing programs, most unmaintained? And what about programs with strong opinions on update schedules, or built-in extension marketplaces?

It's easy to solve this problem if your first step is "replace every program".


If you care about this enough to abandon old software: they built that and called it Windows S, and few wanted it.


Windows without backwards compatibility is a dead end because the only reason why Windows exists is backwards compatibility and the existing user base. As an OS it is decades behind all its competitors, with a 30yo filesystem, file locking ridiculousness (which is why uninstallers and updates end up being so complex and require reboots), an antiquated central registry for settings that ends up slowing the system down over time, and a security framework so broken that you need anti-malware software running and inspecting every little thing happening on your system or you're easily compromised (everything is executable by default).

The security situation is so bad at this point that you can't trust any Windows benchmarks anymore. The benchmark suite will run on a "bare" Windows system; probably with updates and Windows Defender disabled and many other system services stopped to maximize performance and prevent background services from slowing everything down. The reality though is that on a regular user desktop all these things and a whole lot more will be enabled, resulting in vastly degraded performance compared to the benchmarks. The end user experience sucks.

Now they're forcing ads down your throat and pestering you at every turn to use more Microsoft software (e.g. trying to get you to use Edge). They've also recently included UI changes in "essential" system updates that can't easily be reverted or undone, breaking people's workflows. It's anti-user insanity and it's all because Microsoft can't actually go back to the drawing board with Windows anymore because the alternatives are just too good.

After using a Linux desktop full-time for a while, going back to Windows feels like going from having modern plumbing to pooping in the woods.


You could provide the framework that well-behaved, maintained programs will use while still allowing the old installers to run.

By the way, that's what we have on Linux: some programs come as a shell script that you run to install them. Most Java IDEs, for instance.

(which can't be arsed to provide proper packages -- darn, what did I just write? :-))


>The same way any linux distro does?
>Define a separate directory for program installations, that user processes cannot write to.

What about /usr/local/bin? Isn't that specifically for putting non-package-manager binaries into?


That's more for binaries and scripts manually installed by the administrator because they weren't available in the package manager or are custom.


> How could Windows handle it by itself?

In the same way 'Linux' (in the widest sense of the term, i.e. Linux distributions like Debian) handles this. User data is not touched by the (un)installer, configuration files are checked for changes from the package default and left alone unless explicitly purged. Files which do not belong to any package are left alone as well so that binary compiled with VSCode for a user project will not be touched:

   warning: while removing directory /splurge/blargle/buzz not empty so not removed
This has worked fine for decades and is one of the areas where those Linux distributions were and are ahead of the competition. It works fine because the package manager has clearly delineated responsibilities and does (or rather should) not go outside of those. Do not expect the package manager to clean up your home directory for you, that is not part of its duty.

> In the age of shared runtimes, auto-updaters, extension marketplaces, and JIT compilers, managing installed applications is harder than ever.

Most auto-updaters should be disabled on systems with functioning package management - thanks, Firefox, but I'll update my browser through the package manager, as I prefer my executables to be read-only for users.

Some packages - the whole Javascript rats' nest being a good example - move too fast to be usefully packaged by volunteer maintainers so those are relegated to a different area which is not touched by the package manager. Other packages - anything Python fits here - are such a tangled mess of mutually incompatible versioned spaghetti that they are hard to fit inside the idiom of package managers so they get their own treatment - python -m venv ... etc. These are the exceptions to the rule that package management can be made to work well. By keeping those exceptions in areas where the package manager does not go - e.g. your home directory - the two can and do coexist without problems.


It's called MSI, and it's been in Windows for 20 years.

The issue is that MSI is very buggy when handling explorer extensions. If you're not careful, when you uninstall it'll prompt you to close explorer.

(I know because I shipped a product that installed via MSI and had an explorer plugin. The installer issues were more complicated than the plugin.)

In this case, the issue is that when explorer loads a plugin, it keeps an open file handle to the dll. This gives the installer two options: Restart explorer.exe, or somehow finish uninstalling when explorer.exe terminates.

The product that I shipped just restarted explorer.exe.


Oh. I thought MSI and WinGet (sorry, AppGet in fact) were designed to solve these problems.


A VSCode extension would be installed and managed by the OS package manager. User created content would be not.


You, you want Microsoft to lose its total control over the VSCode extension "marketplace", don't you?


Really? Do you install Firefox extensions from apt-get?


It's not unusual to do this in the Nix world.

There are a ton of VSCode extensions in Nixpkgs: https://search.nixos.org/packages?channel=23.05&from=0&size=...

You can use them in combination with the vscode-with-extensions function to create a VSCode package that bundles in whatever extensions you declare: https://nixos.wiki/wiki/Visual_Studio_Code


I haven't used Linux in a while, but I do remember seeing browser extensions in the package manager.


Yes, there are a few that can be installed from the Debian packages.


It allows you to install applications from any source, not only the official store.

It allows for a variety of installers to exist with different features for different use cases.

It allows you to install the application in any location you choose.

It allows for portable installations and to run software just copied from other sources.


What is "it"?


Special installers / uninstallers and also the ability to install and run things outside the official OS store.


Many programs can run as a standalone .exe, or just unzip into a folder.

None of the points you list needs _special_ installers/uninstallers.


Yes, that is what I mean by my last point: "It allows for portable installations and to run software just copied from other sources." You can think of decompressing from an archive as running a very simple installation program.

If the only installer available were one provided by the OS, how long do you think it would take for that to become the only way to install and run software? These things are being done right now on many platforms in the name of safety, security, and to a lesser extent convenience.

The more phone-like a platform is the fewer ways you have to install and run software on it. So far general purpose computers still allow you to install software in other ways than the built-in method (i.e. just unzip and place in a directory), but it's getting increasingly common to require executables be signed, and things are always moving to be more and more locked down.

Now, the use of "special" installers/uninstallers is from the original comment; I would just refer to them as "regular" installers/uninstallers. I do like the ability and freedom to have an ecosystem of these things, as I don't want the one OS method to be the only way to install applications.


>If the only installer available was one provided by the OS

There's the non-sequitur. OP never said that this is what should happen. It is strange to leap to this assumption while also wanting to define portable programs and archives as 'installers'.

In the context of Windows, 'special' installers means the programs you run to be able to use a different program that don't appear on other OSes.


I did not define portable programs and archive extractors as installers, just suggested that the act of decompressing to a directory or copying to a directory could be considered installing the program.


I guess "special installers/uninstallers"


In principle I have no objection with those options as I've had to use all of them given the nature of the Windows ecosystem.

The trouble is that MS never paid much attention to tracking and cleaning up after installations or after uninstallers have finished. Often this doesn't matter, but when something goes seriously wrong, untangling the mess can be almost impossible; it's often easier to reinstall Windows, and usually much quicker (that's if one has a simple installation).

Unfortunately, my installations aren't simple, so I take snapshots at various stages of the installation—stage-1 raw install with all drivers, stage-2 essential utilities, and so on. By stage-4, I have a basic working system with most of my programs. Come the inevitable Windows stuff-up, I reinstall from a backup image; it's much quicker than starting from scratch.

Between those major backups I use the registry backup utility ERUNT; it not only takes registry snapshots on demand but also automatically backs up the registry on a daily basis. This, I'd venture, is the most important utility I've ever put on a Windows computer; I cannot recall how many times it's gotten me out of trouble.

Just several days ago I had a problem reinstalling an update to a corrupted Java JRE/runtime. Nothing I did would make the installer install, as the earlier installation was not fully uninstalled; thus log files etc. weren't any help.

In the end I had to delete the program dir and other Java files I could find, same with registry entries. As expected, this didn't work, as I hadn't found everything.

Knowing the previous version number of Java, I did a string search across the OS and found another half dozen or so Java files. Retried the install and it still failed. I then ran ERUNT, which replaced the registry with an earlier pre-Java one, and the install now worked. This still meant that some programs added later, LibreOffice for example, had to be reinstalled to update the registry.

If I hadn't had ERUNT installed I'd have had to go back to reinstalling an earlier partition backup. And if I'd not had those then I'd have been in real trouble.

That's the short version. Fact is, Windows is an unmitigated mess when it comes to installations. Why can't I force an installer to complete even with faults? Why doesn't Windows remember exactly what happens during an installation so it can be easily undone?

_

Edit: if you've never used ERUNT and decide to do so, always ensure you shut Windows down and restart it after restoring a registry backup before you do anything else—that's in addition to the mandatory reboot required to install the backup.

You may have multiple registry backups and decide the version you've just loaded wasn't the one you want. Loading another without this additional reboot will blue-screen the OS. You'll then have to restore the backup manually, and that's very messy.


It is, these days. Windows 10 onwards has a native package format called MSIX that somewhat resembles packages on Linux. They're special zips containing an XML file that declares how the software should be integrated into the OS (start menu, commands on the PATH, file associations etc). Windows takes care of installation, update and uninstallation.

The system is great, in theory. In practice adoption has been held back by the fact that it was originally only for UWP apps which almost nobody writes, and also only for the MS Store. These days you can use it for Win32 apps outside the store but then you will hit bugs in Windows. And packages must be signed.

Still, the feature set is pretty great if you can make it work. For example you can get Chrome-style updates where Windows will keep the app fresh in the background even if it's not running. And it will share files on disk between apps if they're the same, avoid downloading them, do delta updates and more. It also tracks all the files your app writes to disk outside of the user's home directory so they can be cleanly uninstalled, without needing any custom uninstaller logic.

One interesting aspect of the format is that because it's a "special" (read: weird) kind of zip, you can make them on non-Windows platforms. Not using any normal zip tool of course, oh no, that would be too easy. You can only extract them using normal zip tools. But if you write your own zip library you can create them.

A couple of years ago I sat down to write a tool that would let anyone ship apps to Win/Mac/Linux in one command from whatever OS they liked, no harder than copying a website to a server. I learned about MSIX and decided to make this the package format. It took us a while to work around all the weird bugs in Windows that only show up on some machines and not others for no explicable reason, but it's stable now and works pretty well. For example, you can take some HTML and JS files, write a 5-line config file pointing at those files, run one command, and now you have a download page pointing to a fully signed (or self-signed), self-updating Windows, Mac and Linux Electron app. Or a JVM app. Or a Flutter app. Or any kind of app, really! IT departments also love it because, well, it's a real package format and not an installer.

Writing more about this tech has been on my todo list for a while, but I have now published something about the delta update scheme it uses, which is based on block maps; it's somewhat unusual (a bit Flatpak-like):

https://hydraulic.dev/blog/20-deltas-diffed.html

The tool is free to download, and free for open source projects if anyone is wanting to ship stuff to Windows without installers:

https://conveyor.hydraulic.dev/


> For example you can get Chrome-style updates where Windows will keep the app fresh in the background even if it's not running

Considering that the ability to update itself is a requirement of the Cyber Resilience Act in the EU, I foresee a big uptick in usage (and in app store usage, of course).


that's a cool project, will definitely try it out later


Besides the "special uninstaller" thing. One of the things I hate the most with Windows filesystem management compared to Unix-like OSes.

On Windows, opening a file locks it. So you can't delete a program that is running, you will get an error. It means of course that an executable can't delete itself without resorting to ugly tricks like the one mentioned in the article. That's also why you get all these annoying "in use" popups.

On Unix, files (more precisely: directory entries) are just reference-counted pointers to the actual data (inode on Linux), removing a file is never a problem: remove the pointer, decrement the reference counter. If the file wasn't in use or referenced from elsewhere, the counter will go to zero and the actual data will be deleted. If the file is in use, for example because it is an executable that is running, the file will disappear from the directory, but the data will only be deleted when it stops being in use, in this case, when the executable process terminates. So you can write your uninstaller in the most straightforward way possible and it will do as expected.
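
To make that concrete, here's a minimal POSIX C sketch (the file name is hypothetical and error handling is omitted):

    /* Unlinking a file that is still open: the directory entry goes
       away immediately, but the data stays readable through the open
       descriptor until the last reference is dropped. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("scratch.txt", O_CREAT | O_RDWR | O_TRUNC, 0600);
        write(fd, "still here\n", 11);

        unlink("scratch.txt");      /* the name is gone from the directory */

        char buf[32];
        lseek(fd, 0, SEEK_SET);
        printf("read %zd bytes after unlink\n", read(fd, buf, sizeof buf));

        close(fd);                  /* last reference: data is freed now */
        return 0;
    }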


I feel like this is a stupid question, but aren't executables and their libraries loaded into RAM? If yes, then why can't an executable just delete itself (from disk)?


I don't know the details, but I think executable files are mapped into memory, and needed sections are loaded on demand. If the system is low on RAM, little-used sections can be evicted, to be reloaded the next time they are needed. This requires the file to be present on disk.


One thing I like about Linux package managers is that you can query any file to see which package owns it. How does Windows not track this?


Except they all leave files everywhere in ~, ~/.cache, ~/.config, ~/.whatevertheyfeellike


The ~/.whatevertheyfeellike is an antipattern (that is annoying), but the others are well defined in the XDG base directory spec[0].

Personally I appreciate knowing where the config/cache for each application is. (Though it does annoy me when programs don't follow this as in your third example)

[0] https://specifications.freedesktop.org/basedir-spec/basedir-...


Why does the XDG spec have authority over software?


It usually doesn't, and it's mostly a good standards recommendation that even the most GPL of GPL codebases doesn't always follow (looking at you, emacs).


Emacs has respected $XDG_CONFIG_HOME for a while now. There are worse offenders (e.g. not likely to see the end of .mozilla any time soon).


GNU Emacs was created in 1984; the XDG Base Directory spec was started around 2003.


Also, Emacs will respect files being placed in XDG directories; it just doesn't put them there...


Software specifications are usually adopted by convention and implemented to minimize surprise and make things interoperable. They are not authorities and cannot make anyone do anything. One of the most common software failure modes is to implement a specification too tightly or in a way that nobody wants although the reverse is a problem as well.


They don't. XDG specifications are recommendations. Their only power is that your software will integrate poorly with other software (especially desktop software) if you ignore their guidelines.


Those files are user data, not part of the software package.


I would disagree: files that the user cannot or should not edit should not be going into their home directory. Things like cache files should go into a system-wide cache directory instead.


Cache files might contain the user's sensitive data. It makes sense to keep them in the user's home directory in those cases.


File permissions?


There's no other path that the user is guaranteed to have write permissions to (except maybe /tmp, I guess).


Isn't that very anti-linux though, to have a directory owned by root but populated with subfolders owned by other users? /home is the only exception I can think of that does this.


Anti-linux I don't know, but it was not uncommon in unices to have home directories in /usr/home.

And there is no written or unwritten rule about that. In fact, /home is a subdirectory of / which is owned by root.


True, but /usr/home is no longer a common place to store home directories. It used to be, particularly in Bell Labs Unix. (Does FreeBSD still do this?)

The Linux Foundation's Filesystem Hierarchy Standard puts user homes in /home, but it's by no means mandatory.

/home being the *nix home folder directory isn’t written in stone, but plenty of software expects it. Of course you shouldn’t hard code things like that, but that has never stopped anyone from doing it. (Not that we should reward that with de facto standards necessarily.)

I understand the various reasons why a root file system hierarchy isn’t part of the Single UNIX Specification, but it might have been nice.


/tmp


And /run/user


also mail and cron


If I uninstall ssh I still want to have my authorized hosts. If I uninstall some Firefox version I want to keep my profiles. XDG defines a thumbnailing hierarchy followed by multiple libraries; uninstalling any of those shouldn't clear thumbnail caches.


Persistent user-specific state needs to live in a persistent user-specific location. You could choose not to use the concept of a home directory, but you would be doomed to reinvent it.


Why would you want that?

If you have separate partitions, would you really want user data to go to the system partition? Or a third partition?

Do you find having more places that user programs can write a benefit?


I would favor a /var/user/something directory.

The fact that nobody does that is pretty much a consequence of the difficulty of coordinating multiple projects that do not have a common authority, not because it is a bad idea.


Again, what do you prefer about that?

Maybe the reason no one does that is simply that no one shares your preference.


Having a clear separation between actual user data files / documents and stuff like cache, for different reasons:

- easier to clean up/wipe without risking deleting work/personal files

- the backup solution doesn't have to have a ton of entries in an ignore/exclude file

- same as above for syncing software

- possibility of tiered storage separation

- separate disk space allocation for data vs volatile stuff


Should that count towards user disk quota?


I agree cache files should not go into the home directory; however, I don't agree that they aren't user data or that they would be part of the software installation.


That is not part of the software itself so it is still correctly installed/uninstalled.

Now I believe all software should have a manpage, dialog and a cli argument that describes where all the files[1] generated by default go but that is another subject.

[1] cache, config and even default save


That's a feature so that users can keep configuration files and even move them across systems.


Try opening C:\Users\%USERNAME%\Documents\My Games


MSIX packaged apps do support this: Windows redirects file writes outside of home dirs and other user locations to a package-specific directory that's overlaid back onto the system, so the app thinks it's writing to wherever, but it's actually a package-private location.


> you can query any file to see which package owns it

Presumably you mean something like using dpkg/apt for a Debian-style system?

I think that only works if a file is actually installed from within the framework. As soon as you've installed a file via npm, flatpak, pip, snap, specialist plug-in, standalone binary, that ancient program you had to install by hand, or one of the other squillions of ways of getting an executable program, you're out of luck and have to figure it out manually.


Ok, I see what you're saying here. Still, Linux's way is better: I'd rather have my system cluttered with useless files from deleted programs than be exploited because of something that was solved decades ago.


> Why do Windows programs need special installers/uninstallers?

This is supposed to happen using MSI-based installers. It's a windows component.

> Why isn't this handled by Windows itself?

Now, here's where things get tricky.

In the article, the issue is an explorer plugin. MSI is notoriously buggy with installing and uninstalling explorer plugins. If you don't jump through hoops, your installer will have a bug where it prompts the user to close Explorer.exe.

I know because I shipped a product with an explorer plugin. The installer was always a thorn in our side, and the workarounds etc. that we had to do to install / uninstall / delete our plugin were more complicated than the plugin itself.


When the subject is Windows, and the question includes a “why,” the answer is always “for historical reasons.”


It's hardly specific to Windows. All the major Linux distros have excellent package management systems, and yet many, many packages and applications choose to ignore these in favour of third-party solutions, scripts, or even curl https://not-malware.trustme.lol | sudo bash style hodgepodge.


I had never heard of Detours before, but I guess it isn't any different from a good old-fashioned LD_PRELOAD


it's a little more general, I think, since one common use case for it is to use it on your own process in order to intercept calls to stdlib/OS code from libraries you don't control.

For example, in the bad old days I used detours to virtualize the windows registry so that I could do "fake installs" of COM components inside of a VB6 app, allowing it to run without an install and without administrator permissions. This worked by detouring all the Win32 registry APIs, which was sufficient to intercept registry accesses performed by the COM infrastructure.


> it's a little more general, I think, since one common use case for it is to use it on your own process in order to intercept calls to stdlib/OS code from libraries you don't control.

This capability is intrinsic to how ELF linking works. The main application or even any library can interpose a libc function just by defining and exporting a function with the same name, and that definition will be preferentially linked in both the main application and all subsequently loaded dynamic libraries and modules. Your definition can then use dlsym(RTLD_NEXT, "foo") to obtain a function pointer to the next definition, which would normally be libc itself but may be from another library. A running application could actually have several implementations of a function, all proxying calls onward until the terminal (usually libc) implementation.
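
A minimal sketch of such an interposer, assuming Linux and glibc (the intercepted function and library name are illustrative):

    /* Build: gcc -shared -fPIC -o libtrace.so trace.c -ldl
       Run:   LD_PRELOAD=./libtrace.so some_program
       Intercepts unlink() and logs it before calling the next
       definition down the chain (normally libc's). */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <stdio.h>

    int unlink(const char *path) {
        int (*next_unlink)(const char *) =
            (int (*)(const char *))dlsym(RTLD_NEXT, "unlink");
        fprintf(stderr, "unlink(\"%s\") intercepted\n", path);
        return next_unlink(path);
    }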

Basically, the way ELF linking works by default is that the first definition loaded is the preferred global symbol used to satisfy any symbol dependency with that name. It follows that there's normally a singular global symbol table. Though there are features and extensions that can be explicitly used to get different behaviors.

There's nothing magical about LD_PRELOAD within the context of ELF linking. LD_PRELOAD support in the linker (which is the first bit of code the kernel loads on exec(2)) is very simple; all the linker does is load the specified libraries first, even before the main application, so symbols exported therein become the initial and therefore default definition for satisfying subsequent symbol dependencies, including in the main application binary, and even if the main application binary also defines and exports those symbols.

All of this is basically the exact opposite behavior of how PE linking works on Windows, for better and worse--depending on your disposition and problems at hand.

Also note that all of this is different than so-called "weak" symbols, which is a mechanism for achieving one of the same behaviors--overriding another definition--when statically linking. Otherwise, when statically linking, multiple definitions are either an error or it's difficult (i.e. confusing, especially in complex builds) to control when and where one definition is chosen over another.

[1] Though main application symbols aren't usually exported by default, so you need to explicitly mark a definition for export or build the entire main binary with a compiler flag like `-rdynamic`, which is the main binary analog to the `-shared` flag used for building shared libraries. The Python and Perl interpreters, for example, are built with -rdynamic as the interpreter binary itself exports all the implementation symbols required by binary modules, rather than defining them in a separate shared library against which modules explicitly link themselves against at compile time. (This is also why when building Perl, Python, and similar language modules you have to tell the compile-time linker to ignore unresolved symbols.)


For those of us who don't Windows, can you explain what a detour is?


You essentially replace a function with your own. The project is at https://github.com/microsoft/Detours.

I’ve created a PowerShell module that wraps this library to make it easier to hook functions on the fly for testing https://github.com/jborean93/PSDetour. For example I used it to capture TLS session data for decryption https://gist.github.com/jborean93/6c1f1b3130f2675f1618da5663... as well as create an strace like functionality for various Win32 APIs (still expanding as I find more use cases) https://github.com/jborean93/PSDetour-Hooks
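
For flavor, here's a minimal C sketch of what a hook looks like with Detours, loosely modeled on the project's simple sample (the choice of Sleep and the logging are just illustrative):

    /* Detour Sleep() through a logging wrapper. Link against detours.lib. */
    #include <windows.h>
    #include <detours.h>
    #include <stdio.h>

    /* Starts out pointing at the real Sleep; after DetourAttach it points
       at a trampoline that reaches the un-instrumented original. */
    static VOID (WINAPI *TrueSleep)(DWORD) = Sleep;

    static VOID WINAPI HookedSleep(DWORD ms) {
        printf("Sleep(%lu) intercepted\n", ms);
        TrueSleep(ms);              /* call through the trampoline */
    }

    int main(void) {
        DetourTransactionBegin();
        DetourUpdateThread(GetCurrentThread());
        DetourAttach((PVOID *)&TrueSleep, (PVOID)HookedSleep);
        DetourTransactionCommit();

        Sleep(100);                 /* now routed through HookedSleep */

        DetourTransactionBegin();
        DetourUpdateThread(GetCurrentThread());
        DetourDetach((PVOID *)&TrueSleep, (PVOID)HookedSleep);
        DetourTransactionCommit();
        return 0;
    }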


> as well as create an strace like functionality for various Win32 APIs

Yes please. Thank you for this


Detours is a library for instrumenting arbitrary Win32 functions on Windows-compatible processors. Detours intercepts Win32 functions by re-writing the in-memory code for target functions. The Detours package also contains utilities to attach arbitrary DLLs and data segments (called payloads) to any Win32 binary.

Detours preserves the un-instrumented target function (callable through a trampoline) as a subroutine for use by the instrumentation. Our trampoline design enables a large class of innovative extensions to existing binary software.

https://www.microsoft.com/en-us/research/project/detours/


And my more sophisticated library, https://github.com/stevemk14ebr/PolyHook_2_0


Interesting. Has anyone done the same thing on Linux?


I use and recommend subhook[0].

[0] https://github.com/Zeex/subhook


Imagine if Windows just allowed DeleteFile() even if the file was open, like unlink() on almost any other OS...


It does. But this issue arises because of file locks. Running an executable holds a lock that prevents deletions (but not renames).

Many OSes have file locks, though they often don't use them as liberally as Windows.
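
A minimal Win32 C sketch of that locking nuance (the file name is hypothetical): a handle opened with FILE_SHARE_DELETE permits deletion while the handle is still open, which the default sharing mode, and the lock on a running executable's image, do not.

    #include <windows.h>
    #include <stdio.h>

    int main(void) {
        HANDLE h = CreateFileA("scratch.txt", GENERIC_WRITE,
                               FILE_SHARE_READ | FILE_SHARE_DELETE, NULL,
                               CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
        if (h == INVALID_HANDLE_VALUE) return 1;

        /* Succeeds: the file enters a delete-pending state... */
        if (DeleteFileA("scratch.txt"))
            puts("delete scheduled while the handle is still open");

        CloseHandle(h);  /* ...and actually disappears here */
        return 0;
    }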


Imagine all the "WinBLOWZ is bullshit, I deleted 200 gigs of shit and my C: still has no more free space" posts if Windows started doing soft deletes


Dropping WScript files is a good way to get profiled as malware too, and there's no way to code-sign them or verify their integrity before executing.


If your program creates the script and executes it, is verification necessary? This would be like verifying your 1st party scripts in a webpage that you wrote. It won't really hurt anything, but I'm not sure there's a point.


Reminds me of a simple app I made for Windows 95/98 to add every directory, including System32, to the uninstallers' lists. No AV, neither Norton nor McAfee, saw that timebomb coming. Good times.


Well, maybe if Windows applications were packaged similarly to macOS (one of the few things I like), with the application data and the user data for the application in two folders, then it wouldn't be such an issue.

Most Windows apps sit under program files, some sit directly on drive root. But all spray configuration/user data files for themselves all over the damn place, requiring unique uninstallers.

MS, build app install/uninstall into Windows directly...


This tracks. I've had the Nvidia uninstaller flagged, costing hours of work, because it injected code and exhibited behavior consistent with malware


And today I learned that Windows supports running JavaScript as a shell script. Huh.


Malware delivered as an email with a link to a zip file containing a .js file is one of the most common methods of delivery, right behind Word macros. Mapping the .js extension to notepad.exe is a common security trick with a measurable, immediate drop in malware in large orgs. You can deploy it via GPO or InTune.

Personal promotion, I built this as a better alternative:

https://github.com/technion/open_safety

Note the built-in .js parser has basically never been updated; if you're writing for this, you're writing like you're targeting IE5.


> It creates the file "example.com" in the same directory containing the EICAR test string. This should set off appropriate alarms

Huh, neat!


It is very common for malware to contain JavaScript payloads that try to obfuscate themselves like this:

Seemingly_random_code(seemingly_random_string)

The seemingly_random_code decompresses/decodes whatever is in the seemingly_random_string and hands control over to it. Interestingly, the decoded code is another version of the same thing, with different code and a different string. This goes on ~100 layers deep; at the end it just downloads and executes some file from the net.


It’s amazing how much we haven’t moved on since iloveyou.txt.vbs


> This goes on for ~100 layers deep then at the end it just downloads and executes some file from the net.

I understand doing one layer. I guess I could maybe see two layers. But why would it bother with 100 layers? Either the antivirus or reverse-engineering tool can grab the final product or it can't.


Typically scanning tools have some limit to how much they probe complex formats, to avoid stalling the entire system while they're scanning. It's very much conceivable that a scan tool will try to resolve code like this for 10 layers, and then if the result is not found to be malicious, consider it safe.

This is similar to how compilers will often have recursion limits for things like generics, though in that case it's easier to reject the program if the recursion limit is reached.


Because of potential false positives, and the speed at which files need to be analyzed at runtime (suspend the process executing the file, then analyze it), files which take a long time to unpack and identify can end up being allowed to run. They get offloaded to a sandbox or other systems to be analyzed while the file is already executing. The sandboxes are too slow to return a verdict before the main logic of the file has run. If those dynamic systems cannot identify a file, an engineer will need to look at it manually.

In very strict environments or on certain systems it might be practical to block all unknown files, but this is uncommon for user systems, for example, where users are actively using JavaScript or macro documents etc. (developers, HR, finance etc.). The FP rates are too high and productivity can take a big hit. If all users do 20% less work, that's a big loss in revenue (the productivity hit can be even more severe!). Perhaps this impact / loss of revenue ends up being bigger than an executed piece of malware, depending on the rest of the security posture/measures.

Technically it's possible to identify (nearly?) all malware by tracking p-states/symbolic execution/very clever sandboxing etc., but this simply takes much too long. Especially if the malware authors are aware of sandboxing techniques and symbolic execution and such things, as they can make those processes take extra long or sometimes even evade them totally with further techniques.

I wish it _was_ possible to do all of the cleverness that malware researchers have invented to detect things, but unfortunately, in practice this cannot happen on like 90+% of environments.

If you run something like a DNS server, it's possible to do this, as such a system would not (ever?) be expected to have unknown files (you've got to test each update and analyze new versions before deploying to prod). As you can imagine, this is also kind of a bummer of a process, but IMHO for such 'static' systems it's worth it.


With enough conditional evals() with dynamic inputs you can make the search space unsearchably big.


The search space is linear as the algorithm is linear.


This stuff is mostly done to make static analysis harder.


Been using this for years. Mostly really useful. Sometimes tricky to get right, since the available APIs are only semi-well documented and it's JScript, which is some sort of old Internet Explorer-ish version of JavaScript.

By the way, there are also HTAs, which are Microsoft HTML Applications. You can create a simple double-clickable GUI with these using only HTML and JScript.


Pretty crazy how Microsoft basically invented the Electron app with HTAs all the way back in 1999. Of course, browsers weren't as capable as they are today, but "I just want a HTML+CSS GUI" had been a solved problem for over ten years when Electron first came out.


Yes, and XULRunner allowed this too, using Gecko, Firefox's web engine, to render HTML-like markup specifically designed to build native-like GUIs.

Apparently XULRunner was first released in 2006, but Thunderbird, which uses (used?) the same technology, was released as early as 2003, and maybe it existed in the Mozilla Suite even before.


Thunderbird never quite used XULRunner, I think; they always built their own binary (though at some point quite a lot of the shared stuff moved into the XRE stuff). Think of it as they had a fork of Firefox (much like Firefox had a stripped down fork of the SeaMonkey stuff).

Also, I think one of the Start Menus (might have been XP‽) was kind of HTA-ish? Not sure about that part, though.


> they always built their own binary

> Think of it as they had a fork of Firefox

Yep indeed, you are right.

Notable projects using actual XULRunner included Songbird[1] (a music player) and BlueGriffon[2], a WYSIWYG HTML editor (a successor of Nvu and KompoZer, themselves succeeding Netscape Composer). Both were released after 2006, indeed.

I liked XUL. I strongly believe Mozilla could have dominated the market taken by Electron had they pushed XULRunner more, and perhaps made it transition to pure HTML, like they did with Firefox's core, because that's what people know and because XUL was a maintenance burden. I think XUL tags made more sense than HTML for building UIs, though, and with XUL, Gecko had a CSS-flex-like mechanism for a long time, by the way.

[1] https://en.wikipedia.org/wiki/Songbird_%28software%29

[2] https://en.wikipedia.org/wiki/BlueGriffon


There was an experiment back in the hazy past around that time called Entity that did something similar. It was never complete enough to be a competitor to XULRunner, but it was fascinating for two reasons:

1) You could write event handlers in multiple languages, including C. If you wrote them in C, it spawned gcc and compiled it into a library, and dynamically loaded it... The overall idea of a polyglot runtime like that was fun.

2) #1 is only really weird because this could be done at runtime. One of the demo apps was an editor for the GUI itself, where you could add buttons to the editor, then write that event handler in C, and have it compiled and loaded into the editor itself...

It was a fascinating starting point, though full of heavy duty foot guns, and I'm still sad nobody took it further.


> The overall idea of a polyglot runtime like that was fun.

Active Scripting, which powers scripts in both WSH and old-school IE including HTAs, is polyglot and extensible. It’s why Active{Perl,Python,Tcl} are called that—the original headline feature (IIUC) was that they integrated with it. It’s also why you could write VBS in IE: IE just passed the text of the script along with the language attribute to AS and let it sort things out.

Nobody ever did a C interpreter, though, I think—perhaps because you basically have to speak COM from Active Scripting, and while speaking COM from C is certainly possible, it's nobody's idea of fun. (An ObjC-like preprocessor/superset could definitely be made, and I've heard that Microsoft had even entertained the idea at the dawn of time, but instead they went with C++, and I haven't been able to find any traces of that project.)

That’s not to say AS is perfect or even good—the impossibility of caching DISPIDs[1], in particular, seems like a design-sinking goof. And the AS boundary was also why DOM manipulation in IE was so slow.

[1] https://ericlippert.com/2003/09/16/why-do-the-script-engines...


The best desktop EPUB reader I've ever encountered was epubreader (the pre-WebExtension version); I used to launch it as a standalone app with XULRunner.


Well, you can use IE 9 in HTAs - that browser is plenty capable. :) Been using this as a Windows-only Electron alternative for years.


For the curious: Here's a completely unfinished guide to how you might start developing such an application: https://marksweb.site/hta/ From HTAs, you have access to the file system, the network, the registry, the shell - everything. It might be a bit different than normal web dev, but it's not too bad either.


Wow, that's so cool! I played around with making HTAs as a kid and never thought those could be that powerful. (I quickly moved on to topics more exciting to a teenage hacker, like making WinForms apps with some PHP RAD IDE.)

Wondering what it would take to port mshta (with all the ActiveX goodies) to other platforms. Maybe it's a little bit late for that, but it sounds like it might be a fun project to me.


You're brave, putting "ActiveX" and "fun" in the same sentence.

Wine Gecko supports ActiveX, supposedly, so if someone implements all the common ActiveX components, that could be a cross-platform method of running HTAs outside of Windows.

That said, I'm afraid the Electron API is the closest thing we have to a cross platform HTML application these days. On Manjaro, several packages are already implemented by installing Electron next to the application specific code, so that would be the closest thing to a modern HTA alternative that I know of.

PWAs work fine if you don't need integration with the system itself other than file prompts, for chat apps for example. They're not really alternatives to HTAs to be honest.

It should be noted that HTAs are a common way to infect computers (because they're executables that aren't usually recognised as such) and they're disabled in many security conscious environments.


To be honest, in my ideal world, mshta, Electron and the like would be discontinued and, instead, there'd be a cross-desktop-platform HTML/CSS/JS app-runtime (_not a browser!_). This runtime should support a sensible, large subset of modern Web APIs plus a set of cross-OS and OS-specific APIs so it's easy to work with for developers. To be easy to use for users, it should be installed by default on all major consumer-facing OSes. So yeah, it's probably not gonna happen anytime soon...


how do you feel about PWA?

https://web.dev/progressive-web-apps/


This feature has existed for more than 25 years.

My concern is more that Raymond Chen suggests using it is still the recommended way. So much malware came through WScript.


Scripting is normal functionality for an OS to support. I don't know why people pretend JScript/WScript are evil but Bash is fine.


Well, he did warn you it would be indistinguishable from malware…


Yes, the same way one could write VBS (Visual Basic Script).

I think Windows 98 already had this ability. Possibly Windows 95 as well. It's a variant of the language called JScript, which is what was used in old versions of IE too.


It was around Windows 98 that the Windows Scripting Host became prominent.

WSH, btw, allowed you to run any language you had an interpreter for: the interpreter had to support the necessary COM interfaces (and, to be truly usable, allow you to call COM objects) and register its interpreter class with the ActiveScripting (WSH internal) engine.

Then you could use them not just for desktop automation, but also for scripts inside Internet Explorer (essentially, classic IE used WSH engines to implement scripts, iirc)

I've seen WSH (including HTAs) used with Perl, Python, Tcl, Rexx... so long as you installed the interpreter with a compatible COM service, you could use it.


It's technically JScript.


About what a sibling mentioned: it's 'JScript', not JavaScript: the infamous Microsoft EEE (the 2nd part). It has been there for decades.


What could possibly go wrong?


And it's even funnier that the solution the author gives is "hey execute this javascript code that uninstalls a program and deletes itself afterwards"

like, really? can't you write that in C? I don't think most Win32 apps use JavaScript for their installers.


> can't you write [a self-deleting executable] in C?

The point of the exercise is that, on Windows, you can’t, because Windows won’t let anyone delete executables that are currently in use (try it, you won’t be able to delete one either). Upgrading shared DLLs in the face of this fact is why installers for Windows programs often have to have you reboot the system (and in more civilized times asked you to close other programs before installation to reduce the probability of hitting a locked DLL). It’s also why there’s a registry key[1] containing a list of rename and delete actions to be performed on next reboot (usually accessed via the MOVEFILE_DELAY_UNTIL_REBOOT flag to MoveFileEx).

You can’t (straightforwardly[2]) make a self-deleting batch script, either, because the command interpreter parses a command at a time and so wants the batch file to exist. The Windows Scripting Host, on the other hand, will parse the whole file at once, close it, and then forget about it, so you can write self-deleting WSH scripts.

The workaround used by the uninstaller under discussion is instead for the executable to inject some code into the Windows Explorer (on the assumption that it’s always running and the user has to have access permissions for it) that accomplishes the deletion through return-oriented programming, so that the stack it’s executing from can then disappear into the wind (apparently? I’m not seeing how they plan to clean that up).

On a POSIX system you are explicitly allowed to delete any open file—including an executing one—making it languish in a kind of system-managed limbo (and take up disk space, invisibly) until it’s closed. The tradeoff is then that it’s impossible to ensure you’ve opened the same file as somebody else when all you have is its name. (I think you can at least check for success, provided you also have the device and inode numbers for it.)

[1] https://superuser.com/questions/58479/is-there-a-registry-ke...

[2] https://stackoverflow.com/questions/20329355/how-to-make-a-b...
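
The parenthetical at the end, about device and inode numbers, would look something like this POSIX C sketch (the helper name is hypothetical): open by name, then verify the descriptor refers to the same underlying object you stat()ed earlier.

    #include <fcntl.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* Open `path` and confirm it is still the file described by
       `expected`; returns -1 if the name has been swapped out. */
    int open_verified(const char *path, const struct stat *expected) {
        int fd = open(path, O_RDONLY);
        if (fd < 0) return -1;

        struct stat st;
        if (fstat(fd, &st) == 0 &&
            st.st_dev == expected->st_dev &&
            st.st_ino == expected->st_ino)
            return fd;               /* same device + inode: same file */

        close(fd);                   /* the name points elsewhere now */
        return -1;
    }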


I wonder why Raymond Chen suggests a WSH solution. Isn't PowerShell the official scripting language for Windows nowadays?


PowerShell has weird restrictions where it'll refuse to run scripts unless they're signed and stuff.


Only if the sysadmin chooses to; otherwise, PowerShell can be run arbitrarily


The key is that unsigned scripts are opt-in, not opt-out. Chen is not going to suggest a solution that requires all users of the software to configure their computer to be less secure.


It's not really a security measure in that sense. It's a "safety feature" that prevents accidentally running such a script. Anything can trivially disable the protection using a bat script (or anything else) to bootstrap.

E.g. `powershell.exe -ExecutionPolicy Unrestricted`


I still long for the approach many software used on the AmigaOS - the app is a folder, the folder has the main exec and any assets it needs (libraries, images, etc.) and documentation and... That's it.

Install? Copy the directory to where you like. Uninstall? Delete the directory.

And if you wish you could keep any files used/generated with such an app in the same folder, making it 100% self-contained.

I remember being rather grossed out when I learnt Windows has "a registry" (that was a long time ago). "Why would you have a global registry? Whatever preferences a piece of software has they should live where the exe is".

(and yes, I am aware AmigaOS had an installer and dir structure not that unlike of Unix, with `sys:`, `devs:` and so on)


To be fair, Windows applications can be designed to be installable this way: a single executable, with everything it needs sitting next to it in the folder. Even better, a single executable with no other dependent files at all! Lots of little utilities used to be distributed this way. But many developers deliberately choose to structure their monster such that it needs to spread its tentacles all over the filesystem for it to work.

And for legacy/backward compatibility reasons, once MS allowed this behavior to go unchecked, there was no way to put the genie back in the bottle and stop it, without giving up backward compatibility. It didn't help that Microsoft software tended to be the "tentacle" kind as well.


It sounds great but there are simple use cases where the "portable" app isn't enough. For example, if you want multiple users to be able to use the program and have their own settings, you need something to be saved to the user folder. Or, if you want any basic interaction with the system (run on startup, run from a browser address, etc), you need to start messing with the registry.

So in theory apps could be distributed as portable .exes, but in practice Windows doesn't offer any ways of interacting with the rest of the system that are that nice.


I still love most aspects of the Amiga user experience, but a lot of Amiga applications would need libraries installed to Libs: and deleting the application's "drawer" would leave those libraries behind. (Having said that, by default libs: is assigned to sys:libs but you could assign extra targets, so that libraries would be sought from application-specific directories.)

Also, it suffers from the same problem as Windows here, in that you can't delete a file or directory which is currently open. The executable itself wouldn't be open after launch is complete (with the possible exception of overlay executables, but they were pretty rare) but the directory would be locked for as long as it was the program's current directory. If a subdirectory with app-specific libraries was assigned to Libs: that would also prevent deletion.


This is how a lot of apps on MacOS still work.


Sort of. They still leave garbage behind in ~/Library though.


Does that mean the corollary, "Any sufficiently advanced malware is indistinguishable from an uninstaller", would be true as well?

I mean, can you write a simulation of an uninstaller to create havoc on a target's system and still remain in "the good guy, the OS is at fault" type of situation when you write malware?


I've heard this before, about cryptolockers. It's hard for the OS to know if you're encrypting all of your files on purpose, because you might actually want to do that.


I had an old hacky community program basically ruin my Windows install, so I agree. It was a BLP viewer that automatically added previews in Windows Explorer, but if you removed a file while previewing a BLP it would crash Explorer and all your open tabs would close. Really annoying.


I've seen Malwarebytes flag uninstallers a few times.


Any time I see a Microsoft link with a cheeky title, I assume it’s a great Raymond Chen deep dive. Haven’t been wrong yet!


As an aside every time I come across a Raymond Chen article I remember this post from Joel Spolsky - https://www.joelonsoftware.com/2005/05/11/making-wrong-code-...

I remember very distinctly this quote about him:

> The only person in the world who leapt to my defense was, of course, Raymond Chen, who is, by the way, the best programmer in the world, so that has to say something, right?

So in my mind I've made the connection that Raymond Chen = best programmer in the world since then haha.


This is an interesting article because it's a product of its time: modern languages solve a lot of these exact problems. I think it's a resounding success that they correctly identified a genuine problem people used to struggle with (safe types, exceptions) and made it standard, correct, and ergonomic.


Hah, I had the same experience. Saw microsoft.com and thought "it's gonna be a Raymond, I can feel it"


It is a very provocative title. I guessed Raymond Chen as well. Of course he delivers an interesting deep dive behind the title.


You can't rely on JScript being present unfortunately. It can be disabled.


It probably should be disabled on most machines. The last time I heard about it was @swiftonsecurity complaining about it being an easily overlooked malware vector.

I'd be surprised if this capability is only available from jscript though. (and sad, I don't think jscript has been updated in years)


Can't spell unfortunately without fortunately.


What can you rely on then?


Uhm, for uninstallers? How about Windows Installer?

If you mean in other contexts... I think the point is you're not intended to be able to do this? Outside of uninstallers, running code that only exists in RAM is... the type of thing malware typically wants to do more than anything else.

But in terms of what's physically possible, I suppose there's the command prompt, PowerShell, and scheduled tasks? I'm not sure if all of those can be disabled.


Edit: I forgot about this, but there's also the official solution of MOVEFILE_DELAY_UNTIL_REBOOT. But (as with scheduled tasks) the delay can cause problems: https://marc.durdin.net/2011/09/why-you-should-not-use-movef...
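
For reference, a minimal C sketch of that official mechanism (the path is hypothetical; scheduling the delete requires administrator rights, since it goes through the PendingFileRenameOperations registry value):

    #include <windows.h>
    #include <stdio.h>

    int main(void) {
        /* A NULL destination plus this flag = delete at next reboot. */
        if (MoveFileExA("C:\\Program Files\\Example\\uninstall.exe",
                        NULL, MOVEFILE_DELAY_UNTIL_REBOOT))
            puts("deletion scheduled for next reboot");
        else
            printf("MoveFileEx failed: %lu\n", GetLastError());
        return 0;
    }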


funny because any application without a sufficiently advanced uninstaller should also be considered malware.


If you can just delete its directory (or its single file) and everything works, that should be fine?


I guess so... but then you're assuming that the user isn't saving data in that directory :-P

but honestly, please Windows devs... use MSIs, please

it makes me love you and i have so much love to give.


Yes. But if you stick everything into a single .exe, the user can't interfere.


> Yes. But if you stick everything into a single .exe, the user can't interfere.

Alright cowboy, so where are you storing preferences and settings?


Well, you stick that in some user directory that would stick around even after a 'normal' uninstaller runs.


In the .exe, obviously.


Self-modifying code is the most exciting type of user preference!


I’m probably missing something but why is an uninstaller allowed to inject code into explorer.exe? That seems like a massive security flaw?


Everything is allowed to do that. You're right that it's not good security-wise which is why Apple blocked that sort of thing years ago. On Windows unfortunately the whole Win32 ecosystem is very dependent on programs injecting things into other processes, the API makes it quite easy and there's lots of sample code for it. It's a major source of stability and crash bugs there.

For example, antivirus products do this all the time, as do many video drivers and other system utilities.

Also, Explorer has various plugin interfaces where it'll load third party code and run it in-process since the very first version.


It's never a security flaw that a program running with administrator privileges is allowed to do something.


And this has been the case since Windows NT in 1993, according to a presentation I saw from Sami Laiho, where he strongly argues that you don't need, and shouldn't use, an admin account as the default.

From Microsoft's documentation [1]:

> Administrator-to-kernel is not a security boundary.

I recommend the talk https://www.youtube.com/watch?v=Y09nAxZFKzc

[1] https://www.microsoft.com/en-us/msrc/windows-security-servic...


The security flaw is the administrator.


It's a security flaw that too many programs have too many privileges. Windows should have pervasive fine grained permissions like any other modern OS.


Yes and no. Windows has a very fine grained permissions system, including at the admin level. The problem is that it was designed for multi-user systems in the 90s, so the permission and security systems are mostly concerned about keeping users safe from each other, and having administrator roles for managing those users and doing system-wide tasks. Preventing a process from injecting code into another process by the same user running in the same session just isn't in the original threat model, since it's just the user screwing with their own stuff.

The shift towards protections from malware happened mostly as a consequence of Windows XP. There are now better controls, like assigning low-trust processes like a browser's renderer a low integrity level to prevent them from doing that. But it's also late enough that it's hard to rock the boat too much without breaking existing applications. Microsoft tried to make a clean break and offer more sandboxed applications with a user-friendly package manager (called the Microsoft Store) but this wasn't well received by app developers: most didn't use it at all, and those that did often opted out of the sandboxing.


Windows' pervasive fine-grained permissions are better than UNIX's; they go all the way down to OS resources.

The OS isn't to blame when people give root access left and right.

Actually this is why macOS got SIP.


Windows has a very fine grained permission system. But as you can see, the issue isn't a lack of this system.


What’s the difference between an uninstaller and any other process running as root?


The uninstaller is likely running with admin privileges or as SYSTEM. Despite its huge surface area, Explorer is not a super-privileged binary on Windows; it's essentially a userspace program with tons of open ports for many, many apps to plug into. Locking it down like this would likely cause a lot of apps to break. I know for a fact Microsoft keeps a very close eye on crash reports for their core apps such as Explorer, and if your app causes a fault in Explorer and you're registered on the Windows developer portal, you'll get notified of this when they roll out beta Insider Preview releases.


Fight fire with fire


Why is the recommended javascript way of doing it

    var fso = new ActiveXObject("Scripting.FileSystemObject");
rather than

    var fs = require('fs') // (or the appropriate ES6 incantation)

?


The latter is not "javascript" in general; it's just a Node.js API (Node.js being a specific bundle of the V8 JavaScript runtime plus native integration and libs like fs).

And the former, Windows version is JScript, which is an implementation of JavaScript just using a different name for trademark reasons, but tied to an old version of the JavaScript standard (which is confusingly called "ECMAScript" officially), plus some Windows-specific integrations, like ActiveXObject and COM/OLE support.

(There's also a later .NET version of JScript to add to the confusion).


It’s JScript. Not JavaScript and definitely not Node.JS


Because JScript was introduced back in Windows 98 and the modern APIs were introduced 10 years later (in the case of Node.js).


Windows doesn't ship with node.js.


Because it’s JScript, which was abandoned prior to ES6.


I think much grief would have been avoided if microprocessor architectures, from the get-go, had separated data and instructions.


That would be less fun though


There was a bug in an uninstaller, once, that deleted all of the files on the disk. Sounds like malware to me!


Someone found a reference to this! Wasn't quite all the files on disk, "just" C:\program files\, but still!

https://arstechnica.com/civis/threads/i-tried-to-uninstall-s...

https://news.ycombinator.com/item?id=37566895


There was a bug in some Linux driver installer that accidentally /usr, the whole thing.


EVE Online's patcher once deleted boot.ini (a file configuring their launcher that they had stopped needing) without checking the working directory, which caused it to delete C:\Windows\system32\boot.ini, a file important to booting the then-current Windows version.


It's impressive how incredibly bad the code must have been for that to happen


System32 is the default directory for cmd. Might have something to do with that.


Why does a third-party app have rights to all files without asking? Integrating the Android app-rights system into Linux and Windows is way overdue


It's a driver that probably includes a kernel module. It's expected that its installer runs as root.


I'm fairly sure Android apps running as root (like this driver installer probably was) will also have access to all files without asking. That's the danger of root.


Android apps themselves can't run as root. The apps that do perform operations "as root" do so by spawning a shell process running as root, usually using `su -c`. That's on a rooted device. Normally, all privileged actions are done by system daemons that run with appropriate permissions and that apps communicate with through the "binder" IPC mechanism. Those daemons also handle access control — both the regular Android permissions and the god mode for "system privileged" apps.


Thanks for the explanation, I'd never looked into it.

> The apps that do perform operations "as root" do so by spawning a shell process running as root, usually using `su -c`. That's on a rooted device.

So in a scenario like what we're discussing here (a dev deciding to build an uninstaller their own way), on a rooted device (as per the GP's context of "this coming to Linux and Windows"), the app in question could still run as root, and this permission set from Android would solve nothing?


An app that has root access can grant permissions to itself. It can as well remount /system as writable and make itself privileged by copying its apk to /system/priv-app


"Totally uncool dude!!!"


Why do we still need to install stuff? Why can I run webpages just fine without installing anything? Installation shouldn't exist, at least not from the user's point of view.


Because the tools we currently use (mostly programming languages) are too insecure to allow you to run any random code from the internet (that has full access to all your computer resources). And most users would be incapable of keeping proper security practices needed in this case.

Installed software is considered to have a distributor, i.e. a legally responsible entity that can be punished for any shenanigans.

I believe this is an outdated model and that with better tooling we could do what you envision.


>any random code from the internet

Like installer.exe?

Installation-free software doesn't have to be web software.


You had to install a web browser to run those web pages.


More than that, you have to "install" the code for every webapp that you use at least once (unless it's cached).


How do you differentiate between download and install? Is self-modifying code repeatedly installing itself?


There's no good distinction, but given that browsers have caches, I think there's at least a very blurry line there.


Then perhaps the distinction should be the programs Explorer labels as 'Windows Installer' type, which TFA is talking about, and not a blanket inclusion of all software that wasn't written in ASM on the device running it.


Installation of the OS and the browser can be amortized away over a large number of app installations.


Lots of people need to work offline sometimes.


I'm not sure if the suggestion is that all work should be online, or if all desktop apps should be portable.


My company doesn't let data leave the premises. It makes for some painful moments, but the data is worth tens of billions of dollars.


The only cases I can think of are games (which are huge and GPU-intensive) or stuff that deals with filesystem a lot (dev tools, torrent clients). Those areas are still anemic in browsers. Other than that no reason.


It more or less doesn't in MacOS, and has NEVER really been a thing.

While there ARE some tools that require a more invasive "installation" process (e.g., VMWare Fusion), the overwhelming majority of Mac software is installed by just dragging the application bundle into /Applications.

(App bundles are just special directories, more or less, so you're moving more than just the file, but it presents as a single thing.)

To remove an app, you just drag that bundle into the trash.

This usually DOES leave behind things like local user data or preference files, but those are inert text files and don't impact perf or machine behavior in the long run. Users typically waste more space on cat pictures.

The tl;dr is that "installers" and "uninstallers" only exist because Windows needs them. I have seen MANY MANY FOLKS come to the Mac and be VERY confused by this. "But you NEED an installation process! This can't work!" Nope. Windows needs an installation process.

I joked, in the 90s, that despite all the monopolistic chicanery from Redmond, their real lasting awful legacy would be the degree to which they lowered people's expectations about how computers worked, and I stand by that.

(Something else not needed on sane systems: "clean up" software. If you don't litter the file system with files when you add a program, you don't need special utilities to run that shit down and delete it later.)


It's a bit weird to act as if macOS didn't have installers and uninstallers. For one thing, there's the App store. And then, probably most developers use something like homebrew.

What you describe works for a specific kind of app, the ones that can be easily sandboxed and don't have shared dependencies.


Sounds like you don't understand the Mac world very well. That's okay.

D/ls from the store do what I describe above. The overwhelming majority of Mac apps going back to the pre-MacOS / OSX days work exactly as I described above.


The App Store literally just places an application bundle in /Applications.

Homebrew and MacPorts are used for Unix software that requires a Unix-style package management.


Most of my apps in my daily driver Mac were installed this way.

Notable exceptions: Microsoft office, Google Drive.


Also, heads up for anyone on MacOS, here's a fun experiment!

Go to your Applications folder. Right-click any app. Click "Show Package Contents." Take a good look around.


Are you trolling, or just unaware of this thing called 'data security' and 'software ownership' and 'IP' and the like?


A lot of people in the finance world prefer native apps to web apps. It's bizarre.


Load a sufficiently large and complex spreadsheet in Excel vs Sheets and you'll have your answer. Sheets will freeze and Excel will open in seconds.

There's also a feature gap that Google will probably never close, because why bother if you can't load the volume of data those power users are working with in the first place.


Aren't Excel sheets still limited to 1M rows or so?


I love articles by Raymond Chen. Very good.


I can only imagine the Win32 API team meeting prior to this...

A: So, people are resorting to injecting code in Explorer to delete in-use files in such numbers that it shows up in our top-100 crash report reasons

B: Well, maybe we should add a public API to Windows to support this incredibly common functionality that apparently has been missing so far?

A: Nah, let's just write a mildly condescending blog post that recommends using an unreliable workaround that is pretty much guaranteed to trigger any client-side intrusion detection software, that will set them straight

B: Right on!

(Meanwhile, somewhere, a third-party developer is gearing up to ship a kernel-mode driver to directly manipulate file system structures from their uninstaller, since their old solution kept crashing: can't wait to read the postmortem once the crash dumps from that one hit the Microsoft servers!)


Raymond Chen writes great stuff but gives a very one-sided picture of compatibility -- he doesn't mention all of the times that it was the other way around, with Windows doing something lame and application authors having to work around it.

Like, for instance, the time they decided that LaunchAdvancedAssociationUI(), the previously officially recommended way to show UI to allow the user to associate file types with a program, just wouldn't work anymore in Windows 10. Instead of opening up the Default Programs UI in Settings, it just displays a dialog telling the _user_ to go there -- which is even modal, so they can't even refer to it while doing so. No compatibility shim or grandfathering for old programs; they just broke all applications that used it, the way they had originally said good programs should do things for Windows 8.

Or the case of Dark Mode in Windows, which for some reason they've dragged their heels on implementing barely any Win32 support at all for -- even just simply a call to query whether it is enabled. The current silly recommendation is to obtain the foreground color through WinRT and do a dot product on it to compute luma and determine if it is a dark or light color: https://learn.microsoft.com/en-us/windows/apps/desktop/moder...

Or the fact that the official way of reporting bugs on the Windows APIs is the Feedback Hub, which is completely unsuitable for the task.

I don't have sympathy for the Windows team anymore. Their lack of developer support is partially responsible for all of the hacks that applications have to do to ship.


This. The deep dive is fascinating, but it really feels like a distraction from a product management failure. Microsoft has been in the OS business for 3 decades. Surely someone must have noticed that package management should be a core OS feature.


Please don't inflict more Microsoft package managers on us. All that needs fixing is there should be a supported way to delete "in-use" files just like you always could on Linux. People may make theoretical arguments about the virtues of locking files but the Linux behavior is so clearly the right thing in practice. Over the years I've encountered approximately zero problems with that behavior on Linux and dozens of problems with the Windows behavior, both as a user and as a developer.

I'm sure it would be hard to add such a thing to Windows while minimizing compatibility problems but I'm equally sure that it's possible and that it's worth the effort.


At this point the Windows package manager omnimisery has metastasized, and now idiot developers are insisting on writing their own installers even for platforms where it's totally unnecessary (harmful, in fact!)


Since Windows 2000. However, Microsoft isn't Apple; it doesn't play dictator regarding OS API adoption.


Oh, they have, many times! That’s why we have msi, app-v, appx etc.


Well, they kinda had to rush that meeting because they spent too much time on the previous meeting listing all the reasons why the Microsoft Store is a functioning package system and not at all a den of iniquity.


There are already many services and APIs for doing this. It’s more like:

A: There are a dozen ways to do this correctly. Which right way should we pick?

B: I don’t know, I found this code on the internet. Should I just use it?

A: Sure, if it’s on the internet it must be the right way.


> There are already many services and APIs for doing this

I... don't think so? The particular problem here, is that an uninstaller executable needs to delete itself from disk after doing its main job.

Other than using MoveFileEx with a NULL destination file and the MOVEFILE_DELAY_UNTIL_REBOOT flag, then suggesting/forcing a reboot, I can't think of a straightforward solution.
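
For reference, that variant is only a couple of calls (a minimal sketch of the documented flag usage; the helper name is mine, and error handling plus the administrator-rights requirement are glossed over):

    // Queue the running uninstaller for deletion at the next reboot.
    // MoveFileEx with a NULL destination and MOVEFILE_DELAY_UNTIL_REBOOT
    // records the delete in PendingFileRenameOperations; it requires
    // administrator rights and only takes effect after a reboot.
    #include <windows.h>

    BOOL DeleteSelfOnReboot(void)
    {
        WCHAR path[MAX_PATH];
        if (GetModuleFileNameW(NULL, path, MAX_PATH) == 0)
            return FALSE;
        return MoveFileExW(path, NULL, MOVEFILE_DELAY_UNTIL_REBOOT);
    }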

And that solution instantly lights up your JIRA with 2 tickets:

#1: We MUST not suggest/force a reboot! Users hate it!

#2: CRITICAL BUG: uninstaller.exe still present after uninstalling product

So, then you try things like 'create a Task Scheduler job to delete the file', which then adds:

#3: PRIO 1: uninstaller.exe crashes if Windows Task Scheduler disabled

#4: SHOWSTOPPER: uninstaller.exe still not always deleted after uninstall. Why is this so hard?

Et cetera, ad absurdum. This then escalates to code injection (as described in the linked article), and (you heard it here first) kernel-mode drivers. So, if you're aware of a reliable solution, feel free to share in a comment here, for the betterment of the world!


Resolution: this is just how Windows works, deal with it


>Other than using MoveFileEx with a NULL destination file and the MOVEFILE_DELAY_UNTIL_REBOOT flag, then suggesting/forcing a reboot, I can't think of a straightforward solution.

And what's the problem with this?


One of the comments on the article gives one example:

> When our product’s uninstaller sees an undeletable file (possibly a DLL loaded in another process), it uses the MOVEFILE_DELAY_UNTIL_REBOOT flag to mark it for deletion, and warns the user “Please reboot as soon as possible to remove the remaining files.”

> However, some users uninstall our product, just to be able to reinstall it later, to the same location. And of course they ignored the warning. Once they reinstalled it, everything works, until a reboot.


> And what's the problem with this?

That solution instantly lights up your JIRA with 2 tickets: #1: We MUST not suggest/force a reboot! Users hate it! #2: CRITICAL BUG: uninstaller.exe still present after uninstalling product


A new API would only work in fully updated versions of windows that include the API.

Programmers would hesitate to use it, because they can't be sure the user of the uninstaller will be on a fully updated windows install.

The JScript workaround might seem like a bit of a hack, but it has an advantage of working on every version of windows all the way back to Windows 98.


Schedule a cronjob / win equiv


This is from the almighty raymond chen tho.


Uninstallers should not exist. Installers should not exist either. Every OS should have a package manager. The name clearly implies what it does: it manages packages; this task should be left to specialized software, not to the user.

Linux distros were pioneers in this aspect, and the "stores" of the modern world are just clothed versions of package managers. Of course, a power user should have the right to change software sources (repositories). It should be possible to add decentralized package managers, like it is with Flatpak.

Installers and uninstallers are a relic of a time when most software people used was proprietary, running under a proprietary environment. Many Windows users are still locked in that proprietary closed world. If "Any sufficiently advanced uninstaller is indistinguishable from malware", I'd say that most (all?) Microsoft software comes with extremely advanced uninstallers.


The difference between a "package being installed by a package manager" and "an application being installed by an installer" can be as small as you want. Linux packages have "installers and uninstallers" in the sense that inside a .deb file for example there are scripts doing necessary tasks for installing and uninstalling. And whether they make a mess is only dependent on these scripts behaving in a way that doesn't make a mess.

So having a package format or package repository doesn't necessarily prevent packages from doing arbitrary or stupid things (there is no difference between a .deb file and an .msi file in that regard). The app store or repository might add a level of human vetting, but technically there is no difference.

I would argue that it's better to have most of the vetting be technical: ensure that the package/installer format doesn't allow making a mess. Sandbox everything, prevent writes in the wrong places, etc. More modern formats like Flatpak/MSIX/etc. are of course better in this regard.

And the key thing about them is that they prevent a mess on the technical level so that the human vetting (and thus repository) isn't really required.


You are right. Linux distros were pioneers and it used to be great, but it's gone off the rails now. Running kde-neon I have pkcon, apt/aptitude (for when I forget pkcon or one of its commands), flatpaks, AppImages, and the usual rigamarole of uninstalling snap and then setting up app-level installers for the things that previously only had snap candidates (lookin' at you Firefox). On the RH side I have both yum and dnf? Mokay.

Back when I was installing KDE on top of another distro, I'd also have the Discover front-end sitting next to Gnome's Software Center (or whatever it was). That should be somewhat expected and is more an indication of the bloat that major window managers have embraced.

The front-ends are decent about showing what source a piece of software came from, which is nice, but the last time I used them they didn't stop one from installing the same piece of software from two different sources.

All to say, back in the day it was so convenient to have yum or aptitude just take care of things: installing software, resolving conflicts, etc. Now we're in a bit of a mire.


> The name clearly implies what it does: it manages packages; this task should be left to a specialized software

Yes, but the modifications that are possible are limited by what the API of the package manager provides, like Windows Installer does. If that doesn't work, you create custom install/uninstall logic, which has the potential to be less robust/reliable than what the package manager provides.


I mean all Linux package managers allow running arbitrary code on install/uninstall and it seems to work well enough.


It has also failed enough times for me, but luckily it often works like "on error resume next", so the package manager doesn't get stuck in a bad state.

Regardless, installation on Linux is often just dropping some files somewhere and perhaps a modprobe.

In Windows you have so many things you can do: filesystem, registry, COM registration, GAC, file associations, etc.


It's not really that dissimilar. People like to act as if Windows is so complicated and convoluted, when over time the Linux desktop has invented all the same concepts.

    registry -> dconf
    com registration -> dbus objects
    gac -> shared libs
    file associations -> dconf settings
    windows scheduler entries -> systemd user session


Right, but most of these entries are still (1) only files or (2) are almost never touched by most packages.

And the GAC isn't really comparable, perhaps WinSxS is comparable, but the GAC requires a specific API and isn't a matter of just dropping a file in /usr/lib.


It only works because packages are distributed via a carefully curated, centrally-managed repository with socially-enforced norms.


I find that Linux software culture also leaves stuff behind and leaves it up to the assumed system mastery of the user to clean things up.


That's too broad and misleading.

Package manager tracks every single file installed by a package, preventing overwrites by other packages.

Strict permissions prohibit software from littering all over the place.

What are you talking about?

Do you want the package manager to remove your personal data if it was created by an application you decided to remove?


Maybe I do! Many Windows custom uninstallers have an option to remove "settings" while uninstalling.

I sure have lots of junk in ~/.config I don't use anymore and probably never will.


It should certainly be capable of removing everything the application created that wasn't manually exported by the user or saved into a user data directory. E.g. everything it made in XDG_CONFIG_HOME, XDG_STATE_HOME, XDG_DATA_HOME, XDG_CACHE_HOME, etc. Probably not anything from XDG_DOCUMENTS_DIR or the other "user data" directories.


> Uninstaller should not exist. Installers should not exist either. Every OS should have a package manager

What is a package manager, if not an (un)installer shared between projects?


It’s a single, standardized uninstaller governed by the creators of the OS.

Sharing between projects is not how I’d frame this. Using a shared/standard OS facility is closer. This matters quite a bit when it comes to establishing trust.


well you would normally trust the package manager that comes from your OS/Distro provider, as opposed to an uninstalling script from a developer anonymous to you


It's not about proprietary or not, it's about the level of discipline around installing software. iOS is completely proprietary and has a very disciplined system for installing and uninstalling apps. The Windows non-system of installers/uninstallers is really anarchy, except that to the end user it looks like a system, because the installers/uninstallers mostly look the same, mainly because

https://en.wikipedia.org/wiki/InstallShield

became a defacto standard endorsed by Microsoft and everybody else tries to look like it.

The real Windows "quirk" that this article skirts around is that Windows won't let you delete a file which is open and of course if you are running an executable that counts as an open file. This of course can be a huge hassle if you need to delete something and can't find the process which is holding the file open. It's particularly annoying for things like software builds where you really want a script that automatically and reliably clears a lot of stuff away so you can make a fresh start.

Now, POSIX has the opposite "quirk" that you can "delete" an open file and it completes right away because all you did was delete the link from the directory to the file. The file still exists because it has a link from the process that has it open, it really gets deleted when that link goes away.

That can get you in just as much or more trouble than the way Windows does it can, for instance if some series of events caused your system log to fill the disk, you can "rm" the log and the disk is still full. To really free up the space you need to "rm" and then restart the log daemon.
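
A minimal illustration of that unlink behavior (my example, not from the article; it compiles on any Linux box):

    /* POSIX: unlink() removes the name immediately, but the data
       lives on until the last open descriptor is closed. */
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("scratch.log", O_RDWR | O_CREAT, 0644);
        unlink("scratch.log");          /* name gone, disk still "full" */
        write(fd, "still here\n", 11);  /* writes keep succeeding       */
        close(fd);                      /* space reclaimed only here    */
        return 0;
    }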

As for Linux it's got the problem of a proliferation of package managers in the sense of things like flatpak and snap and generally the (dumpster fire I think) of Docker images (like the place where they thought Docker would help stabilize their Python installations but somehow our data scientists kept finding strangely broken Python images that were real nightmare fuel for me; e.g. default character encodings that I didn't think anybody actually used)

Windows has long had the ability to install software in C:/Program Files/ and D:/Program Files/ and often in a user's home directory (if the installer supports it). 20 years ago I thought rpm's sucked and thought "any software worth installing is worth building from source", where I could do

    ./configure --prefix=/usr|/usr/local|$HOME && make install
that is, I didn't have to beg my sysadmin to do something like

   apt-get install nethack
I am impressed with the comparative speed of installing from a package manager, but I still can't install a local copy of software with one, and I should be able to.


Installing / uninstalling / updating software should be a service provided by the OS. Letting vendors do it themselves just gives them an opportunity to mess it up, and they frequently do.


Windows provides various installer systems (.msi and the later iterations for UWP applications). It's unfortunate that MSI files aren't used more often, because they're more reliable than their uninstall.exe counterparts. They're not unlike .deb/.rpm files, except they integrate with the system GUI better when it comes to prompts and variables.

I believe MSI files were originally supposed to be almost atomic in use (software is either installed or uninstalled), providing transactional operations for installing software and limited UI options. I remember (but can't find) an old quote from someone on the team that designed the file format, basically saying something along the lines of "giving developers the option to execute random commands from MSI installers was the worst decision we've made".


I can't speak for whether Apple gets it right, but my experiences with the various package managers on Linux have not been any better than my experiences with installers on Windows. I've settled for avoiding system packages for anything I can build from source since system packages are always outdated and often Strange, and I try to avoid third party package sources and weird stuff like Snap or Flatpak since it's also historically been a source of problems for me. Maybe life is better outside of the Debian sphere though, since I've only dealt with Ubuntu and Debian.

Windows does have installer/uninstaller infrastructure called MSI (https://learn.microsoft.com/en-us/windows/win32/msi/windows-...), but ultimately it's up to developers to choose to use it.


I don't quite get this, I've been using Debian-esque Linux since like 1997 in various forms and have had problems with apt-get/apt maybe five times since then in total, and it's always fixable with a little work. I've seen this a lot and I haven't really understood the problems.

I saw Linus from LTT brick his installation (in Pop_OS! I think?) but that was a clear user error.


>but that was a clear user error

A user error that can happen to any user who isn't Linux savvy and just wants to play games, not learn how a package manager works or that it can uninstall your desktop environment if you aren't proficient with Linux, is not a user error but an OS error.

How many MacOS or Windows users expect that going through the installation steps for Steam could uninstall their desktop environment? For that user demographic, that is a clear OS issue.


There was a Steam package error, the error warned it might be temporary, the installer said it wouldn't continue because that would remove "popos-desktop" amongst other things.

So, he opened a console (like any user?), then used apt-get ... which had a WARNING ... "This should NOT be done unless you know exactly what you are doing!". He then had to type "Yes, do as I say!" in order to "do something potentially harmful" ... and then, what a surprise it did something harmful!

That's a user error that will only happen to people who are cocky, people who are idiots, or people trying to firm up their long held stance that 'Linux isn't for gaming'.

I saw the LTT video the day after release, loaded up a VM, installed Pop_OS (my first time) and installed Steam, no issues. Very simple. All button clicking.

>going through the installation steps of Steam //

That's a mischaracterisation: he went through the Steam install steps (clicked install by the Steam icon), the OS told him there was an error and refused to do it. End.

Then he went sudo-ing and ignoring warnings. "You can delete System32 this OS isn't ready for users!".

You'll presumably tell me now that installs never fail on Windows, despite my having experienced them myself.


That would happen to any other user trying Linux for the first time, because not every user knows they should first update their packages before trying to install something. Why doesn't the OS do it automatically, like any other OS?

It's 100% an OS UX error you're trying to spin into a user error.


It's an application error (a transient error with Steam); MS Windows doesn't even offer general application updates.


The problem isn't "apt" so much as "the system apt packages do weird things and as a result I can't build this open source package" or "I updated Ubuntu and now varnish and znc don't work even though I was using the system packages" or "the system apt package for mono is just plain broken and it conflicts with one I build from source, so I have to uninstall it"

mundane gripe: Uninstalling an apt package is too complicated and it is beyond me why it isn't a single command in 2023. I have to skim stackoverflow answers every time I need to do it (multiple times a year).


YMMV. I just bricked a Debian install last month by using the package manager to install display drivers and then trying to uninstall them to try a different driver. Had to go edit files in the console on a 4K display and just cross my fingers.


I think part of it is that Debian and Ubuntu try to ship a good, usable default config, which is not necessarily the upstream default. I think it makes it hard to debug config issues.


This makes me think of the recent changes to pip on Arch Linux that recommend installing modules with pacman.

Pacman is ok, but I recently destroyed the Ruby installation on a computer while trying to use Vagrant and other Ruby packages. I still haven't figured out how to fix it. Lesson learned, stick to AUR and pacman repo.


Virtualenvs are your friend. I use pipx for Python utils not in my OS repos, never had a problem with that. (`pipx install yt-dlp` creates a venv in ~/.local/pipx/venvs, installs yt-dlp there, then symlinks the executables to ~/.local/bin.)

For per-project dependencies, Poetry is pretty good, although there are other options too.

For Ruby, I think Bundler is the venv/Poetry counterpart. Not sure what you could use for installing global tools though.


I keep forgetting Windows-isms that won't allow you to delete the executable file of a running process. I guess that's also why the arcane .dll upgrade process / WoW is so necessary.


It is probably one of the reasons Windows asks so very often to be rebooted for any minor change.


>won't allow you to delete the executable file

Any open file


> Any open file

Any file that was opened without specifying FILE_SHARE_DELETE in the call to CreateFile[1] (the Win32 equivalent of open(2)). Unfortunately, most language runtimes that wrap CreateFile tend not to pass that flag.

[1] https://learn.microsoft.com/en-us/windows/win32/api/fileapi/...
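
Concretely, the difference is a single flag in the open call (a sketch; the helper name is made up):

    // Open a file so that other processes may delete it while we
    // still hold a handle. The only unusual part: FILE_SHARE_DELETE.
    #include <windows.h>

    HANDLE open_deletable(LPCWSTR path)
    {
        // While this handle is open, DeleteFileW(path) from another
        // process succeeds; classically the name only disappears
        // once the last handle is closed.
        return CreateFileW(path, GENERIC_READ,
                           FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                           NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    }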


Indeed - it also reminds me that languages like Go[0] and Java[1] declined to even attempt using it

[0]: https://github.com/golang/go/issues/32088#issuecomment-53759...

[1]: https://bugs.openjdk.org/browse/JDK-6607535

So to me it's just not there...


Apparently some parts of this are quite recent, huh[1]:

> jstarks commented on Jun 18, 2019:

> [I]n the most recent version of Windows, we updated DeleteFile (on NTFS) to perform a "POSIX" delete, where the file is removed from the namespace immediately instead of waiting for all open handles to the file to be closed.

[1] https://github.com/golang/go/issues/32088#issuecomment-50285...


Notably Rust's standard library does allow deleting files it opens by default. https://github.com/rust-lang/rust/blob/735bb7e5df185cc24e565...

While full Unix-like behaviour is only available on Windows 10 for the past five or so years, you can still have the old win32 behaviour on older systems (delete once the last file handle is closed).


Can a running executable start with this flag, so that its file can be removed?


Probably not, since Windows uses the executable file as backing for memory mapping.


I don't think that's true any longer. Windows now defaults to setting FILE_DISPOSITION_POSIX_SEMANTICS

https://stackoverflow.com/q/60424732/1362755
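
If you want that behavior explicitly rather than relying on the default, newer SDKs expose it through SetFileInformationByHandle (a sketch; it needs Windows 10 and a recent SDK, and I haven't verified it on every build):

    // Explicit "POSIX delete": the name disappears immediately,
    // even while other handles to the file remain open.
    #include <windows.h>

    BOOL posix_delete(LPCWSTR path)
    {
        HANDLE h = CreateFileW(path, DELETE,
            FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
            NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
        if (h == INVALID_HANDLE_VALUE) return FALSE;

        FILE_DISPOSITION_INFO_EX info = { 0 };
        info.Flags = FILE_DISPOSITION_FLAG_DELETE |
                     FILE_DISPOSITION_FLAG_POSIX_SEMANTICS;
        BOOL ok = SetFileInformationByHandle(h, FileDispositionInfoEx,
                                             &info, sizeof(info));
        CloseHandle(h);
        return ok;
    }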


But there must be an API to unlock files that programs like "The Unlocker" use? Or do they just enumerate the other processes handles to that file and close them?


> Or do they just enumerate the other processes handles to that file and close them?

AFAIK, that's exactly what they do. And it can cause problems, see for instance https://learn.microsoft.com/en-us/previous-versions/technet-...


The kernel actually allows for that, but Win32 apparently does not.


Reading stuff like that, just makes me love package managers even more :)


I love how it goes from hardcore x86 assembly right into beautiful JavaScript lol.


Software shouldn't uninstall or update itself. There should be a package manager to do this.

Then at least you have one attack vector less per application.


When I read the title I had to remind myself what an uninstaller even is. It's been a while since I touched something like that.


I think package managers are the single most impressive feature Linux distros have over Windows. Everything else I use has some kind of equivalent; for a dev environment you can sort of get by with MinGW and MSYS. But if there's anything on Windows remotely as capable as apt or pacman, I haven't heard of it.


> But if there's anything on Windows remotely as capable as apt or pacman, I haven't heard of it

winget: https://learn.microsoft.com/en-us/windows/package-manager/wi...


Chocolatey too


And who uninstalls or updates the package manager?


The upside of a walled garden is that it is walled and gated. The downside is that it is walled and gated.


I don't feel gated at all by my linux installation. If I want I can customize everything, but at least I don't have to deal with a thousand installers and update processes on a day by day basis.


I love dnf and apt, but I dread attempting to uninstall anything using them.


I have been using pacman for over 10 years; I only used apt and dnf for a couple of weeks.

Yes, also on servers I run Arch... lets me sleep better :)


Anyone can create a compatible repository to any open package manager. No gates, no walls.


There are custom repositories and the ability to install packages from disk


And the package manager provides hooks to run custom logic when uninstalling. Your arbitrary constraints don't solve the problem. Also, system-wide package managers almost never have an option to easily install multiple copies of the same software.


Software shouldn't update itself? What? That's crazy.

If browsers weren't updating themselves we would have a huge security mess.


The package manager updates the browser, just like it updates everything else.


Why would software want to rely on some external software?


The guys at MS are smart enough to decompile assembly but still not enough to have a proper inode-based filesystem where you can delete files that are in use.


It has little to do with the filesystem. Windows has OS level locks. In the case of a running executable, the mapped memory holds a lock on the exe file to prevent deleting it. This is intentional. If it didn't hold the lock then it would be possible to delete the exe file on modern versions of Windows.

Edit: since there still seems to be confusion I'll try to be clearer. On NTFS you can delete an open file. This is a solved problem. The DeleteFile API even does this by default now. The thing that prevents deletion is an OS lock. This lock only prevents deletion. Renames are still allowed.


I didn't know about the lock. What happens when you delete an open file in NTFS? Is the data still accessible for programs that have the file open? If yes, then why the lock? If not, then that's a problem.


[flagged]


There’s a Raymond Chen article (or maybe it’s in his book) about people that complain “why doesn’t windows just let me do…”, and he then goes on to give several examples and show what some of the downsides are if you allowed it. This article looks similar, but I’m not sure it is[0].

The name calling and lack of actual examples shows that you probably haven’t even thought about potential downsides. I haven’t either, but it’s because I don’t really care about this particular issue. I just care enough to point out that the “mentally-challenged foot soldiers” at Microsoft may have actually run into some exploits or malicious behavior because they did “allow useful things” at one point.

[0]: https://devblogs.microsoft.com/oldnewthing/20050607-00/?p=35...


> I see no reason not to allow this.

That's why people at Microsoft made those decisions and not you. The most obvious reason would be to use the executable itself to back the memory, which the comment you replied to already hinted at. Instead of loading the entire executable into memory on application start, you just create some memory mapping entry. As the code executes and accesses different parts of the executable, the page fault handler will load the required pages on demand. If you only access a small part of the executable, only a small part of the executable will ever be loaded into memory. If you run low on memory, you can just use the page frames holding the executable for other things; you can always load pages from the executable again if needed.


I think maybe you should familiarise yourself with how e.g. ext4 works.

You can unlink a file and have a process still hold a reference to the inode. This allows you to continue reading (or executing) a file which may not have been fully mapped yet even after its last filesystem reference is gone.

There really isn't that much of a good reason to disallow deleting the files (assuming NTFS is capable of supporting a similar situation) except maybe for the fact that loosening a guarantee is still breaking an API in this scenario (someone might be relying on an executable file not being able to be deleted for some bizarre program feature).

I do wish Microsoft allowed more tweakables like this (so you can disable legacy behaviour if you feel you don't need it).


1. I know how Ext4 works in this case, and I don't care.

2. I already explained how I would like the system to respond. Ext4 or any other filesystem has nothing to do with it.

3. Processes don't hold references to inodes. That's nonsense (simply because there's no such thing as a "reference to inode" in any system interface). Process doesn't get a reference to the file from which its executable code is loaded at all (unless you count the first argument -- but that's the file's name, not the file). Who put this idea in your mind I don't know... and any filesystem, be it Ext4 or anything else, has no role to play in this. Now, the loader may decide to store the inode of a file or not -- yeah, whatever. A filesystem may even decide to change the inode of an existing file, if it so wants -- since there's no system interface that relies on inodes, it's all fair game. For example, Shell will usually read the text of a script it's executing a block or maybe even just a line at a time. If you modify the file during its execution, it's usually going to cause syntax errors, but, if you are lucky, you can even write self-modifying programs in Shell. Now, will Shell perform like this if the inode changes? -- Knowing Linux tools, I'd say, probably not. It will probably fail in some spectacular way. But do I care? -- Nope. I want this power to delete the executable file as it's being executed. I see no reason not to have such power.


I wasn't replying to you.

Also, #3 is just unnecessary nitpicking.


> I know how Ext4 works in this case, and I don't care.

I think the comment to learn about Ext4 was aimed at me.

> I already explained how I would like the system to respond. Ext4 or any other filesystem has nothing to do with it.

As I explained before, Windows - and also Linux - made the reasonable decision to back the memory of a process with the executable file so that the memory can be easily paged in and out. This decision forces you to make the executable file non-deletable while the process is running.

This is true for Windows and Linux; neither of them will delete the executable file while the process is running. Some file systems, including Ext4 and NTFS, support initiating a delete while the file is still in use, which will make the file look deleted and eventually also delete the file. But this does not truly delete the file until the process exits, i.e. you cannot reuse the disk space of the executable file immediately.

So depending on what you want from deleting the executable file while a process is running - do you only want it to look deleted and eventually disappear or do you want to reuse the space of the executable file - the file system might give you what you want even if processes are backed by their executable files. Or it might not.

> Processes don't hold references to inodes. That's nonsense (simply because there's no such thing as a "reference to inode" in any system interface).

It would of course be a bad idea if processes knew about inodes; that is an implementation detail, and not all file systems use inodes. That does not, however, mean that processes cannot hold references to inodes: a file descriptor is a reference to some file, which will be implemented as a reference to an inode on Ext4.

> Process doesn't get a reference to the file from which its executable code is loaded at all (unless you count the first argument -- but that's the file's name, not the file).

The process does not get that reference but the executable file is loaded as a memory mapped file, so the virtual memory manager will have a file descriptor for the executable file and indirectly reference an inode on file systems using them.

> I want this power to delete the executable file as it's being executed. I see no reason not to have such power.

You can have this if you give up loading executables as memory-mapped files. Or, if it is good enough for you that the file looks deleted and eventually gets deleted but the disk space does not become reusable until the process exits, you can use a file system that supports deleting open files.

Take some embedded system: you have 4 MiB RAM, 8 MiB flash, and a 7 MiB executable. The only way to run this executable is by loading it as a memory-mapped file; you can neither load it completely into memory nor create a big enough swap file.


What happens when you run an executable from a FAT-formatted partition under Linux? I would guess Linux also no longer allows deleting the file while it is running, right? In the end this is a feature of the file system: can you delete open files?


On Linux you can unlink open files even on a FAT formatted partition (I just tried it with busybox).

It's actually quite intriguing how this works.

When you delete a file on FAT it seems to remove the directory entry, but does not update the free space information and doesn't free the clusters. If you run a fsck.fat on a filesystem in this state, it frees up the space (likewise, if you just let the process exit and unmount the disk normally, this space is also freed).

Presumably the information about where the file exists is only kept in RAM once you delete the directory entry. This should be sufficient to let anyone with an open file handle etc to keep reading it (including the kernel reading additional pages of the executable).


So essentially the same as with Ext4, either sans the orphan list or maybe they even store one somewhere so that they can clean up while mounting in case the system crashed.


This is reasonable, but this does not necessarily preclude the file from being deleted from the directory.

If an open file is deleted, it's no longer enumerable in the file system. But its pages / blocks / whatever are still there on the disk, marked as used, and can back the page cache.

Once the file is closed, the filesystem notices that the blocks do not belong to a file accessible from outside, and garbage-collects the data blocks. It all closely resembles reference-counted GC.


As the original comment mentions, this issue doesn't exist on Linux because the FS is inode-based. The file content will be removed only when the last user is gone.


So what? Building an operating system necessarily requires making countless design decisions, trading some things for other things. Over the decades the best choices may even change as technology advances but you might be unable to change course because you have a huge number of installations and would cause a lot of issues.


> So what?

So your entire comment was completely wrong. It’s not shameful to admit you’re wrong, it is shameful when someone else points out your mistake and you pivot to a “so what”.


No, you are confusing things - reasons are not necessities.


This is exactly what linux does too, inodes just let users delete/move the file on disk while the executable is running.


I have very little respect for people at Microsoft when it comes to the technical side of things. Whenever I have to deal with their "creations" it's like they were motivated by malice, or part of their frontal cortex had gone missing.

> The most obvious reason would be to use the executable itself to back the memory which the comment you replied to already hinted at.

Imagine that I've already read that, and I still see no reason to do that. So what? The code of the program will go missing? -- What's the big deal? Kill the process. The user wanted to remove it anyways... Let the user decide what to do: the system shouldn't second-guess me.


Most users do not want a running program to corrupt / crash on them - even if they deleted the backing exe.


In practice, this isn't really true? Like, when do users delete programs manually anymore? It's almost always some uninstaller doing it from some installed appstore-type program (Steam, Adobe Cloud, etc). And usually the user is deleting the whole directory, which almost always has additional files that the program needs to actually work. Those files will be deleted, so their install gets corrupted either way.


So what? I want it, and I see no reason not to let me have it. Linux was designed with this main goal in mind: let people take ownership of their computers and do whatever the hell they want with them.

If your Linux doesn't do it -- I don't care. I'll make my Linux do it. Because that's how I want it to work.


There are some brilliant people at Microsoft. Maybe not in the product end of things, but there are very few brilliant product people anywhere ;)


Obviously I cannot know everyone working for Microsoft. But I do know a few people. Also, since I would never work for Microsoft, back when I'd spent many years working for the same employer and either wanted to practice interviewing (because I thought I might be headed for the door soon) or just wanted to keep things fresh, I'd apply for jobs with Microsoft.

And invariably I'd hit the most boneheaded interviewers I'd ever talked to. Arrogance is common to all interviewers in the tech giants; Microsoft is no exception, of course. But in the context of a glaring lack of any kind of ability, arrogance hits a lot harder...

What also stands out is that programmers working for other companies would typically know something about the platforms their company didn't use. Like, if you talk to a person at Google's RSE department, they'd still know something about how things are on MacOS or on MS Windows. Microsoft programmers are completely oblivious to anything outside Microsoft. They believe that whatever garbage they came up with is the best thing there is, never trying alternatives. But in the rare cases when they do, their knowledge is very distorted and superficial.

Here's a real case of someone who had worked for Microsoft for about ten years and branded himself a "Visual C++ programmer". This guy started about the same time as me in a company which was transitioning from a bunch of Windows tools to Linux. Part of the transition was to move away from Perforce to Git. It was the time when even small companies would self-host Git, and the typical way to do that was to create Linux users on the server hosting the Git repo and give programmers SSH access to it.

The sysadmin there was new to Linux, but we were kind of friends, especially since I had more Linux experience, so he'd come and chat with me about his job. One evening he comes to me and starts asking how I'd set up Wine. Intrigued by why he'd need that... soon I discover that the "Visual C++ programmer" guy had requested that the sysadmin install an FTP client on the server, so that he could access the Git repo... and that FTP client had to be some Windows-only junk, and so the sysadmin thought that maybe it would run on Wine.

It was so many stupid things and of such magnitude all at once... I almost had a spasm laughing. And that's, basically, how my experience with Windows programmers usually went. I never had a moment of "hey, this guy may be OK, not like the others" or anything close. Somehow it was always bizarrely otherworldly comedically stupid.


Of course they're smart enough to have a proper inode-based filesystem. They're probably just not smart enough to swap out the file system their customers are using without their customers getting mad at them.


Allowing you to have inconsistently valid data, where a file can both exist and not exist depending on who's asking, is the opposite of smart.


A file exists for those who have the access rights, and does not for everyone else.

In that case, the file exists only for the process which has already opened it.


I just imagined the file being part of the mob, and somebody coming to ask if it was there. The body guard just responding with a “who’s asking?”


That's why Dijkstra was against the idea of explaining CS concepts by anthropomorphic analogies, I guess.


With proper capabilities (the correct way to do security), this is just normal. If you don't have the capability, the object does not exist. Capabilities can be revoked at any time too.


it is a trade off


What do you mean, "smart enough to decompile assembly"?

That's not exactly rocket science. Anyone can do that, there's tools for that.


NTFS was doing B-trees in the 90's far before btrfs was a twinkle in someone's eye; unless some other UNIX was doing it.


NTFS was not an innovative file system. It has just used several innovations introduced earlier by HPFS.

The High-Performance File System, a.k.a. HPFS, has been included in OS/2 1.2, which was launched in November 1989.

HPFS was a very innovative file system (its chief architect was Gordon Letwin). It has introduced the Extended File Attributes. It had directories implemented with B+ trees, like IBM VSAM (introduced at some time between 1973 and 1978). It had long file names and Access-Control Lists, like the Multics FS (1965). It used cylinder groups like the Berkeley FFS (1983). It used file extents, like the SGI EFS (1988).

Among the UNIX file systems, the SGI XFS (1993, almost simultaneous with NTFS) has been the first which has added all the innovations introduced by HPFS.


Apple's HFS used a B*-tree starting in 1985.


IBM had used B+ trees for implementing the VSAM "catalogs" (IBM lingo for directories a.k.a. folders) already about a decade earlier.


I love the reference to the Arthur C. Clarke quote

(It's weird that this comment was upvoted twice, now downvoted twice)

This is the quote I'm referring to: https://en.m.wikipedia.org/wiki/Clarke%27s_three_laws

It's the third of his 3 laws


Imagine knowing where each file in the system comes from.

Imagine having a package system, instead of that crap.


Imagine having 2,000 different config file formats you have to edit within a terminal instead of a central registry hive.

Imagine a file explorer that doesn't simply prompt for admin rights when you need them, instead silently failing.


Imagine having a central registry hive, yet have programs do whatever they want, including saving config files in random locations and formats.

Imagine using a file explorer you don't like.


Snark aside, when it comes to nix boxes, I've personally basically given up - for all of the "system" software I trust apt or other package managers, whereas in the case of any "services" I want to run (mail servers, web servers, backup services, databases, APIs) everything is run in containers with custom bind mount directories, to not pollute the host file system with crap that might get left over when removing a service or putting it on another node.

So something like "/var/lib/postgresql/data" in the container becomes something like "/app/my-postgres-service-12/var/lib/postgresql/data" on the host.

I have just one directory to backup, I can also move it to different nodes entirely and OS upgrades don't break anything either due to the running software being decoupled from the OS somewhat.

But my Linux and Windows desktops? It's absolute Wild West over there and I just reinstall the entire OS every few years - they're beyond saving.


For someone who is in the OS business since ... 1990? ... it's kind of sad that Microsoft didn't make package management a first-class citizen.


Well, I like the "workaround" that Microsoft proposes for a limitation of Windows that shouldn't really exist to this day.

UNIX resolved this decades ago: the filename is just a link to a structure on disk. You can delete a file even if it's used, since the file is there but you only delete the reference (filename pointing to the inode) that it has. Programs that have the file open can continue using it. When the file is no longer open by any programs the reference count is checked: if it's zero it means that the file can be safely deleted.

Not a difficult thing to implement, since they did it in the 70s... still Windows doesn't get it.


There's pros and cons both ways, it's an implementation decision.

Under Linux, I have occasionally had the challenge of figuring out what process is holding open a large file that has been 'deleted'. This starts with recognizing that you're getting disparate values from different tools you might use to look at disk utilization. It's confusing as hell the first time you run into it without some established context as to why that might happen.

On deletion of a file that is open by another process, Linux does not warn me about that. Windows will give me a clear error message that the file is open by another process. Linux chooses to let the problem sit because it might just take care of itself once the process ends, but that decision kinda sucks when your disk is full and you're scrambling to get some space back. Windows forces me to deal with the conflict right now. That adds work in some cases while avoiding the confusion and risk factors of multiple perspectives about a volume's utilization.

It's not "Widnows (sic) doesn't get it". Windows made different choices. As another comment expressed, Windows has all it needs to do this at the FS layer, but chooses to use OS locks on top of that.


Isn't that what the second line in the JavaScript snippet does: delete itself while the script is still in use (i.e. running)?

In any case, I am not sure the Unix way is the best API design here. Sure, it does help in these scenarios, but now you can have hidden files lying around in the filesystem being used that I can't easily tell are even there.


Honest question, how come I can install Steam, Signal, Teams and other "modern" apps on linux without the same issues that Microsoft Windows has?

I know RPMs and DEBs follow a fairly strict pattern, in that they keep track of everything installed so it can easily be cleaned up. Why can't Microsoft launch its own set of standards that achieve the same goal?


These days Windows itself is becoming indistinguishable from malware.


At work I opened a link, and despite Firefox being my default, it opened in Edge.

OneNote also hijacks your filesystem, giving Microsoft full control over it via the internet.

Not to mention, many of the windows defaults are impossible to uninstall, and if you figure out how to, it will reinstall.

Seriously, Windows has gone full malware/bloatware.

I didn't want to, but I switched to Linux. The strange part: despite Linux being 'rough', I'm saving a ton of time. Multiple times a week Windows would automatically reboot, causing me to reopen everything (which is slow), figure out what to do with autosaved files, and possibly lose some data. On Linux, I don't waste time on these reboots, and can spend those 5-10 minutes solving the 2 issues I ran into. Now those issues are solved and Linux just works.


The idiomatic way to write an uninstaller:

1. Clean up whatever you can in the uninstaller. If you're a decent human being this includes the registry, appdata, temp, and a toggle to delete wherever your app spews its config.

The only thing left should be the uninstaller .exe and the directories leading up to it, since Windows doesn't allow you to unlink an exe/dll that's loaded in memory. Your uninstaller should be statically linked so it doesn't need to clean up its own .dlls.

2. Figure out how to delete the uninstaller. This can either be some stack smashing butt clenching fuckery, like the guys from the article did, or you can be a normal fucking person and just atexit() a batch script that does

    SLEEP 5
    RMDIR program_dir /whatever /flags /needed
The SLEEP is to ensure the uninstaller is closed

You can put the batch script in %TEMP% and hope something will clean it up, or if you're feeling extra nice you can set up an action to get it cleaned up at next boot.

After that you're also equipped to write self-deleting malware, which is a far more applicable skill.
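
The spawning side of that batch trick is only a handful of lines. A sketch under my own assumptions (the names, paths and cmd invocation are illustrative, not from the article):

    // Write the cleanup script to %TEMP%, launch it detached, then
    // let the uninstaller exit so its .exe is no longer "in use".
    #include <windows.h>
    #include <stdio.h>
    #include <stdlib.h>

    static void spawn_cleanup(const char *program_dir)
    {
        char bat[MAX_PATH], cmd[MAX_PATH + 16];
        snprintf(bat, sizeof(bat), "%s\\cleanup.bat", getenv("TEMP"));

        FILE *f = fopen(bat, "w");
        // SLEEP needs the old Resource Kit; "timeout /t 5" or the
        // classic "ping -n 6 127.0.0.1 >nul" work on stock systems.
        fprintf(f, "SLEEP 5\r\n"
                   "RMDIR /S /Q \"%s\"\r\n"
                   "DEL \"%%~f0\"\r\n", program_dir); // script deletes itself last
        fclose(f);

        snprintf(cmd, sizeof(cmd), "cmd /c \"%s\"", bat);
        STARTUPINFOA si = { sizeof(si) };
        PROCESS_INFORMATION pi;
        if (CreateProcessA(NULL, cmd, NULL, NULL, FALSE,
                           CREATE_NO_WINDOW, NULL, NULL, &si, &pi)) {
            CloseHandle(pi.hProcess);
            CloseHandle(pi.hThread);
        }
    }

Hook it up via atexit() (with the directory stashed in a global) or just call it as the last thing before exiting.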


I did that in my first years of creating uninstallers. But I was unhappy with the solution. Too much convolution, extra stuff to take care of, etc. So then I was like "why can't I just delete myself at the end of the entire process?". I mean, you can definitely unlink yourself (see the Unlocker utility) using the same APIs. Except I started to run into faults. For some reason, on some systems, this was a bust. How am I writing my self-deleting uninstallers nowadays? Very simple. Create a temporary removable drive in memory (see RAMDisk), write the 2nd part of the uninstaller there, then at the end of the process corrupt that disk intentionally. It will "nuke from orbit" and leave no trace.


That is pretty much the suggested solution at the end of the article. There, the author also removes the temporary script.

The solution really isn't that hard, it might just feel a bit "dirty" (using a scripting language to do some work). But all that "stack smashing butt clenching fuckery" surely is much worse and less stable, so I'm not sure why one would even go that way...


Programming language elitism.


> You can put the batch script in %TEMP% and hope something will clean it up, or if you're feeling extra nice you can set up an action to get it cleaned up at next boot.

If you can set an action to get something deleted at next boot why not use that to delete your uninstaller itself, rather than adding an extra layer of indirection?


As far as I remember the thing you use to delete the file on next boot does not run arbitrary code, but is a special "rename-like" action that you can't use to recursively delete the folder of your program.

edit: https://learn.microsoft.com/en-us/windows/win32/api/winbase/...

It's the MOVEFILE_DELAY_UNTIL_REBOOT flag. Honestly, maybe you can call MoveFileEx on the entire directory tree you want to delete, starting from the .exe, and spare yourself the script; I don't see an obvious reason why it wouldn't work.


or why not just make the uninstaller a stub executable that copies the real executable into the temp folder and runs that instead?


On Windows any sufficiently advanced uninstaller is indistinguishable from malware, but so are many other things on Windows.


Kudos to MSFT for the blog post title of the year... Regardless of whether those GPU fleets came up with it or not :)


Why is the stack marked as executable? This is no longer 1995.


You could, with anti-exploit techniques (or within Windows itself), hook VirtualProtect and such functions to check that they're not making the stack executable. You could also check it on process startup. However, it won't solve much, because then you simply slap it on an executably mapped heap region and do it from there.

There are cases for allowing memory protections to be changed, and because of that, it's possible and people use it. (code-generation / JIT etc.)

The stack being executable used to be a big problem, for example in 1995, with things like stack overflows running wild, but I think those have been largely mitigated by things like stack canaries/checks stopping stack overflows from being exploited ("stack smashing detected!"). Those canaries were introduced around 1999 or so, and yes, there are ways around them (ROP/JOP etc.), but not through the simple overflows of 1995.

I am all for not allowing it, but admittedly, that's likely an impractical utopian stance on the matter, given the current use of operating systems.

Perhaps in 2995 :)


It's still entirely ridiculous that Windows doesn't allow you to delete open files.


Interesting investigation and JS. Though I'm wondering: why does Windows rely on the software to uninstall itself?


That’s kinda like asking “why does Linux rely on `curl | sudo bash`”

It doesn’t rely on it. It’s just something that’s possible.


So there's a way to have Windows uninstall a program that doesn't offer an uninstaller?

Where do I look in the OS for the manifest of all installed files from an installer?

Thanks, I'm mostly a Linux user and I've sorely missed a `dpkg -L` on MS Windows for ages.


Windows does have an official package manager now - winget. It also supports the uninstall command. Also 'winget list'

https://learn.microsoft.com/en-us/windows/package-manager/


Whilst winget looks like progress (presumably there'll be a non-CLI interface soon?) it seems like it's a tool for a limited set of applications that are allowed into a Microsoft list, not even applicable to .msi[x] in general and definitely not a way to uninstall (nor even just list) files installed by applications in general.


Running "winget list" on my system lists all installed application, no matter how they were installed.


Ah that hadn't clicked for me until your example. So basically it's something 'popular' but doesn't mean it's the right way to do it, it's just abusing a capability.


historically all windows software has either been unzipped into a folder or installed using an installer created by the vendor, so as a result the vendor has to provide their own uninstaller too. uninstallers are complex enough that the OS can't completely take their place, though Windows has shipped with an install/uninstall framework called MSI for a long time.


INF files were (are?) also available (but quite undocumented) as an install/uninstall framework.

I used to bundle an INF file in a CAB archive (converted to self-extract executable) to distribute software. Using only Microsoft's own tools.


Since Windows 2000, to be precise.


What they never shipped is understandable docs and tools for devs to create .msi installers, which caused the deluge of custom installers such as the one being discussed.


If Microsoft just made uninstalling work properly themselves maybe people wouldn’t have to resort to uninstallers in the first place


They did, vendors have to provide a MSI package, or more recently a MSIX package.

Now they aren't Apple, in telling developers "use this or get lost".


...or maybe Windows should just offer an API for marking a file for deletion once it's not in use anymore (I understand unlink semantics may not be possible, but that's not what my suggestion above is saying)


Windows does have this API, NtDeleteFile, AND it can be used to delete the currently running exe: https://twitter.com/jonaslyk/status/1345167613643661312. But it is undocumented...


I thought I'd be the guy to point out that once again mandatory file locking is to blame, but you beat me to it.

I never dug into the question, but why is it used? What benefits did it provide over the UNIX unlink behaviour?


> I never dug into the question, but why is it used? What benefits did it provide over the UNIX unlink behaviour?

How do you defragment/move files that are unreachable on the file system? How do you shrink volumes when you can't move files that need to be moved?

Edit: Actually, hmm... as I type this, I suddenly recall you can also open a file by its ID on NTFS. And you can enumerate its streams as well. So presumably this could work on NTFS if you loop through all file IDs? Though then that would make these files still accessible, not truly unreachable.
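
(That open-by-ID API does exist, by the way: OpenFileById, which takes a handle to anything on the same volume as a hint. Roughly, from memory, so treat it as a sketch:)

    // Open a file by its NTFS file ID. "hint" is any open handle on
    // the same volume; the ID comes from e.g.
    // GetFileInformationByHandle (nFileIndexHigh/nFileIndexLow).
    #include <windows.h>

    HANDLE open_by_id(HANDLE hint, LONGLONG fileId)
    {
        FILE_ID_DESCRIPTOR fid;
        fid.dwSize = sizeof(fid);
        fid.Type = FileIdType;
        fid.FileId.QuadPart = fileId;
        return OpenFileById(hint, &fid, GENERIC_READ,
                            FILE_SHARE_READ | FILE_SHARE_WRITE |
                            FILE_SHARE_DELETE, NULL, 0);
    }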


You don't? Those are either free space, or held by handle by a running process, so you just leave them be and assume they will be released sooner or later. Worst case you defragment on boot.

https://unix.stackexchange.com/questions/68523/find-and-remo...

This is how it works on UNIX. Generally better than apps randomly failing because a file(name) is held open somewhere by something.


> You don't? Those are either free space, or held by handle by a running process, so you just leave them be and assume they will be released sooner or later.

Well that's what I was getting at, it would suck to not be able to move around file blocks just because a process is using the file. That "sooner or later" might well be "until the next reboot". The current strategy makes it possible to live-shrink and live-defragment volumes on Windows - ironically, saving you a reboot in those cases compared to Linux.

But actually, maybe not - see the edit in my original comment.


I have yet to want to defrag my computer and worry about still-open deleted files.

Builds failing because I have a terminal open in a build output directory, or a text file open in an editor, is something I face far more often, and it annoys me more. (Or being unable to replace the binary of a service being developed/tested, needing to stop the service, replace it, and start again. Or failing log rotations because a logfile is open in Notepad. Or...)

Also see my link for a solution on UNIX, where you can indeed fix this problem, or simply kill the process holding the file. I haven't needed to defrag my computer in the last 20 years, neither on Linux nor on Windows, but hey, it makes me happy that my daily work is hindered for this hypothetical possibility. (Which could be, and is, solved in other OSes with appropriate APIs for the job.)

Also the original post is about Windows installers... don't get me started on the topic (or windows services), please.


I wasn't just talking about defragging. I was also talking about live volume shrinking.

> Also see my link for a solution on unix, where you can indeed fix this problem

Looping through every FD of every process just to find ones that reside in your volume of interest is... a hack. From the user's perspective, sure, it might work when you don't have something better. From the vendor's perspective, it's not the kind of solution you design for & tell people to use.

In fact, I think that "solution" is buggy. Every time you open an object that doesn't belong to you, you extend its lifetime. I think that can break stuff. Imagine you open a socket in some server, then the server closes it. Then that server (or another one) starts up again and tries to bind to the same port. But you're still holding it open, so now it can't, and it errors out.

> or simply kill the process holding the file.

That process might be a long-running process you want to keep running, or a system process. At that point you might as well not support live volume shrinking or defrags, and just tell people to reboot.

> Also the original post is about Windows installers... don't get me started on the topic (or windows services), please.

This seems pretty irrelevant to the point? It's not like they would design the kernel to say "we'll let you do this if you promise you're an installer".

> Builds failing because I have a terminal open in the build output directory, or a text file open in an editor, is something I face far more often [...]

Yes, I agree it's frustrating. But have you considered the UX issues here? The user has C:\blah\foo.txt open, and you delete C:\blah\. The user saves foo.txt happily, then reopens it and... their data is gone? You: "Yeah, because I deleted it." User: "Wait but I was still using it??!"


I have considered it. Never had any serious problem with it during 15 years of desktop Linux use on a developer machine. Grandma would not have more problems than unplugging the pendrive where the file was opened from, and trying to save it, for example... Modern operating systems have far worse and more user-hostile patterns.

And as for live volume shrinking: the kernel can solve that if there is a need for it; this invariant isn't required for the feature, and it doesn't even have to go through the same APIs offered for ordinary file-manipulation gruntwork. On UNIX, unlinking basically disassociates the filename from the inode, but AFAIK the inode holding the block list still exists and is only cleaned up later, so it can be updated if its blocks are moved underneath the high-level filesystem APIs.

You just made a strawman you are sticking to.


> Never had any serious problem with it during 15 years of desktop Linux use on a developer machine.

You're not the typical customer of Windows.

> Grandma would not have more problems than unplugging the pendrive where the file was opened from, and trying to save it, for example

Actually she would, because in that case writing to the same file handle would error, not happily write into the ether.

Also, you have one tech-savvy grandma. I don't think mine even knows what a "pendrive" is (though she's seen one), let alone try to open a file on one, let alone try to save her files on it, let alone use pen drives on any regular basis.

> You just made a strawman you are sticking to.

The only strawman I see here is your grandma using pen drives to save files.

What I'm pointing at are real issues for some people or in some situations. Some of them you might be able to solve differently, at a higher investment/cost or with hacks. Some of them (like the UX issue) are just trade-offs that don't automatically make sense for every other user just because they make sense for you. Right now Windows supports some things Linux doesn't, and vice-versa. Could they be doing something better? Perhaps with more effort they could both support a common superset of what they each offer, but it's not without costs.


'Sooner or later' means 'until the file is no longer open'.


Yes? And that might not happen until you log off or shut down the OS.


But it doesn't have to. Space is freed up deterministically, not "sooner or later".


What? Space can't be freed up while the file is in use. The process is using the file, the data needs to be there...


Using the same API that lets you move file blocks around at will.
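(Meaning the defrag control code. A rough, untested sketch, with made-up cluster numbers; the volume handle needs admin rights:)

    /* Untested sketch: relocate a file's clusters with FSCTL_MOVE_FILE. */
    #include <windows.h>
    #include <winioctl.h>

    BOOL MoveClusters(HANDLE hVolume, HANDLE hFile)
    {
        MOVE_FILE_DATA mfd = {0};
        mfd.FileHandle = hFile;             /* file whose clusters move      */
        mfd.StartingVcn.QuadPart = 0;       /* first virtual cluster to move */
        mfd.StartingLcn.QuadPart = 123456;  /* made-up target cluster        */
        mfd.ClusterCount = 16;              /* how many clusters to relocate */

        DWORD bytes;
        return DeviceIoControl(hVolume, FSCTL_MOVE_FILE,
                               &mfd, sizeof(mfd), NULL, 0, &bytes, NULL);
    }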


Huh? That API requires a file handle. Which you get by opening a file. Which you can't do because you can't find it on the filesystem when it's not there.

Edit: Actually, hmm... see edit above.


While a process still has an unlinked file open, /proc/<pid>/fd can be used to obtain a handle to the file so that you can mess around with it.
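For example (untested; pid 1234 and fd 5 are placeholders you'd find with lsof or by listing /proc/<pid>/fd):

    /* Untested sketch: copy out a deleted-but-still-open file via /proc. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[4096];
        ssize_t n;
        int in  = open("/proc/1234/fd/5", O_RDONLY);  /* reopens the inode */
        int out = open("rescued.bin", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (in < 0 || out < 0) { perror("open"); return 1; }

        while ((n = read(in, buf, sizeof buf)) > 0)
            write(out, buf, (size_t)n);
        return 0;
    }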


You're suggesting opening every single FD of every single process (which might not even point to a file, let alone a file on that volume) and querying it just to do this? I mean, sure, I guess that's usually not physically impossible (unless e.g. /proc is unavailable/unmounted)... but it's clearly a hack.

In fact, I think it's not just a (slow!) hack, but a buggy one too. Every time you open an object that doesn't belong to you, you extend its lifetime. I think that can break stuff. Imagine you open a socket in some server, then the server closes it. Then that server (or another one) starts up again and tries to bind to the same port. But you're still holding it open, so now it can't, and it errors out.


No, I'm just saying it's possible. I can count on the fingers of zero hands the number of times I've needed to do this to edit a deleted file out from under a process holding the only reference to it, so at least in my experience it's merely academic knowledge!


A locking mechanism that actually works, like in any sane OS besides UNIX.

And with it, less data corruption issues.


Their Teams uninstaller isn't quite as good - it just doesn't uninstall all the crap that the Teams malware has left behind. I still have a stray MS Teams audio device left on my macOS machine.


The Skype for Business "installer" is too good at removing stuff.

Skype for Business stops working, I run "Repair" on the installation, and the progress bar zips to completion. Skype for Business is now gone. Oh, and so is the rest of Microsoft Office.


Did you consider filing a bug report?


I didn't consider filing a bug report, I sent one.


I actually believe in filing as few bug reports to Microsoft as possible; that way they don't get any signals about what to research. Hopefully it'll waste some amount of their resources, no matter how little.


How do I uninstall Windows?


I'll let someone else figure it out, but after leaving M$ a few months ago, I love Linux. It's so fast: no annoying reboots, no annoying Edge.

Sure, Nvidia is a headache on Linux, but once you solve that, it's all fun and games.



That's not uninstalling it, though. For example, on Fedora I can uninstall Linux with sudo dnf remove linux, or even remove the package manager (arguably what makes Fedora Fedora) with sudo dnf remove dnf.

So not quite what I'm looking for :D


killdisk->wipe_disk - should allow you to thoroughly purge it ;)


Delete your System32 folder.


Windows IS an anti-pattern, change my mind.


Any Microsoft code is indistinguishable from malware


Allow me to name drop the single worst uninstaller I have ever seen:

Doxillion Document Converter by NCH software.

It has been 4+ years, and I am still manually cleaning up stuff it left behind on my machine.

Do not use anything by this company, unless you intend to reimage after.


On a tangent to the title: we meet again, Mr. WiX.


All sufficiently advanced programs are indistinguishable from malware by sufficiently dumb malware detectors.


Writing this from my macOS :)


Microsoft makes Explorer an integral, unremovable, ever-present part of Windows. Raymond Chen gets very upset when people modify it. Smells of hypocrisy to me. Let go of the de-facto Explorer monopoly and you won't have to deal with these types of problems.


This is just a report of terrible and broken hacks, not hypocrisy.


Baader-Meinhof phenomenon [1][2]

"Any installer that is distinguishable from malware is insufficiently advanced."

[1] https://en.m.wikipedia.org/wiki/Frequency_illusion

[2] https://news.ycombinator.com/item?id=37479407



