Hacker News
How Microsoft reduced Windows 11 update size by 40% (microsoft.com)
205 points by mips_avatar on Oct 28, 2021 | hide | past | favorite | 171 comments



Summary: Instead of downloading both the forward delta and the backward delta as part of the update, the system computes the backward delta and stores it on disk.


Yes, that's really all there is to it. Compute the delta from the current revision to the base and keep that delta on the user's computer.
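
In rough pseudocode, the change amounts to the sketch below. This is a toy Python illustration of the idea only, not Microsoft's implementation; the naive prefix/suffix "delta codec" here is a made-up stand-in for whatever forward/reverse delta format they actually use.

  # Toy sketch of "reverse update data generation", not Microsoft's code.
  import json

  def make_delta(source: bytes, target: bytes) -> bytes:
      # Naive codec: keep the shared prefix/suffix, store only the middle.
      p = 0
      while p < min(len(source), len(target)) and source[p] == target[p]:
          p += 1
      s = 0
      while (s < min(len(source), len(target)) - p
             and source[len(source) - 1 - s] == target[len(target) - 1 - s]):
          s += 1
      middle = target[p:len(target) - s]
      return json.dumps({"prefix": p, "suffix": s, "middle": middle.hex()}).encode()

  def apply_delta(source: bytes, delta: bytes) -> bytes:
      d = json.loads(delta)
      return (source[:d["prefix"]]
              + bytes.fromhex(d["middle"])
              + source[len(source) - d["suffix"]:])

  def install_update(current: bytes, forward_delta: bytes):
      new_version = apply_delta(current, forward_delta)
      # The trick: compute the reverse delta locally, while the pre-update
      # bytes are still at hand, instead of downloading it with the update.
      reverse_delta = make_delta(new_version, current)
      return new_version, reverse_delta  # reverse delta stays on disk for rollback

  base, update = b"hello old world", b"hello new shiny world"
  fwd = make_delta(base, update)             # built once, on the server
  new_file, rev = install_update(base, fwd)  # done on every client
  assert new_file == update
  assert apply_delta(new_file, rev) == base  # rollback still works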

It seems that because they hadn't done that until now (keeping the delta to the base locally), they distributed these same otherwise locally computable deltas to every Windows computer in the world with every update!

If I understood it correctly, they patented that "approach" and wrote this much (or more) to make it appear more clever and worthy of a patent?

"1] The approach described above was filed on 03/12/2021 as U.S. Provisional Patent Application No. 63/160,284 “REVERSE UPDATE DATA GENERATION”"


Seems like a very simple change.


Most changes seem simple when you think about just the change itself, but at the scale of Microsoft (I haven't dealt with that personally), things get very complicated, very quickly. Consider all the different setups people have, what can go wrong and so on, and suddenly something that seems simple could actually have a huge impact on a large number of people/setups.


Are you a product manager?


Oh my....


The first idea -- generating the reverse delta while applying the forward delta -- seems obvious. The second idea is clever, but Chrome did it in 2011: https://news.ycombinator.com/item?id=2576878

Even if the ideas aren't novel, getting this thing built and rolled out for Windows is not at all trivial, so good job there. But calling it "new compression technology" is misleading.


For the second idea, updating references when the code moves is more directly used in a newer Chrome patch format: Zucchini [1]. Courgette does a complex process of disassembling, aligning labels, and sending a patch of the disassembly, so the patcher has to do disassembly, patching, and reassembly. Zucchini does a more straightforward disassembly pass to note where the references are, and then pre-patches them based on how it knows the code moved. Compared with Courgette, this involves less intermediate state so it's faster [2], the code is simpler, and because much more is implicit in the code motion the patches tend to be smaller.

That basic technique goes back at least to the TranslateAddress method in Exediff (1999) [3].
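
Roughly, the reference pre-patching works like the sketch below -- a toy Python illustration of the general Exediff/Zucchini-style idea with a made-up data model (real formats discover references and encode code motion far more compactly):

  # Toy sketch of reference pre-patching, not Zucchini's actual data model.
  # Pretend references are 4-byte little-endian absolute offsets stored at
  # known positions in the old image.

  def translate(offset, moves):
      # moves: list of (old_start, new_start, length) describing code motion
      for old_start, new_start, length in moves:
          if old_start <= offset < old_start + length:
              return offset - old_start + new_start
      return offset  # target wasn't in a moved region

  def pre_patch(old_image: bytes, ref_positions, moves) -> bytes:
      # Rewrite every reference so it already points where its target will
      # live in the new image. A plain byte-wise diff of the pre-patched old
      # image against the new image then only encodes the "real" changes,
      # not the churn that code motion causes in every call/jump target.
      patched = bytearray(old_image)
      for pos in ref_positions:
          target = int.from_bytes(patched[pos:pos + 4], "little")
          patched[pos:pos + 4] = translate(target, moves).to_bytes(4, "little")
      return bytes(patched)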

I've been meaning to write up an article on this stuff [4]; Google doesn't seem interested in publicizing Zucchini themselves, maybe due to the patent kerfuffle around Courgette. Microsoft's document on delta compression [5] covers a lot of this ground.

[1] https://chromium.googlesource.com/chromium/src/+/HEAD/compon...

[2] Some of this can be avoided; I made some changes to Courgette for a significant speed increase here: https://bugzilla.mozilla.org/show_bug.cgi?id=504624#c39

[3] http://robert.muth.org/Papers/1999-exediff.pdf

[4] I did write up a bug to consider Zucchini in Firefox, with patch size comparisons, but ultimately we didn't switch from the simple power of bsdiff: https://bugzilla.mozilla.org/show_bug.cgi?id=1632374

[5] Their system can use info from the pre-linked objects and PDB symbol files for better alignment, I'd played around with seeding alignments like this in bsdiff and Zucchini but I don't recall it giving significant improvement. https://docs.microsoft.com/en-us/previous-versions/bb417345(...


I desperately wish Linux distros used this trick too. I am tired of waiting for updates to download and having to clean out some cache folder that apt uses to free space.


I cannot remember when a Debian update was slower than a Windows update. And that does not even mention Visual Studio updates, which pull gigabytes of data. Debian updates are very fast.


Detecting and installing a Debian update is much faster. But as they are much bigger, whether this is offset by the download time depends on your bandwidth.


Nowadays the visual studio updates are quite small for my workload with C++ tools.


But at 50 to 100 MB/s on a multi-Gb/s line, I find it's less the download than the time to install that bothers me.

Interestingly, I found myself using ultra-minimal distro VMs, installing nearly nothing on them (Qubes OS), and just sharding the apps: the startup/update of the main GUI Linux is trivially fast, each VM only has a little subset of apps I care about, and when I trigger a many-VM update they do it in sequence and just update their own subset; meanwhile I can fully use the other VMs.

Also I tend to use stable-like distros now, like Debian, and barely see updates coming in anymore, compared to my Fedora youth when there were several updates a day, some breaking everything.


> Interestingly, I found myself using ultra-minimal distro VMs, installing nearly nothing on them (Qubes OS), and just sharding the apps

Isn't this basically the idea behind snaps? Or is that more docker-like?


Last time I looked (years ago), Fedora had delta updates - but updates were slow anyway.


That's more an apt thing; dnf can use delta RPMs with parallel downloads and automatically cleans unused packages. I'm not sure how well it cleans the cache though.

On my fast home connection dnf is way faster than apt to update my laptop, unless it's a really small update, in which case apt is faster because it refreshes the repos quicker.

I think there are alternatives to apt that can do parallel downloads and delta as well.


RPMs have delta updates


For what it's worth, Solus has used deltas since 2016. Updates tend to be amazingly fast. Shame that everything else in it is slowly going downhill.


I think a lot of this work is being done in projects like rpm-ostree


Yep, ostree is doing delta updates by default. Flatpaks are also stored as an ostree, so they do as well - even across all runtimes and flatpaks installed on a system! :)


openSUSE (excluding Tumbleweed) has had delta RPMs since 2005.


FWIW, RPM based distros can use delta RPMs


Have they improved the way the system figures out its inventory in order to know what patches it needs?


Windows NT (as a usable thing available to the naive general public) didn't exist before Microsoft built it, so it was technically correct to call it new technology technology. In the same vein, this is arguably new compression technology.

The above being said, my reaction to the article's key point was "you've been shipping forward and reverse diffs this entire time???"

I guess my point is that it's kinda sad that it's most correct to generally describe things like this as "new technology", because customers who've never seen something like it before don't apply any distinction between that and the idea having existed for however long. </rant>


Well of course the naive public doesn't make a distinction as it would be completely irrelevant to them. Can't use it now? Then it's a theory. A new thing I can buy today? Now that's innovative technology to them.


I always thought that it was just VMS done poorly


The bits based on VMS (or more specifically the MICA project) were done quite well, it's all the Win32 crud on top which causes the problems.


The NT kernel is a nice design especially the (original) support for different flavors of syscalls to support both Windows and POSIX (and potentially others).

A lot of that got lost over the years.


Sometimes I feel like Microsoft should just put Windows into an image all by itself. One big file (VHD or similar). Then each application gets its own separate image file. After all that, have some kind of funky image mounting thing that merges them together in some filesystem view. Returning us to the good old C drive we know. User data lives separately and politely away from the OS and applications. There might even be an RW image for the OS, so that's not even in the OS image itself.

Advantages? Just download the image. Could even delta from old to new in multiple ways. Easy revert as well. Security would then come from the usual deal around privileges but there's the possibility of exotic new approaches. Automatic revert could be a thing as well. OS images could actually have a version. Kind of like a ROM.

Disadvantages? Disk space. Filesystems have come a long way though. Plenty of tricks around that. But I'm sure there are plenty of other downsides. Don't corrupt that file.

You can already boot VHD(X?) files stored on, e.g., the C drive, so this isn't actually impossible.

EDIT: These images wouldn't be unpacked on each boot. There would be files inside the image and the image is mounted as a filesystem. This is old tech now in the 2020s and it's definitely not exotic. If images really are too much overhead then change the word "image" to "partition", or some combination thereof (e.g. an image for each app). But in reality, I'm not convinced the overhead is that great. Disk encryption uses significant processing power already, so accessing a filesystem from an image isn't that great a leap.



One problem I see with this is that even though the ISOs we download are about 5 GB, they get unpacked into 15 GB+ worth of files. Thus we are either going to have 15 GB+ images floating around to avoid unpacking them on each boot, or each boot becomes 2 to 5 minutes to unpack, and all those writes will make the SSD sad.

I wish Microsoft would just go make a new OS from scratch that behaves more like other OSes (file/directory slashes, RGB formatting, fonts, etc.) and that has the core of the system immutable while it is running. Get all user data onto a separate partition. Have a root user + password by default. Have a fancy package manager like apt/dnf (through which ALL software can be updated), have a fancy bootloader menu where you can install system updates from, strip out all language packs, drivers and other useless features (Xbox, weather app, phone app), have a new terminal (get rid of cmd + PowerShell and start from scratch), rebuild diskpart...... the list goes on. Make it lean, fast, and make the choice to ignore backwards compatibility with Windows. Don't even call it Windows for that matter.

Having all that, you can have an isolated, immutable Windows in less than 2 GB. All of the extra partitions can exist as VHDs that get mounted at startup; that way you can copy an entire environment by copying one file.


>drivers

Network drivers are always useful.


Yeah, there is an irony that in attempting an efficient model of updates, they have something that is often dog slow and not even that small to download. If you separate things like library content, you could easily package most things as single archives like macOS does, and you know that you have something that works. You could probably also split c:\windows into, say, 10 different things so you wouldn't necessarily need to replace all of them at the same time.

Alas, the tech debt for MS is enormous, so unless they had the ability to create a new OS with no backwards compatibility (except most of the WinAPI if possible) then we have to live with the pain.


There's wimboot mode [1], but it's only for the initial install, updates are still file based.

[1] https://www.howtogeek.com/196416/wimboot-explained-how-windo...


This is kinda what you get from Windows on Qubes OS.


VHD is too expensive for something like this, and you don't need to virtualize the whole disk; a filesystem is enough.

They could achieve something like this but on boot level:

https://sandboxie-plus.com/


That's pretty much how classic Mac OS used to work until Mac OS X.


This seems… obvious? I am rather surprised they didn’t do that before. Am I missing something?


> Not regress install time.

They seem to achieve the size reduction by transmitting only the forward upgrade patches and letting the machine generate the downgrade patches during the upgrade. How do they manage to keep the install time the same while this definitely uses more resources (CPU, IO)?


Maybe they generate the downgrade patches as the upgrade patches arrive - but before they're applied? During that period one would expect the network to be the bottleneck in all but the most extreme circumstances. [edit] And it should be possible to do incrementally.


I wonder if perhaps "install time" contains "download time", and by taking some average internet connection speed, they're estimating that they save more in download time than they cost in actual install+downgrade patch time?

This would mean, if you have a fast link, the experience is somewhat worse, but if you have a slow link, the experience is better.

Just a total guess though.


My guess is that to install an update, Windows has to apply each little patch to random files scattered across the OS, necessitating a file read and write for each little change. But generating a reverse delta should only take a transform of the update file, which I imagine is much less complex.


>How do they manage to keep install time the same

Do they? Or are Windows updates going to get even slower?


I knew it actually wasn't going to be "removed all spyware and advertising", but I'm still disappointed nonetheless.


Let me translate that for you in a way that makes more sense for Microsoft:

> removed major revenue sources


in the case of Microsoft we've proved that even if you pay for the product you're still not the customer.


it still puzzles me, who’s buying? how does microsoft turn my user statistics into cash?


I don't think they monetize telemetry directly; it's more valuable for them if they control access to it tightly.

But the original post was complaining about ads, too :-)


Some years ago Microsoft was spreading FUD about Linux. Now Linux users do the same about Microsoft.


That comment came from a Windows 10 machine, but ok :-))


It's not their size that was the problem. It's the frequency, and the fact that they take a long time to install and most of them seem to require a reboot. And sometimes they even reboot the machine when you don't want them to, potentially losing work.


The biggest problem for me is not the download size but the long installation time. Hope Microsoft can solve that point as well.


Maybe it just shows their overwhelming focus on the cloud, but I don't see how anyone at Microsoft can have looked at their OS update system (particularly for Windows Server) and thought "Yup, that's acceptable."

Our Windows Server 2016 VMs take over two hours to apply the monthly security patches, even on a basic web or application server. The Sysadmin subreddit had a thread recently where some were claiming some of their 2016 machines take over four hours.

Not only are our Linux servers dramatically faster to update, there's also more information displayed on progress for those inclined to look at console output. This leads to a lot fewer awkward pauses where you try to work out if it's permanently stuck or just thinking.


Are you patching all your machines at the same time, overloading your SAN? Are you running webroot or similar horrifying security software?

We generally see <5 minute downtimes on all our Server 2016 and 2019 VMs, never more than 10 minutes.


Which is still way more than most other OS updaters need to download.

Also it's incredibly slow. Updating a bunch of computers with an i9, 64 GB of RAM, NVMe and a gigabit uplink from an older ISO install to a recent build still took half a day and 4 restarts on average.


Jesus, my updates take < 30 seconds on Linux, though I update frequently. But even on my systems where I don't upgrade for over a year, they only take about 10-15 minutes.


* by deleting the Xbox game bar

Just kidding - impressive how much you can save in terms of size.


For people with a fairly decent internet connection, more often than not the compute and file copying take way more time than the downloading.

While it is important to reduce file size for the benefit of people with slower connections, I am wondering if there are ways we could trade a larger download size for a much faster update overall.



Now can they reduce the installation time? It always surprises me how I can update everything on Linux in a matter of seconds, but Windows takes exponentially longer.


I first read about this address-map-diff (instead of binary diff) compression in the context of Chrome updates, and how they reduced binary update size by 90%.

Guess Chrome updates still use this compression.


Ok cool now make it not bug out on simple things like night light being stuck to on even after I disable it. Best solution from the web was - "here try deleting these registry keys" (didn't work), "try this other weird unrelated thing hidden somewhere" (didn't work) and finally, from the community forums "reinstall windows I guess lmao" like what the fuck??? I was trying to disable this one setting and now I need to spend hours debugging this or reinstalling the entire OS? But hey solving cool engineering problems is more fun than fixing bugs I guess..

Windows, not even once.


How about they reduce the bloatware and spyware that is Windows 11 by some percent? And how about not taking control of my computer to brute-force updates?


Initially I read the headline as "How Minecraft reduced Windows 11 update size by 40%" and I was so intrigued. Oh well.


This is a great method overall, they definitely optimized the total size needed for the update.


Wish Apple would do something for macOS; every little security update is over 2 gigs...


So.. not compression, just gitting updates? Am I missing something?


I was looking forward to something about how they re-thought architecture and bloat, and had the self awareness to contemplate why Windows is so much bigger than before while at the same time not doing that much more.


The only thought-out architecture in Windows is telemetry. Everything else is patched.


Too bad they didn't make it easy to block OEM spyware from Windows Update. I bought an HP at Costco a while back that was on sale... reinstalled Windows using a clean install disc image, installed the latest drivers... next thing I know Windows update starts rolling back my drivers and installed HP spyware. I had to set a group policy option to prevent drivers from Windows update. I remember at one point doing a fresh install was the way to prevent OEM spyware, but now it seems like Microsoft is happy to include it and make it difficult to prevent.


But what you call "spyware" can be beneficial to other people. I for one welcomed it when several OEM apps were automatically installed through Windows Update, because my laptop has some features accessible only by those OEM apps.

That said, I recognize that it might be better to give users a choice. But then again, isn't a group policy exactly designed to tackle this problem? Good defaults for normal people, and customizability for power users.


Recently purchased a used HP workstation and did a clean Windows install. Not only did I get all the HP stuff installed automatically, it even changed my desktop wallpaper to an HP one, which was then synced to any PC that uses my Microsoft account. Wallpaper may not be important, but it's not something you'd expect.


Group policy wouldn't tackle the problem for everyone, as GP is only available in Pro editions and what not. You have to use registry hacks for Home. And no, they should have an option during install. Even if you enable it after install, it is usually too late since it'll start installing drivers during install and after... you'd have to stay disconnected from the Internet to prevent it.


Did you wipe the drive? If not, there is probably a hidden OEM partition that Windows uses to install all that OEM stuff back after fresh installs.


ACPI tables from the system ROM can give Windows a list of URLs to fetch and install.

This is useful when the OEM isn't being a jerk. They can be sure you've got the drivers for the touchpad or whatever, regardless of how you install... But they don't always use their powers for good.


Aaaaa!

What's the table name, and can I read it from Linux?


It’s not a list of URLs, but an EXE outright.

The ACPI table responsible for it is WPBT.
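
To the Linux question above: the kernel exposes raw ACPI tables under /sys/firmware/acpi/tables/, so if your firmware ships a WPBT you can dump it from there (root required). A quick check along these lines:

  # Check for (and dump) the Windows Platform Binary Table on Linux.
  # Raw ACPI tables are exposed under /sys/firmware/acpi/tables/ (root only).
  from pathlib import Path

  table = Path("/sys/firmware/acpi/tables/WPBT")
  if table.exists():
      data = table.read_bytes()
      print(f"WPBT present: {len(data)} bytes, signature {data[:4].decode()}")
      Path("wpbt.bin").write_bytes(data)  # inspect the embedded EXE offline
  else:
      print("No WPBT table - this firmware doesn't ask Windows to run anything.")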


Yes, I wiped the drive. I didn't use any OEM partition to reinstall. I used the latest Windows ISO, including the October updates, which comes directly from Microsoft. This is a known issue that is well documented. You can find the HP stuff in the Windows Update catalog, as well as stuff from other OEM providers.


> I didn't use any OEM partition to reinstall.

If you wipe, it's OK, but if you don't, the partition may be used by Windows, even if you install fresh from an image you prepared yourself.


This reads like dealing with a hardware compromised machine. At some point, why even bother?


From what I gather, Microsoft gets OEM info from UEFI and lets OEM manufacturers use that to provide software/drivers via Windows Update. In this instance, it kept rolling back the integrated Radeon driver directly from AMD to one that the OEM provided that was over a year old.


> At some point, why even bother?

Because a lot of people:

  - don't care about their privacy and spyware (a non-insignificant chunk of the populace, even i'm growing weary after decades of surveillance attempts)
  - don't know any better (perhaps the majority of the consumer base)
  - only have certain software available on Windows (e.g. MobaXTerm, or have something like Sony Vegas fit their workflow and don't have the desire to change)
  - don't know how to use the alternative OSes well, or have bad experiences with them (Linux driver installation, anyone?)
  - feel like that's a fair tradeoff to make for being able to run their favourite entertainment software or video games (Proton will probably eventually be good enough, but doesn't yet have 100% coverage, which currently isn't enough to sway everyone)
Personally, i just run a dual boot setup with Windows 10 (since it seems marginally less horrible than 11 for the time being) for playing legacy games and using the software that i like, and Linux Mint (or Ubuntu, or Debian, depending on the mood/year) for development stuff.

Honestly, Linux is probably the better platform for many other things, everything from privacy to development (how it works with Docker and other *nix software is especially nice, no need for weird mapping, WSL/WSL2, MinGW or any other weirdness on Windows), but for a variety of reasons the adoption remains low.

That said, i actually recently wrote about some of my frustrations as a part of my blog article, "On technological illiteracy", which ended up talking a bit more about the driver issues and hardware support: https://blog.kronis.dev/articles/on-technological-illiteracy

Let's just hope that things continue improving in the coming decades!


That's how I treat any new device: compromised by the manufacturer.


Xcode could learn a lot from this.


Can anyone explain why the windows update process is so freakin slow?

It just sits there scanning for ages, and then installs are really slow as well. Downloads seem fine. Is it not just asking for updates since x?


Yeah, honestly, I'll believe it when I see it when it comes to faster updates on Windows. Whenever I have to update macOS, even on a machine that's way, way behind, it's super simple and straightforward. On Windows, there's always hours of pain and installing endless updates over and over. Windows Update never seems to finish updating. Right after I finish installing everything, boom, it's found more updates. It's just absolutely horrifying and I wouldn't wish that on my worst enemy.


> Right after I finish installing everything, boom, it's found more updates.

I've always thought that's because the newest ones are dependent on some of the second-newest ones, so it couldn't install them before the dependencies were installed. And the reason it throws those at you immediately, "boom", is that it hasn't really "found more updates" after installing the first set; it knew about these ones too, from looking through the list it made for the first update, and just held them back for a second round.

Just my speculation, though, so could be totally fucking wrong.


On Windows an admin can selectively apply individual updates, which isn't possible on macOS, I think. The core algorithm that calculates the dependencies is horribly slow. They have been working around that with update rollups and cumulative updates since Windows 10. There are multiple layers of snapshotting going on during updates (VSS + Transactional NTFS).

Then for example "Update telemetry" scans all applications on the system and only gives you the option to upgrade to Windows 11 if it doesn't find anything incompatible. On Apple you just get the option to upgrade and stuff just stops working afterwards.


> On Apple you just get the option to upgrade and stuff just stops working afterwards.

Granted, they host an "incompatible apps list" that their installers are bundled with (and download updates to if they can). I have no idea what it's for or what it's doing; I've never found a thing on my systems.


Microsoft's done away with "individual updates". The current update for Windows 10 is "October 12, 2021 Update" and aside from a few exceptions such as CPU microcode updates, it bundles every previous update.


Which certainly is easier to coordinate as a company. Part of the reason Apple does entire OS patches is that it allows every aspect of the system to have a consistent view of every other part, so APIs (private or public) are all in sync between all the apps and components. It also means full QA passes are for a consistent set of versions, as unexpected bugs can impact far-away parts.


That process solves the problem of only wanting some updates. The new way, you get all or nothing. The situation that you bemoan is exceedingly rare by comparison to updating in general. The point is probably moot though in light of the changes to Windows Update over the years. There are plenty of reasons to hate on Windows; I think update is one of the few things that works reasonably well. The contents of those updates is another story.


Because you're still using Windows 7


Last time I got frustrated by Microsoft update time, I created a PR for their update documentation, describing how long it took, while I was sitting there watching it. They merged it.

https://github.com/MicrosoftDocs/OfficeDocs-Exchange/pull/27...


That genuinely made my morning.


On what version of Windows?

Windows 7 and earlier had some issues where, as the size of the update catalog grew, the updater would start taking much longer to "scan for updates". This was eventually fixed in an update to Windows 7 -- but this still meant that you'd be waiting a very long time to install the first few rounds of updates to a fresh 7 install.

That being said, Windows 7 is EOL. If you're still using it, please stop.


Windows Server 2019, for a VM running on a server that had nothing else to do, PCIe 4 nvme SSD, over 10 minutes waiting for windows update to do its thing on reboot.

I don’t mind so much that it takes time checking, downloading and installing updates, because that’s not downtime. My beef is with what happens once you reboot.


I have a new PC with a NVMe drive and Windows 10. Everything is fast on it, but a ~300MB monthly Windows update can easily take 2-3 minutes, plus a reboot. Usually a software install of this size would take no more than 20 seconds.

This has been my experience with Windows updates for the last 10 years or so, on different PCs and different Windows versions. It's just slow (and it's definitely the installation phase, not the downloading).


Good old accidental exponential time complexity. Not caught in the initial release as there were not enough pending updates to notice the issue.


And of course nobody's going to make the effort to internally test with super outdated installations (although a VM snapshot or two, or a new install, would've made it simple enough). Good point.


Quadratic is the worst IME. It will get through moderate, perhaps even large, test suites, sit in prod quietly waiting for the data set to grow, and only then blow up.


And no tests.


Windows cannot overwrite files which are in use. I guess this complicates the issue a bit. But they are working on GUI "improvements" which nobody asked for, so it will take a while until they improve the situation.


It can rename files that are in use though.


Last time I looked at the update performance in Windows 10, the issue was that the installers for each of the patches replaced files sequentially; that sequential execution in turn got delayed by single-threaded antivirus.

It could be a lot faster if they parallelized the file operations or deferred the antivirus.


I believe that BigTech started releasing huge updates to normalise having an always-on internet connection during updates and normalised the slow update process for the opportunity it provides them to mine and analyse all data on your computer. Often an update is the only time when they can (and do) reset and toggle all the privacy settings back to their favour, and then do on device analysis (like collecting meta data of all your files or analysing your photos etc.). They can then send the data back to their servers to add to your profile they have on you. That's why I always make it a point to immediately switch off my router after downloading an OS update and starting an install.

(Both Windows and macOS spend a lot of time in indexing your disks and doing all sorts of on-device "AI" analysis to provide some specific feature. And both collect a lot of your data for this.)


i think this is not far fetched

what linux distro do you recommend?


After several years ... I have come to the conclusion distro wars are pointless.

These days, I just have an old Xubuntu installation that has been consistently upgraded since 16.04, and it just works. Some may consider Ubuntu clones to be "beginner's distros"; ultimately, though, I like how it gets out of your way, apt works well, and I can focus on being productive.

Doesn't mean you have to use all they do. I happily work emacs-based and in i3...


thanks for that, it’s been years since I tried xubuntu and “just works” is a big selling point.


I prefer FreeBSD and Debian.


When I noticed installing or upgrading software was slow, and checked Resource Monitor, in 60% of cases it was Windows' built-in antivirus consuming all the CPU time.


Yes, I have noticed that too. But the windows update process itself also sits there for minutes using 60%+ CPU.


Even installing...just doesn't make sense why they're so slow.

I have a pretty respectable system. i9-9900K, 32 GB of RAM, and Windows 10 is installed on an NVMe that can read/write at up to 3 gigabytes/sec (Yes, gigaBYTES!).

A 200 MB update should take a fraction of a second. Even if it's actually applying a delta and not just simply overwriting files, I can't imagine it should take more than a couple seconds.


This is a massive problem for large Windows hypervisors. We have to shut down around 20 VMs for about 40 minutes for the host to reboot after updates.

We are looking to move to a Linux host instead but that is a big up-front cost to get the same size machine, move the VMs across and remove the old host.


Live VM migration is a thing on Hyper-V and host maintenance is its primary use case (as well as load leveling)


It's gotten a lot faster recently. What's still slow, though, is macOS's updates.


Historically there were CPU intensive operations, I assume to trade compute time for bandwidth. I am not sure how much of that still exists though since updates have become all-or-nothing blobs.



Exactly. I can update linux in a matter of seconds, the longest part is downloading the updates.


My thought exactly. I would take a 2GB monthly update with a simple swap of the assemblies that would take 2 seconds and a reboot over the current agonising update process.


Honestly, only kernel updates should require host restarts. Linux and BSD have had this for ages. Maybe have a flag that security updates use to flag any process using a vulnerable DLL as needing an eventual restart, and then have an administrative setting for restarting vulnerable processes after a certain period of time, at a convenient hour.
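
For what it's worth, the "which processes still need a restart" detection already exists on the Linux side (needrestart / checkrestart work roughly this way): after a library is replaced on disk, any process still mapping the old file shows it as "(deleted)" in /proc/<pid>/maps. A rough sketch of that check:

  # Rough sketch of the needrestart/checkrestart idea: list processes that
  # still map a shared library whose on-disk file has since been replaced.
  # Reported paths keep the kernel's " (deleted)" marker.
  import os

  def processes_needing_restart():
      stale = {}
      for entry in os.listdir("/proc"):
          if not entry.isdigit():
              continue
          try:
              with open(f"/proc/{entry}/maps") as maps:
                  libs = {line.split(maxsplit=5)[-1].strip()
                          for line in maps
                          if "(deleted)" in line and ".so" in line}
          except (PermissionError, FileNotFoundError):
              continue  # process exited, or not ours to inspect
          if libs:
              stale[int(entry)] = libs
      return stale

  for pid, libs in processes_needing_restart().items():
      print(pid, *sorted(libs), sep="\n  ")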


> Linux and BSD have had this for ages.

Fedora/GNOME prefers to install updates during a reboot. So although most updates can work this way, it's not always the case.

There are some minor cases where a reboot is better, or where an update requires a bit of work. E.g. Firefox updates, but also I sometimes find that video playback suddenly breaks (not sure why; kernel/mesa/something). Only a reboot seems to fix the video playback, and it happens in various applications once it breaks (Firefox, mpv). I install the updates via dnf via cron/systemd (forgot what I did), so not with the suggested reboot and so on.

I do appreciate the Flatpak bits, those update easily.


How would this work? For example there was an update in libXYZ and the office suite, browser, audio player and some system services need to be restarted to use the new version. Obviously the system can't just quit those applications, after all they are in use and there might be unsaved data. So should the users get a popup for every app that needs to be restarted? And what about the system services? E.g. when the display session needs to be restarted, all open applications would be lost as well, or restarting the audio server might mess with an ongoing audio recording.

Wouldn't it be so much more reliable, faster and simpler to just install those updates on the next restart?


Presumably the user would get a prompt telling them they need to log out and log back in to pick up security updates, with a "details" button that shows them which applications actually need to be restarted if they don't log out/log back in.

It's certainly simpler for the developer to pick up the updates only upon restart, but it's sometimes very inconvenient for the user, and probably slower to pick up the actual updates. For instance, I run a vanity domain from a Linux device at home. It would be inconvenient to run Windows on that machine and have to restart it following monthly patches or following emergency patches.


Interestingly I noticed KDE Neon went the Windows way, and now requires reboots for all their updates.

Before this the experience could be quite fragile post-update, so I tended to always reboot right away anyway.


Note that Linux can do kernel updates with a reboot.


I think you meant to say _without_ a reboot, no?


Yes :)


Windows used to support rebootless kernel updates too, but for some reason they stopped doing it.


kexec is for all intents and purposes a reboot though, if that's what you're thinking of. Granted, you skip the EFI/POST part but you still lose all of your state.

Otherwise, are there any distros regularly using ksplice?


Do you have a spinning rust disk?


It's slow even on my PCIe 4 NVMe drive. I boot into Windows once a week or two and I often see it spending more than 5 minutes installing a single update package. This is even slower than apt and dpkg, props to them.


Your installation of Windows: is it by any chance the installation that came with the computer? Or is it a Windows that you have upgraded from a previous version (and perhaps that was upgraded from yet another previous version)?

I never have these slowdowns that I read others have. I always do a clean install from a USB stick (including deleting recovery partitions). I do this like once a year or every 1 1/2 years, when something is being released that is either a new version or what we used to call a service pack.

Windows sucks at upgrading. There is always something strange going on after upgrading. Things that don't happen after a clean install.

If it is a looong time since you have performed a clean install, I recommend you consider doing that.


I installed last year when I got the computer and drive, and it's still pretty clean since I don't really use Windows these days. Or was clean until I upgraded to 11 last week.

But this has always been my experience. I remember reinstalling Windows 7 and waiting hours for Windows Update to finish. Even checking for updates could take 20 minutes.


Yeah, endless hours spent waiting for Windows Update. It is not so bad anymore.

The only thing I can think of, comparing your setup to mine, is that I rarely turn off my computer. If you only use yours very infrequently and turn it off when not used, I wonder if Windows is doing all kinds of maintenance jobs every time you start your computer, since it is off most of the time, and then when you manually hit Windows Update it gets a bit busy working out the correct state of your computer.

Those jobs are, on my computer, distributed over days/weeks, but with you they run every time you turn on Windows, since it couldn't run them at their scheduled frequency while the computer was off.

I am just speculating, but maybe try to turn on Windows once in a while and leave it overnight, to test if this improves things.


Not to diminish package managers, but I don't think they have to be quite as clever as Windows Update, at least historically. I could be wrong, but I think they are more like dependency tree managers that also download and execute new installers.


No, they aren't clever. They're practical. There's no such thing as execution here. Packages are archives containing the entire files, not differences. The manager downloads it, unpacks it, and adds/removes/replaces the package's files in the system.
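
In essence it's just this (a simplified sketch, not how dpkg or rpm are actually implemented; writing to a temp file and renaming makes each individual file swap atomic):

  # Simplified sketch of "unpack the archive, swap files into place";
  # not how dpkg/rpm actually do it. os.replace() makes each per-file
  # swap atomic: write the new file, then rename it over the old one.
  import os, tarfile, tempfile

  def install_package(archive_path, root="/"):
      with tarfile.open(archive_path) as archive:
          for member in archive.getmembers():
              if not member.isfile():
                  continue
              dest = os.path.join(root, member.name.lstrip("./"))
              os.makedirs(os.path.dirname(dest), exist_ok=True)
              fd, tmp = tempfile.mkstemp(dir=os.path.dirname(dest))
              with os.fdopen(fd, "wb") as out:
                  out.write(archive.extractfile(member).read())
              os.replace(tmp, dest)  # atomic per-file swap into place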


They can also execute post-install or post-update scripts. And indeed, Linux package managers aren't as fancy as Windows Update is, but that fanciness isn't something I miss.

My distro's package manager probably breaks my system if it loses power during an update. Even so, it's been one of the more reliable and pleasant to use programs I've encountered.


That doesn't seem right.

I also have a PCIe NVMe boot drive; typically I don't even notice updates until I go to shut my PC down in the evenings and Windows tells me it'd like a few minutes to do the updates.

Same difference to me, I was shutting it down and walking away anyway.


I notice them because I intentionally open Windows Update and install the available ones.


Windows updates are released once a month.


No, but where I notice it is on my work computer that is admittedly loaded down with corporate security nonsense. My personal computer doesn't have nearly as much of an issue, but on the other hand it's an always on desktop so it's doing most of its windows update work at 3 in the morning.


Interestingly, I still use magnetic drives as a second drive on my laptops for storing data. Unfortunately most modern laptops have only one drive (NVMe), excellent for performance and less so for durability. And you'd better not keep it off for a couple of months.


Interesting point on culture that most companies have tech blog titles in the form of "How we did X" but Microsoft's is "How Microsoft did X"


It’s a better habit in my opinion, especially for the benefit of people who read the headlines on an aggregator.

It immediately tells you who did what. I hope more companies follow this convention.


Maybe it's because they know some sites will use the title as-is?


It got posted to a community forum website also heavily frequented by Microsoft users and partner companies. That forum may have posts about customers or partner companies, so it makes sense to refer to Microsoft proper in this way.

Note that this isn't the main Microsoft blog, and in fact there are lots of different blogs.


The whole article refers to Microsoft as a third party. If it was a more marketing-related topic, I'd assume it was indeed written by a third party.


After the first section most of the article talks in terms of "We"


no issues with attribution on sites like HN which greatly prefer original titles in submissions this way



True. I thought the author was from outside MS, but he is a program manager at MS


>Interesting point on culture that most companies have tech blog titles in the form of "How we did X" but Microsoft's is "How Microsoft did X"

That's because Microsoft isn't a technology company. Microsoft is a marketing and sales company that happens to sell technology.


>That's because Microsoft isn't a technology company. Microsoft is a marketing and sales company that happens to sell technology.

As someone who worked at Microsoft in a technical role, I know that's the truth.

You can lie to yourself if you want, but it's still true


Microsoft refers to itself in the 3rd person?


Off topic, but if you have young kids you might find yourself starting to say things like:

  - Dad needs you to eat your food!
  - Dad is going to brush your teeth!
  - Give those scissors to dad!
  - Don't touch dad's laptop!!!
  - Dad needs to sleep now...
Soon you may be asked by friends why you sometimes speak of yourself in third person, long after the little one started understanding pronouns. You might also recover some sleep.


I hadn't really given the phenomenon much thought. You're essentially saying to the kid: you need to respect the "ideals" of what Dad represents and what a "Dad" in the abstract would expect of you in this situation. Sort of asserting your dominance through an appeal to a concept of an authority figure instead of you as an individual. Why is "I need to sleep now" potentially less effective?


Not OP -- but parent of three.

The primary reason for using specifics ("dad", <name of child>, <name of sibling>, ...) is that pronouns are one of the last things a child will pick up. Think about how a child will always hear themselves referred to as "you" -- and end up assuming that "you" is like a name for them and start using "you" instead of "I"/"me".

In reality, you will be using both specifics and pronouns, and often repeat the same sentence with both versions to teach that they are identical.


Very interesting perspective, thanks! I have no kids but I'm the oldest of 4 siblings (all boys...).


Thanks, you explained it perfectly!


It also might happen that you call your wife "mom" with your friends, out of context from your kids.

Speaking for a friend, of course.


I love that there are so many Dads on this site.


There is no we in Microsoft.


…but there is in Wendows?


There's an "oso" in Microsoft. Is that a reference to Ballmer?

Of course, Linux mailing lists used to joke that "micro" and "soft" reference Bill, back when people had thick skin and cancel culture had yet to be invented.


> There's an "oso" in Microsoft. Is that a reference to Ballmer?

Does he have a beard nowadays? He didn't use to, back in the day -- and aren't they almost mandatory for "bears"?


I reduced the update size by 100% using Linux, hehe


I upvoted you, because you know what: you're right. I shake my head at all the tomfoolery that Windows users are expected to go through.

Seriously, Microsoft blather on about compelling experience this, rich interface that, but the whole thing seems a nightmare to me. What's more, it's been "normalised". People go through all this garbage and they rave about how good Windows is.


Any concrete complaints?


I, for one, hate how disregarded the system makes me feel. The experience is much closer to a remotely managed computer, like the workstation I have at my day job, than to using a personal computer.

This applies to how the updates are handled, the telemetry stuff in general, the way the Control Panel and other settings are arranged, and so on. It's hard to pinpoint one specific thing, because the whole system is steered in this direction.


Wonder how delta RPMs do this; I am a Fedora user.

This must be how delta RPMs work, right? Because why would they contain backwards data? They only contain new data, and the diff between old and new data.



