Did Windows 10 slow down with each feature update? (ntdotdev.wordpress.com)
406 points by goranmoomin on June 21, 2021 | 421 comments



Disappointed I didn't see what seems like a fairly reasonable explanation: processor security bugs that impact broad system performance, such as Spectre and Meltdown.

These have a significant impact on Linux too, and Microsoft has outlined how the fixes affect Windows performance (many more such security bugs have been identified since): https://www.microsoft.com/security/blog/2018/01/09/understan...
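For anyone curious what their own kernel thinks is mitigated, the status files under /sys/devices/system/cpu/vulnerabilities/ are readable on any reasonably recent Linux. Here's a minimal C sketch that just dumps them (the path is the standard sysfs one; output naturally varies by CPU and kernel):

  /* Minimal sketch: dump the kernel's view of CPU vulnerability mitigations
     by reading the standard sysfs files under
     /sys/devices/system/cpu/vulnerabilities/ (Linux only). */
  #include <dirent.h>
  #include <stdio.h>

  int main(void) {
    const char *dir = "/sys/devices/system/cpu/vulnerabilities";
    DIR *d = opendir(dir);
    if (!d) { perror(dir); return 1; }
    struct dirent *e;
    while ((e = readdir(d)) != NULL) {
      if (e->d_name[0] == '.') continue;          /* skip . and .. */
      char path[512], line[256];
      snprintf(path, sizeof path, "%s/%s", dir, e->d_name);
      FILE *f = fopen(path, "r");
      if (!f) continue;
      if (fgets(line, sizeof line, f))
        printf("%-28s %s", e->d_name, line);      /* line already ends in \n */
      fclose(f);
    }
    closedir(d);
    return 0;
  }

(Which is just a long-winded version of grepping those files from a shell.)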


What I find most disappointing is that even processors manufactured TODAY contain these flaws and have performance hits put into the firmware.

This issue is so fundamental to how CPU caches work that there really is no true performance-neutral fix.


> What I find most disappointing is that even processors manufactured TODAY contain these flaws and have performance hits put into the firmware.

The first (private) disclosure of the Spectre vulnerabilities was sometime in mid-late 2017. Partial hardware mitigations launched in 2019 for some Spectre variants, where the mitigations could be easily patched in to already taped-out designs.

Beyond those easy wins, even if foundries weren’t slammed by COVID delays, I don’t think it’s realistic to expect that we’d be any further along? Most of these sidechannels require deep rethinks of the execution pipeline and speculation engines, and fundamentally redesigning those systems, taping out those designs, and finally manufacturing those chips was always going to be a process that was measured in the better part of a decade.


An alternative interpretation is that there are no current "performance hits"; it's just that the legitimate performance is now better known. Those former "performance gains" were cheated, akin to the Volkswagen emissions tests.

Edit: it was Volkswagen not Volvo.


It’s news to me that Intel, AMD, and various ARM licensees all knew about these problems (and vague “the ~truth~ side channels are out there” handwaves that were made for many years don’t count) when the requisite speculation changes were made 20 years ago and left them in anyway. And that still isn’t quite an equivalence to the Volkswagen emissions tests, since the processors don’t try to detect if you’re testing them and change behavior.


It's my understanding that the bulk of the vulnerabilities in question are not present to nearly the same degree on AMD and ARM. If AMD and ARM could do things mostly properly, why couldn't Intel?


A number of the vulnerabilities affect platforms other than Intel x86 (including some POWER and ARM designs), though you're correct that a good share of those disclosed are specific to Intel. But there has also been at least one that was AMD-specific.[1]

I hypothesize it could have come down to "well, we made this type of decision this way before, and the world didn't fall down, so we can probably make it that way again, right?" Since these aren't formally proven systems, to some extent deciding whether an optimization is safe is going to be a judgment call - and if you go for "well, if this assumption is false, we're already doomed because we did X elsewhere...", the results could look like this.

Or maybe not, I've never worked for a chip design house.

[1] - https://mlq.me/download/takeaway.pdf


Or simply none of them were aware that the kinds of speculation they were doing were capable of producing exploitable vulnerabilities. It took security researchers a long time to find these issues, and the kinds of speculation that the original Spectre and Meltdown exploited were well known outside the chipmakers for over a decade. Intel was simply more aggressive at that particular avenue of performance improvement.


> former "performance gains" were cheated

why would cache performance be considered a cheat?


Because it makes several of the isolation features the processor ships with useless, without documenting it?


But isolation had never been sold as a security feature, only as a "user-space crash can't bring down the kernel", no?


No. Isolation has always been a security feature, not a reliability feature.

Consider that local privilege escalations are generally (and I'd say correctly) treated as critical severity issues. Any general purpose process memory read-isolation leak is a quick route to a local privilege escalation, so how could isolation be anything other than a security feature? In its absence, you might as well run everything as root and there would be no point whatsoever to caring about privilege escalations.


> Consider that local privilege escalations are generally (and I'd say correctly) treated as critical severity issues. Any general purpose process memory read-isolation leak is a quick route to a local privilege escalation, so how could isolation be anything other than a security feature?

https://xkcd.com/1200/

> In its absence, you might as well run everything as root

this is what 99% of people with a home computer did until Windows XP (and even then, most people today run under an admin account of some sort on their computers)

isolation between users is only relevant for clusters; it's a tiny minority use case compared to the main thing that matters, home computers, which only have one human user anyway. But I guess you'd disagree with Stallman's paragraph at the end here: https://ftp.gnu.org/old-gnu/Manuals/coreutils-4.5.4/html_nod...


Yes, I 100% disagree with Stallman's goofy conflation of OS permissions segmentation with aristocracies, and think the idea that the permissions model of the early 1980s is at all workable in the internet era is profoundly, stunningly naive.

The XKCD argument misses the point entirely; if someone p0wns your browser, they p0wn your browser, but what we're talking about is allowing every VSCode extension and Docker image someone installs to read your GMail. It's an absolute dipshit idea that would make every current security problem with computing orders of magnitude worse.


> but what we're talking about is allowing every VSCode extension and Docker image someone installs to read your GMail.

I don't understand, this is already the status quo today. Any VSCode extension can access your filesystem and connect to a socket, so it can just pipe your ~/.cache/mozilla to some server in Russia, and even read the memory of any of your processes through /proc/$PID/maps. Ditto for any shell script you run, or any build script for software you clone from GitHub before building it, unless you are running Qubes OS.


C’mon. Ability to read the cache is obviously a much much lower threat than the ability to read the browser processes’ memory – the browser doesn’t cache your GMail password.

Also let’s stop here and admire just how far you’ve shifted the goal posts: we’ve gone from “memory isolation isn’t a security measure it’s actually just for reliability” to “ok sure it’s obviously a security critical measure, but check out this boneheaded RMS rant and also what about the lower threat value data that gets cached, what about that. And do we even need security or can we just go back to the 1980s with our Amigas when everything was cool and advanced persistent threats didn’t exist, man”.

Pick a coherent point you’re trying to argue and some kind of intelligible threat model, at least. If we’re concerned about leaking filesystem data, there’s more aggressive sandboxing measures which several OSes employ, but they even more strongly rely on memory isolation, not less.

The point, which you seem to have conceded several posts ago, remains: Memory isolation is a security measure, not just or even primarily a reliability issue.

Also you don’t seem to understand what you’re trying to talk about, because:

> and even read the memory of any of your processes through /proc/$PID/maps

This requires root permissions (or for you to have explicitly changed /proc/sys/kernel/yama/ptrace_scope to a non-default value to allow processes to PTRACE_ATTACH to arbitrary non-child processes, which you shouldn’t do on a general purpose internet-facing machine unless you are a clinically insane person) for the obvious reason that this would be incredibly dangerous otherwise. Proving once again my entire point:

Memory isolation is a security measure.
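If anyone wants to see the Yama policy in action, here's a small C sketch that just attempts PTRACE_ATTACH against another process and reports the error. PID 1 is picked purely as an arbitrary example target; with the usual ptrace_scope default of 1 and no root, this should fail with EPERM:

  /* Sketch: try PTRACE_ATTACH against an arbitrary process (PID 1 here,
     purely as an example target). With the default Yama setting
     (/proc/sys/kernel/yama/ptrace_scope = 1) and no root, expect EPERM. */
  #include <errno.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/ptrace.h>

  int main(void) {
    if (ptrace(PTRACE_ATTACH, 1, NULL, NULL) == -1) {
      printf("PTRACE_ATTACH to pid 1 failed: %s\n", strerror(errno));
    } else {
      puts("attached (root, or a relaxed ptrace_scope)");
      ptrace(PTRACE_DETACH, 1, NULL, NULL);
    }
    return 0;
  }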


Isolation (rings) was first implemented in Multics, first in software and then in hardware (a bit like today's user mode and kernel/hypervisor mode), foremost for security and second for stability.


For the record, it was Volkswagen that cheated the emissions test, not Volvo


Thanks! I wasn't sure I remembered correctly but didn't immediately see the answer in a quick google.


> didn't immediately see the answer in a quick google.

Whether happening organically or through the magic of reputation management, that's pretty alarming, actually. Give it another few years and nobody will even remember.


If GP searched for something along the lines of "Volvo emissions cheating", then I hardly think it's alarming that the VW scandal didn't come up.


Just searched "emissions cheating" and uh, is like, every auto maker cheating on emissions? I remember the VW scandal, but this quick search leads me to conclude that VW is only the most memorable one that got mass media attention of late.

But hey, we love our corporate overlords, they only want what's best for us, the shareholders.

(Not actually a shareholder of anything)


> Just searched "emissions cheating" and uh, is like, every auto maker cheating on emissions?

More or less, yes, they all did.

As a European, I'd say the blame is shared, though: the officials around Europe (both at national and at EU level) who had very strongly pushed for diesel for the previous two to three decades (mostly through taxation/pricing policies) also deserve a very big part of it.

The Americans got it right (especially states like California) when relegating diesel to mostly commercial/truck/public transport use, but the hubris and the "we know better than the silly Americans when it comes to the environment" attitude prevailed in the end; that's why it took Dieselgate and at least a couple of decades for those officials to reverse their past mistakes.



> What I find most disappointing is that even processors manufactured TODAY contain these flaws and have performance hits put into the firmware.

Alternatively: if you don't care about timing side channels (which is true in a lot of applications where the only code being run is trusted, such as HPC clusters and the like), you can run at full performance; and if you do care, you can load the microcode to increase security at the expense of performance.


I've seen a research paper that talks about something called "DAWG" that is supposed to preserve speculative execution while mitigating timing attacks. With some performance hit, but less than what we have today. The graphs in the paper seem to suggest a hit of 10% or less.

Not my area of expertise, so no idea how promising it is.

https://people.csail.mit.edu/vlk/dawg-micro18.pdf


The Spectre/Meltdown issues are fundamental and "difficult" to fix - but they mostly exist because we can observe state using very fine-grained performance counters. Why not disable (or dumb down) the performance counters, and leave the bugs and the better performance in place?

We could enable the counters (and mask the bugs, reducing performance) via a normally-off BIOS flag. I for one would rather have the performance and don't need the counters for day-to-day ops.


If this is really what's making the difference... I wonder whether these mitigations should really be automatically enabled for home users.

The chances of Spectre or Meltdown actually managing to acquire sensitive data seem incredibly low. Is that worth the very large performance impact?


An incredibly low probability multiplied by ~billion+ users means somebody is going to lose something valuable sooner or later. And then the odds of MS getting sued for “not even patching a well-known vulnerability” become 100%.


But that's also a billion+ users with much slower computers! So you need to compare the harm on both sides.

And then MS could add a switch in Settings for anyone who is concerned. (I realize not many people would find it, but it seems important for the option to be accessible, without digging into the registry or some such.)


Think of the average windows user. This person uses Chrome because they got a popup when they went to Gmail. The shortcut for candy crush is still in their start menu because it doesn't bother anything. I am not being insulting -- this is the average, median windows user. This is the person for whom Windows Home is designed. There is ZERO CHANCE that this user knows what a "speculative execution mitigation" is. There is a nonzero chance some cleaner site will tell them to turn it off because it "makes windows faster". They don't understand it can be executed from a website because they don't think of websites as programs because they don't have an icon on the desktop. There is no way this turns out well.


Yes, so Microsoft has to make the decision for them. I'm not sure "we're going to make your computer 30% slower because there's a 0.0001% chance someone could steal your password" is the right decision.

(These numbers are completely made up. Any decision would depend on getting the right numbers, but this appears to be easier said than done. I've seen very different estimates.)


An unpatched data exfil against the OS with the largest install base in the world? It doesn't matter it's a byte at a time, cryptominers have done more with less. The question isn't if, the question is who and how many. The fallout from SMB2 is still ongoing, and it was patched 10 years ago. The idea of not patching EVER opens up harms for the next twenty years for everyone.


> The fallout from SMB2

This made me wanna cry. Jokes aside, there's much truth to this. Attack vectors at this scale are seriously dangerous.

I'm still hoping we can prevent the namewreck fallout of SCADA systems, because at one point or another, 8200 is gonna make a mistake with their offensive cybersec operations.


What is the chance that this user has un-backed-up data that someone could extort them for $500? Multiply this chance by the proportion who would enable a "go faster" option and then by 1.5 billion computers.

Let us say 20% and 25% * 1.5B. Maybe 75M targets give or take depending on how you want to do the math.


> But that's also a billion+ users with much slower computers! So you need to compare the harm on both sides.

Far more likely Microsoft would be legally exposed due to a skipped patch than a patch that slows machines down. Intel on the other hand, that'd be more interesting.

> And then MS could add a switch in Settings for anyone who is concerned. (I realize not many people would find it, but it seems important for the option to be accessible, without digging into the registry or some such.)

Any such switch would just be disabled via GPOs by corporate security teams anyway, so what's the point? The entities who would most "benefit" wouldn't even be able to do it.


To be clear, I am very explicitly thinking of home editions of Windows. No GPOs.


There are already third-party tools (0) to disable those mitigations in Windows. I suspect the small barrier to entry of needing to use a search engine is enough to keep the risk minimized for the general population. Hopefully only people who are knowledgeable of and willing to accept the risk use this.

0: https://www.grc.com/inspectre.htm


If you could trivially enable a "go much faster" setting in the GUI that in turn made you much more vulnerable, you would see people sharing on Facebook how they found the secret go-faster option, a quarter of machines would have the setting enabled (making exploiting it much more profitable), and attacks would become widespread, after which people would ask why Windows came with a suicide switch. The issue would come to a head when some important company or three got attacked via poorly managed employee laptops with this feature enabled, after which the GUI option would be deleted. Then someone could have the same debate five years later, after people have forgotten about the first go-round.

How about we just skip a step?


I’m not suggesting Microsoft does this intentionally to push consumers to buy new computers because that is obviously not the primary motivator for the mitigation patches, but I’m sure they don’t mind.

Also your average user isn’t going to know or understand even if you gave them the option somewhere. It just doesn’t make sense.

If you are knowledgeable enough to understand, then you can disable the mitigations. This seems like the best approach to me for everyone.


Spectre and meltdown aren’t often used because of the mitigations in modern operating systems. The attacks are complex and the chance of finding a vulnerable machine is low. But if instead windows users were all vulnerable, you bet all sorts of scammy websites would go after these vulnerabilities.

And the attacks are really bad - they can read arbitrary memory. What are the chances another browser tab has a bank website open with active auth cookies, or a valuable email account? Really really high.


> And the attacks are really bad - they can read arbitrary memory.

But—and please tell me if I’m just totally wrong about this—my understanding was they can read random memory, right? The attacker can’t control what they get.

Out of the gigabytes of memory on a system, I’d expect only a handful of bytes to be actually valuable, and only then if certain combinations can be retrieved together.


The term “Random memory access” is a horribly confusing misnomer from before I was born. It’s used as a contrast with sequential memory access, to mean memory can be accessed in any arbitrary order.

My understanding is that these vulnerabilities allow reading the memory at arbitrary mapped pointers, from the perspective of the CPU, in kernel or user space. So you could use the kernel’s data structures to figure out what other programs are mapped, and where, and go read their memory.
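For reference, the shape of the classic Spectre v1 "bounds check bypass" gadget is tiny. This C sketch only illustrates the pattern (the array names roughly follow the original paper's example listing) and is not a working exploit: a real PoC also needs branch-predictor training, cache flushing, and a high-resolution timer for the flush+reload measurement.

  /* Shape of the Spectre v1 bounds-check-bypass gadget, illustration only.
     Not an exploit: a real PoC also trains the branch predictor, flushes
     array1_size and array2 from the cache, and times probe reads. */
  #include <stddef.h>
  #include <stdint.h>
  #include <stdio.h>

  uint8_t array1[16];
  uint8_t array2[256 * 4096];     /* probe array: one cache line per byte value */
  unsigned int array1_size = 16;

  void victim(size_t x) {
    if (x < array1_size) {        /* architecturally safe bounds check */
      /* Under misprediction the CPU may speculatively read array1[x] out of
         bounds and use the secret byte to index array2, leaving a cache
         footprint that survives the squashed speculation. */
      volatile uint8_t tmp = array2[array1[x] * 4096];
      (void)tmp;
    }
  }

  int main(void) {
    victim(3);                    /* benign, in-bounds call */
    puts("gadget compiled and ran (in-bounds only)");
    return 0;
  }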


Until there are actual documented in-the-wild attacks against individuals, I see zero reason to enable these crippling security measures on my home desktop.


Isn't the fact that mitigations are widely deployed the primary reason why there aren't substantial attacks in the wild?


It's very unclear to me how much of a performance hit there is for most desktop use-cases.

The most recent CPU I use is a desktop i5-8500 which doesn't feel faster than my older i7-3930k aside from disk access [0]; both are impacted by the vulnerabilities. I practically never use the i5.

Until recently my daily driver was a MacBook Pro that came out in 2013, and I don't really remember a before / after change in perceived performance or responsiveness – I haven't done any scientific benchmarks, but that computer never felt slow, and still doesn't.

There's also this Phoronix benchmark [1] that compares performance with and without `mitigations=off` on linux. There are some tests where there's a very big difference, but mostly, to me, it doesn't look that crazy.

In the Firefox benchmarks they do say that there's a big hit, but then maybe I'm not reading the graphs correctly, because nothing looks particularly worrisome to me.

---

[0] The i5 has an NVMe drive as opposed to an older, regular SATA drive in the i7. Still, I run a lightweight Linux install on both, so drive speed is never an issue for me, everything can fit in RAM.

[1] https://www.phoronix.com/scan.php?page=article&item=spectre-...


> I use is a desktop i5-8500 which doesn't feel faster than my older i7-3930k

I mean, looking at the specs on Intel ark that's to be expected, a newer generation mid-range CPU is typically the same speed as a high tier CPU of old.

Faster bus speeds, higher turbo, some newer under-the-hood instruction sets for SIMD, and, if the datasheet is correct, roughly half the thermal output thanks to a process shrink.


That was my first thought. I remember reading an article before Spectre was patched saying that it could slow things down by ~5%? I never checked back in on it.

I have noticed that the latest update feels faster than all the other ones, but it's always hard to tell since that's the only time I really reboot the machine.


You can disable the fixes: https://make-linux-fast-again.com/


Anyone finally willing to admit that w10 is unusable on Hard Drives while Linux and W7 work fine on them?


This has been precisely my experience. Windows 7 was fast on an HDD and blazing fast on a SSD. Windows 10 is unusable on an HDD and usable on an SSD. Still kinda sluggish, even then, for what it's worth.

Not to mention the applications. A slow operating system that uses so many resources merely idling, plus Applications-That-Are-Actually-Web-Browsers, makes day-to-day usage almost innavigable for someone with quick reflexes used to a Linux CLI.


>almost innavigable for someone with quick reflexes

My few short years with Windows 3.1+ spoiled me for all future versions of windows. Instant closing, instant alt-tabbing, double-clicking the top-left corner icon to close, (nearly) instant opening of simple apps like File Manager and Notepad... not to mention the instant crashes.

Linux got close but still always felt clunky, while future versions of Windows and MacOS just kept abstracting themselves from my tactile sensation farther and farther away...


That is what I have been banging on about for years. Latency has been added not just in software but in hardware as well: display, keyboard, mouse, and even network. The past 20+ years the whole industry nearly went all out on optimising for throughput.

Sometimes I look at macOS, Windows 10. Apart from some UI facelift I don't even remember a single user feature that was relevant or important in the past 5 years. Apart from the latency added because of feature bloat.

The beauty of gaming is that there are finally some incentives for people to work on the issue.

And here's a video from Microsoft Research on 1 ms latency:

https://www.youtube.com/watch?v=vOvQCPLkPt4


> Sometimes I look at macOS, Windows 10. Apart from some UI facelift I don't even remember a single user feature

The user "feature" that comes quickest to mind for Windows 10 was the first run of calc.exe showing a popup notification begging for a 5-star rating in the Microsoft Store.

The fucking CALCULATOR. Literally the first and most basic function any computer ever had, which has had NO ui changes in the million software and hardware permutations for 70 years, because it's already perfect. Something has gone unfathomably, fundamentally wrong with how Microsoft works internally.


I installed the Win7 calculator, and it works like a charm. The no-hex functionality and splash screen really are terrible. I was used to being able to start typing immediately after Win+R, calc ...


What rating did you give it?


I gave it one star and left a scathing review. What are they going to do, not include a calculator in the next Windows?


They might. It would not surprise me if a lot of people just type their calculations into the browser address bar or in their Internet search engine's input box. Or search for a calculator web application, and use that as their calculator.


They can not include it and instead provide a menu option that actually takes you to the store on first run :)


Don't give them any ideas please.


> The past 20+ years the whole industry nearly went all out on optimising for throughput.

It used to be that GPU drivers would queue up to ~10 frames just to get slightly higher FPS in benchmarks (while adding basically a quarter of a second input lag)... naturally, this is an extremely bad trade-off for gaming, but made them look a few percent (at most) better on paper.


You reminded me of "Computer latency: 1977-2017" https://danluu.com/input-lag/


"Finally" seems a bit myopic. The war against latency has been going on continuously since the beginning, even in gaming. Check out nearly any content John Carmack has ever written or spoken on the topic of gaming or VR, just as one example.


For those of us who grew up with 8-bit home computers low latency was the starting point. On my first computer you could detect a keypress, update video memory and (assuming the beam was in the right place) see the results on screen in a few microseconds.


Right? I opened a Windows XP VM inside my 10 box a bit ago to play an old game, and oh boy was that incredibly snappy compared to what we have now. Sure, it's running on anachronistic hardware, but where did we go wrong?

3.1 was before my time, but I can imagine.


To be fair, 3.1 was snappy because it didn't have the layers of security and anti-crashing catches that we now rely on.


The thing is... I don't care, and I'm sure the original Win 3.1 commenter doesn't care either. I am old enough to have experienced Win 3.1 (and making it slower with the "magic-whatever" software that made animated icons, anyone remember their rabbit/hat logo?).

Aaaanyway: Give me a "chrooted/jailed" environment where I have all my "dumb", "no internet" connection apps (Jails are 2000 technology, chroot is older), and be done with it, so that if anyone hacks my machine or if I quadruple-click on that virus it is only going to affect the chroot environment and doesn't see anything else.

Truth is, a lot of contemporary commercial OS sluggishness comes from ""features"" that are either some kind of telemetry, or just "security" preventing me (the user) from doing things the company does not want me to do (like the asinine OSX "feature" of disabling write access to certain parts of MY hard disk by default)


I remember XP not being able to paint windows or move the mouse cursor at a decent frame rate under load. It was so bad compared to Mac OS X from the same time.


Neither can Windows 10; my mouse frequently stutters in the first few minutes after booting, and I don't use any particularly invasive customizations.


I opened XP on my old 900 MHz Pentium III, and it was just as snappy as you describe yours.


I started with XP on a Pentium 2, then moved to a Pentium 4. It ran perfectly on both. It started even faster than my Linux Mint installation.


That’s wild. I remember when windows XP came out I was amazed it was so large. The OS was 300 megabytes or something - and it took up a significant portion of my computer’s hard drive. Despite all of Microsoft’s claims about “media enhancements” it seemed horribly bloated and ugly. It was the embarrassing stepchild of windows 2000 - which I still adore. How times have changed!


Windows 2000 was the best Windows ever, from the speed to the look to the simplicity... just perfect.


Mine was NT4... but without DirectX and no games


I'd be curious to know what you think of Haiku, if you've ever tried it.


Only ever read about BeOS and had never heard of Haiku :O


And it would constantly apply updates and reboot without a prompt.


Win10 is perfectly fine on HDDs. You just have to install Linux on the bare metal, and create a Win10 VM leaving at least 3GB of RAM to the Linux caches.

This is faster than W10 on bare metal with SSD and the same physical amount of RAM.


Do you have any benchmarks illustrating the difference?


Benchmarks? No. That's just my personal experience with different computers. (And by the way, the faster than SSDs claim does not include startup time.)

Can one still benchmark Windows? Most proprietary software requires explicit authorization from the seller nowadays. (I'm quite certain those clauses are illegal here; that's not why I didn't benchmark, but I expect to find none on the web because of it.)


> Can one still benchmark Windows? Most proprietary software requires explicit authorization from the seller nowadays.

Yes, you can still find Windows benchmarks, they're all over the web in fact. The Phoronix test suite supports Windows for example, and Phoronix have a whole series comparing performance between Windows and various Linux distributions: https://www.phoronix.com/scan.php?page=search&q=Windows%20vs.

They've also tested NTFS vs. other file systems before, although this was using the Linux NTFS driver: https://www.phoronix.com/scan.php?page=news_item&px=Linux-4....

More recently they did at least one I/O bound test with Windows vs Linux, a SQLite insertion test: https://www.phoronix.com/scan.php?page=article&item=windows-... Linux + EXT4 crushed Windows + NTFS, but obviously I'd like to see a more well-rounded test. And of course none of these cover the question of whether Windows performance would improve running in a Linux VM.


> And of course none of these cover the question of whether Windows performance would improve running in a Linux VM.

Well, of course not. It's completely absurd that Windows runs better in a VM. When I tried it, I expected my work "computer" at home to be impossible to use, so I would go and fetch some hardware at work. I was very surprised that it actually performed better than the work hardware (on anything that isn't CPU bound). (And yeah, as much as Windows lags behind on those benchmarks, it's not nearly enough to explain what I've seen.)

There is some other comment hinting at the VM ignoring some fsync barrier. I've seen the VM caching a lot of disk IO, but there is probably some very stupid application level choice involved there too. But on Windows, it doesn't have to be fsync, as there is locking too to ruin your day.


Why is this?


I have a little idea. I know Windows does a lot of useless disk access¹, but I never thought it was localized before I tried a VM. I always assumed it was random, but it looks like it's kinda random, but all over the same small area, and Windows can't optimize it for some reason.

Anyway, if you do that, you better keep those ~3GB free. If you don't have enough cache things break down really fast.

1 - It is for Cortana, file indexing, telemetry, etc. If you take the time to disable them, your computer will get much faster... for about a week, until it updates again.


While we're spitballing, from my recollection, it seems to do a lot of small writes; probably different write cache policies, possibly including the VM not passing through fsync/write barriers.

And, you know, organizational failure, for letting this issue linger.
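If anyone wants to eyeball how much flush behaviour alone matters, here's a rough C sketch that writes a few hundred small blocks to a scratch file, once buffered and once with fsync() after every write. The scratch file name is arbitrary, and it obviously measures the host it runs on, not Windows itself; but the gap between the two numbers is exactly what a write-back caching VM layer can hide.

  /* Rough sketch: 500 small writes to a scratch file, buffered vs. fsync()
     after every write. The gap shows how much write barriers cost on the
     underlying storage (and how much a caching VM layer can hide). */
  #include <fcntl.h>
  #include <stdio.h>
  #include <time.h>
  #include <unistd.h>

  static double now(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
  }

  int main(void) {
    const char *path = "fsync_test.tmp";   /* arbitrary scratch file */
    char buf[512] = {0};
    for (int pass = 0; pass < 2; pass++) {
      int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
      if (fd < 0) { perror("open"); return 1; }
      double t = now();
      for (int i = 0; i < 500; i++) {
        if (write(fd, buf, sizeof buf) != (ssize_t)sizeof buf) { perror("write"); return 1; }
        if (pass == 1)
          fsync(fd);                       /* force each write to stable storage */
      }
      printf("%s %.3f s\n", pass ? "fsync each write:" : "buffered:        ", now() - t);
      close(fd);
    }
    unlink(path);
    return 0;
  }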


I'm pretty sure a lot of this is telemetry related. Resmon will frequently show ETW (Event Tracing for Windows) files getting a detrimental quantity of I/O directed their way, especially at boot...


Yeah, sometimes my Win10 computer slows down a lot and there is some telemetry service in the task list hogging like 5% CPU. The antimalware service is also a thief. I mean, they could just make those run at like 1/5 speed and it would be fine. Add some sleep() ...


Totally.

Back in 2020, after one update, Windows 10 was taking about 5 minutes to boot on a ThinkPad, if not more. I had to install an SSD; it changed everything.

Pretty curious to hear what kernel or filesystem developers have to say about this, because it doesn't really make sense to me.

I can understand why something would stop working, but not why it would just become slow. In what world does a system change result in worse performance in certain cases?

Maybe deep inside, W10 stopped using any optimization that allowed it to be fast on an HDD, considering SSDs should be the norm, but I fail to understand whether this is a choice that implies a difficult compromise, or if it's just laziness and negligence.

Any linux dev could chime in?


I've heard that modern Windows does some aggressive disk-caching of driver states or something (?) when you shut down, so that it can quickly load them when you boot again and skip a large part of the booting process. Obviously the intention is to make boot faster, but maybe on an HDD it has the opposite effect?


More than just on boot. For the brief time that I ran Win10 on an HDD, what it felt like was that the whole OS was almost constantly touching the disk, for whatever reason, often blocking. Open start menu? Better do a bunch of disk I/O. Moused over anything? Touch the disk. Pressed a key? Hit disk. Doing nothing? Disk time!

Whatever they're doing is probably fine for SSD-equipped systems, for very liberal values of "fine" (i.e. it's not actually fine and is probably harming system-wide performance and UX in more subtle ways, even on SSDs), but it made the OS practically unusable on an HDD.


It actually is the opposite of what you say, in that the Fast Boot feature of Windows 10 is basically logging you out then hibernating the system, which is faster on HDDs but slower on SSDs (or at best the same speed as a cold boot, but at the expense of system stability in the long term, since NT really likes its reboots).

It's really odd, because the first couple releases of Windows 10 were fast enough on HDDs, but post 1703 is when I started to really notice it.


It is possible to disable fastboot in the system settings. You need to do it if you want to dual boot linux.



Maybe Microsoft removed the optimizations for rotational devices? XFS is considering it: https://lore.kernel.org/linux-xfs/20200514103454.GL2040@drea...


As I read that email, it seems to be saying the opposite:

> We know when the underlying storage is solid state - there's a "non-rotational" field in the block device config that tells us the storage doesn't need physical seek optimisation. We should make use of that.

In other words, the changes under consideration should only apply when they know they're using solid state storage. Is there information elsewhere which contradicts that?

Indeed, it seems very surprising that XFS might consider removing optimizations for slower drives. It's a very popular file system in the enterprise, and anyone still storing large amounts of data is very likely to still be using rotational drives because of the massive cost difference.


Absolutely true. I've converted more people to GNU/Linux over the last two years than in the twenty years before that. The vast majority of people in my (third world) country do not have SSDs, so GNU/Linux is their only hope of having a usable and moderately secure machine.


Yep. Windows 7 would sometimes take a few minutes to finish whatever boot time disk i/o it wanted to do; for Windows 10 on a disk, it seemed to always be doing some i/o and never settled down.


Win10 is logging and trace-logging >100 separate useless things, just in case the mothership wants to request them uploaded for telemetry purposes.


Every time I open Event Viewer I'm amazed at how many things are in there, and how exactly zero of them help with the problem I'm having.


People must have very different performance requirements than me. I've run 10 on nothing but HDD for years now with no issues. Of course, I also avoid electron apps, modify Firefox so it doesn't eat memory like an addict on a bender, and generally keep bloat off my system. But Windows, basic Multimedia programs and games all seem to run fine. It's probably not cutting edge, but I don't feel like it's dragging.


> modify Firefox so it doesn't eat memory like an addict on a bender

Are these Windows-specific modifications, or tuning Firefox settings that might be beneficial on other platforms? (I'm not saying that I'm looking at the insane UI changes for the upcoming Safari release on the Mac and investigating other options, but I'm not not saying that.)


> I'm not saying that I'm looking at the insane UI changes for the upcoming Safari release on the Mac and investigating other options, but I'm not not saying that.

You and everyone else. I'm just taking a wait-and-see approach, on the off chance the new tab categorization stuff makes the new design usable. Otherwise, I'll have to trade a couple hours of battery life and some overall system performance and switch to... something else. Probably FF or a light alternative Webkit browser. Hell, there's hardly a task I do that doesn't put me at at least 10 open tabs, which is already too many for the new design, then if someone comes along and says "hey can you take a look at X real quick", well, now I might be at or over 20 (I find multi-window browsing horribly obtuse unless I have 3+ monitors). And you know what? I'm not always great about closing all my tabs the second I finish a task, and I don't really want to have to do that.

We'll see how it works, but I'm not optimistic. It looks batshit crazy. I can imagine something involving a highly-capable quick-search-to-find-tab system making it better than what we have now, or at least not-worse, but dunno whether they'll be including anything like that.


I've found the add-on Panorama View a life saver: https://addons.mozilla.org/en-US/firefox/addon/panorama-view...

I don't want to save tabs, I just need them to be separated into arbitrary and temporary visual groups. It makes management so much easier when I can just switch to another group and hide the 25 tabs I had open while researching a problem.

I can't even imagine working without this thing.


What modifications would you suggest to increase Firefox memory usage?

EDIT: "increase" in the sense of "make better". Whoops.


You are more patient than I am.


I have an older laptop with an hdd. It always was under powered but still usable for things like streaming to a tv every once in a while. The latest Win10 updates have it taking minutes to run a search or context switch. Feels like we are getting into phone territory of forced HW upgrades for everything now.


I'm upgrading computers at my university with 8GB of RAM and SSDs, and poof, 2011 computers run W10 just fine once again.


This was my experience as well updating computers at a company I recently joined. Changing to an SSD had the most impact, although a lot of these machines already had 8GB of RAM. There are still a handful around that now have an SSD but still 4GB of RAM they came with; these are also better now, usable anyway.


My company did the same thing a while back. I remember two years ago having to get in the office at least 20 minutes before my z book laptop would fully boot.

They upgraded all the developer-grade laptops with 32GB of RAM and SSDs. They still take a good 5-8 minutes to boot, but compared to where they were? Light years better!


I did this to all my family members macbook pros a few years ago, back when apple let you do that sort of thing to your hardware. These machines are 10 years old and still running modern stuff just fine.


For what it's worth, macOS had exactly the same issue. My previous $WORK laptop was an HDD-based model and all of a sudden, after a macOS upgrade, it was slow as heck. I can't recall the exact release but it was certainly very noticeable right after I applied a major upgrade. I was so happy when $WORK finally upgraded it to an SSD-based model. I suspect OS developers pretty much target SSD machines and don't really bother to ensure there are no regressions on HDD machines. I also notice that Debian performs much better on SSD than HDD. So it's certainly not a Windows-only issue.


Definitely. Booting up Win10 on an HDD can take up to 10 minutes if you have Discord/Steam installed.


Why is that a Windows problem and not a Discord or Steam issue?


Well, in large part it has to do with how Windows loads libraries. For example, if you have two Electron apps running (eg. Discord and Slack), both will load separate libraries, effectively doubling their load on your system (scaling for each app you open). The solution is to enforce dynamic linking, like how Linux handles it. You can have Spotify, Discord and Slack all running on the same Electron library.


Aren't they using bundled/vendored versions? The OS can't help that; it's like Snaps on Linux.


Which still don't increase the boot time like that.


It will if you set them to launch on boot/login, which is probably the real difference (Windows has a tradition of apps auto starting, *nix has less).


This has nothing to do with Windows and everything to do with Discord and Slack deciding not to use dynamic linking.


I mean, the same apps on Arch Linux don't slow me down at all. So in some ways it must be a Windows issue.


Steam doesn't slow Debian (LXDE) at all, because the desktop environment doesn't require two processor cores – even though Steam is mostly Electron-based now.


What's the advantage of having Discord installed instead of using it via browser?

It seems so odd to me to install programs that still need an internet connection anyway to work.


Yeah, Win10 is very drive performance dependent. You can run it fine on a Pentium 4, but you need an SSD.


As someone who manages a fleet of 30 Windows machines that house PCI DAQ cards: Windows 10 runs on a Pentium 4, but I wouldn't call it fine. All of the machines run on SSDs, which is a godsend, but the older machines that run Pentiums involve a lot of waiting around to do the most basic of things. It doesn't help that Windows refuses to use less than 1.5 GB of RAM. You can forget web browsing.


I gave them Lubuntu and their scanner app running in WINE. It worked fine but they refused to switch because "it's not muh windows". This is how Microsoft corrupts people for life...


The sad irony being that the Windows UI, starting from Win8, became a complete mess, far more different from say Win7 or even XP than any recent Linux desktop manager.

Most of my users, some even over 70 with little computer experience, are doing fine on Linux, but I "converted" them before they had to experience that awful mess, so it has been relatively easy: from XP or Win7 to a customized XFCE (its defaults are ugly and unpractical) has been a breeze.


> customized XFCE (its defaults are ugly and unpractical)

Correction: Its defaults are ugly and unpractical on Debian. It looks a lot nicer on Manjaro for example.


Maybe this is true; I installed Ubuntu on a hard drive not long ago and immediately bought an SSD. Apt installing build-essential was a slog but takes like 20-30 seconds on an SSD. I guess you mean using the GUI, but I don't think I could go back to using any OS on an HDD.


I could be completely wrong, but my guess at the cause would be the registry. Every function and feature in Windows has a flag somewhere in the registry. Every query is probably a disk read, and it's a blocking operation of course. Sure, there's probably a cache, but that cache is only so large.

There's a program that I can't recall the name of that can trace registry queries in the program it's attached to. You can attach to basically any process in Windows and see a monstrous number of registry queries.


A lot of critical system configuration is stored in the SYSTEM hive, which isn't explicitly backed by a file mapping and is loaded at boot via firmware services, so this will be fast: it is non-paged and mapped in kernel space for the entire boot session. On newer builds of Windows 10, other hives are memory-mapped into the user-mode address space of the minimal Registry process. Whenever you do a registry read, the kernel will temporarily attach your thread to the Registry process' address space and the read to the UM-mapped section will occur, which will naturally fault in the data from disk. The requesting process' thread will then be detached and the information returned. Since non-SYSTEM/ELAM hives are memory-mapped, the kernel's cache manager and memory manager subsystems are the ones that "own" and control the mapped memory. The file cache is tuned based on the particular system's hardware characteristics to be as performant as possible. There are registry-specific caches in between to reduce the need to attach to the Registry process, but this isn't going to be a disk IO speed bottleneck.

The program you're thinking of is procmon which is a part of the Sysinternals suite of tools.
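To see the "it's cached, not a disk read per query" behaviour for yourself, here's a small Win32 C sketch (link against advapi32) that times a large number of RegQueryValueExA calls against one value; the key and value names here are just common, illustrative choices:

  /* Sketch (Windows, link with advapi32): time repeated reads of a single
     registry value to show they're served from memory-mapped hive data,
     not from disk on every query. Key/value names are just illustrative. */
  #include <stdio.h>
  #include <windows.h>

  int main(void) {
    HKEY key;
    if (RegOpenKeyExA(HKEY_LOCAL_MACHINE,
                      "SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion",
                      0, KEY_READ, &key) != ERROR_SUCCESS) {
      fprintf(stderr, "RegOpenKeyExA failed\n");
      return 1;
    }
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&t0);
    const int iters = 100000;
    for (int i = 0; i < iters; i++) {
      char buf[256];
      DWORD len = sizeof buf, type;
      RegQueryValueExA(key, "ProductName", NULL, &type, (LPBYTE)buf, &len);
    }
    QueryPerformanceCounter(&t1);
    RegCloseKey(key);
    double us = (double)(t1.QuadPart - t0.QuadPart) * 1e6 / freq.QuadPart / iters;
    printf("average RegQueryValueExA: %.2f microseconds\n", us);
    return 0;
  }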


The registry is plenty small enough to keep the whole thing in RAM


Not a perfect comparison, but I have a Windows 10 box I built with a small M.2 NVMe SSD boot drive, while a lot of apps -- including notably Steam games, OneDrive, and the browser Downloads folder -- run off a larger 7200 rpm hard drive. I've never felt any slowness and everything is blazing fast. It's got 16GB RAM, though.

I know a lot of cheap machines have 5400 rpm hard drives; I wonder if 7200 rpm+ drives offer a better experience, and also what role RAM plays in user experience.


It is absolutely unusable. I was shocked to see how good Linux was when Windows forced me to try another OS a few years ago.


This is true, but honestly, after using a fast SSD even Linux on a hard drive feels unacceptably sluggish by comparison.


Do people still think to defrag HDDs on windows? I wonder if that's an issue people are seeing.


You haven't needed to manually defragment hard drives in Windows in decades.


Yes you do. In fact in Windows/NTFS you can lose gigs of space from disk fragmentation.


The system does it automatically and has since at least Vista, probably XP. Furthermore, there's no benefit (and in fact active harm) to defragging an SSD.


He's talking about HDDs. On Windows 10 you can't defrag SSDs, just optimize them, a.k.a. TRIM. But defragging HDDs is often needed on Windows in certain work scenarios.


Windows 10, just like Windows 8 and 7, automatically defragments spinning drives once a week. You never need to manually do it.


And? Maybe it needs to happen more often, or not at all. However, in Windows 10 you cannot defrag SSDs (with the built-in tools), like you wrote.


I never said you could. I said that you didn't need to defrag manually, and that with the rise of SSDs doing so would be actively harmful.


Windows usually does it automatically as part of a maintenance routine. Unless you were writing and deleting lots of files, causing fragmentation, there isn't usually much to gain from manual runs.


I used to obsessively defrag my drives and it made very little difference. So much happier switching to SSD.


Wow, I tried it on a friend's laptop that I was going to put an SSD in, and it was an absolute joke.


I don't have experience with W10+HDD, but note that modern HDDs have started to adopt SMR, which can lead to slower writes. It's possible the slowness comes from that direction and not just W10.


I have several DM-SMR drives and their write cache (when not completely full) is like 50-100 GB large. As long as you don't write more than that in one go it performs exactly like a normal hard drive of their stature. When they're idle they'll merge the write cache into the main SMR region. Sequential writes mostly end up going straight to the SMR region (there is no idle-reorganization after them).

SMR is also only used in fairly high capacity drives. A 1/2/4 TB drive wouldn't be SMR.


>SMR is also only used in fairly high capacity drives. A 1/2/4 TB drive wouldn't be SMR.

I thought so too, until I bought a 2TB and discovered it's SMR. It turns out there are quite a few 4TB SMRs, and also 2TB and even 1TB SMRs. e.g.

https://www.truenas.com/community/resources/list-of-known-sm...


Not sure how modern you are referring to, but I have 2009-era hard drives which are very sluggish.


But why should we care about this?

Insanely fast NVMe M.2 drives are cheap, let alone normal SSDs.


The OS is for the system, not the system the OS.

Besides, if everyone bought faster storage, would that get people to write more efficient software?


I won't disagree, or agree. If you are running spinning rust rather than an SSD, you are getting exactly what you chose. Just about any use case that justifies spinning rust can be solved with an external platter, thumbdrive, and/or SD card.


When I upgraded from 7 to 10 I had an SSD in my machine but was never given an option to install 10 to the SSD. It requires some contortions to get physical install media and then start over on a new install on the SSD instead of doing any kind of convenient system migration.

The "choice" I made was back in 2009 when I installed 7 to an HDD, which seems like a reasonable choice.


Resize and clone the install from the HDD to the SSD. I've had the same "install" of Windows since 2011; it's gone through 3 different storage mediums, two different motherboards, and three different CPUs.

My go-to for a low level copy between two different drives is Clonezilla.

https://clonezilla.org/

GParted is also another live OS with good tools to manage partitions.

https://gparted.org/


Does windows handle it alright when the hardware (motherboard/cpu) it's running on changes out from under it?


On the PC I described above I have added and removed several additional hard drives over the years as I expand and they die. Recently I had my old Nvidia graphics card die and replaced it with an ATI.

I uninstalled the nvidia drivers then installed the ATI ones and it seems to work fine. No activation complaints.

It's actually been very easy to keep this thing running. Pretty much just plug in new parts when old ones fail. I think my spinning rust HDD is about to die though so I should take the suggestions in this thread and go through the process of moving the OS to the SSD.

At some point in the distant past I might have replaced a Core 2 Duo for a Core 2 Quad. Or I may have built it this way, not sure.


When I've done this, on first boot Windows detects the changed hardware and there's a little mini-installation screen that says "Reconfiguring Hardware" or something. Things seem to work afterwards. Changing the Motherboard may cause activation issues on OEM licenses. Changing from Nvidia GPUs to other manufacturers may require e.g. DDU to remove their drivers completely.


Yes, first "deinstall" the HAL, then reboot.

Then deinstall all your CPUs (if the number of cores changed); that's no joke... deinstall your CPUs (all of them), then reboot.

Update all the drivers... that's it.


It won't break. Activation may or may not whine at you, and if it does whine it might resolve itself after a couple days.


I have never had an issue installing Windows 10 to a machine I upgraded to an ssd…I had hell with an old Dell Precision workstation because of the built in RAID controller…but that’s another story…and when I upgraded that machine to W7 from XP 64 bit, I bought new spinning rust for it. But W8 went smooth and I got W10 working by disabling the RAID during install (I used an SSD for boot).

That machine, with dual e5405’s is still in service for gaming at my kid’s place.

Physical install media for 10 is a matter of downloading an .exe and selecting the option for making a thumb drive as the installation disk and then booting from it.

It’s relatively straight forward but there are a few moving parts for sure. Typically f12 to select boot device.

2009 was a long time ago in tech terms.


This was the "free" upgrade to 10 from 7. It only offered in-place upgrades. I didn't want to go through the pain of an actual fresh install. I don't really understand why Microsoft couldn't (didn't) give the option to install to a new drive while upgrading.


The upgrade is still free…or at least they still are for me in the US as recently as earlier this year…I have an old Vostro that gets so little use it is faster to reinstall W10 from scratch than upgrade.

The same was true for the old Precision workstation a couple of years ago when I gave it to my kid.

And several other machines including a Win7 Thinkpad off eBay.

My advice is download the installation tool and give it a try.

Backup first as needed of course.

Think of all the complaints Microsoft avoids by keeping Windows 10 available without a bunch of time, money, and rigmarole.


At the time Microsoft made threats that the free upgrade path would expire.


Why do people still expect modern software to run at incredible speeds on hard disk drives?

Run old technology with old software. It is ridiculous that consumer hardware is still being sold with crappy 5400 RPM disks in 2021. No, I have no interest in optimising my software for speeds that are 1/10th of my broadband.

Unless we're talking archival or huge storage necessities, stop complaining about modern OS or games running slow on a technology that hasn't realistically been updated in 20 years.

Do you expect Windows 10 to run on a Pentium II with 512MB RAM as well, because some version of Linux does?


> Why do people still expect modern software to run at incredible speeds on hard disk drives?

Modern software usually doesn't run at incredible speeds on an NVMe drive with top-of-the-line hardware. Modern software is often just slow. What can I do on a modern operating system today that I couldn't do on a predecessor released 20 years ago? What justifies Windows 10 feeling slower to use than Windows XP while it runs on hardware that is tens of thousands of times faster?

Under this lens, what really defines modern software is slowness. Take away the advantage of orders of magnitude of hardware improvements and you would be left with something unusably slow.


Exactly. I am of the opinion that anything that was possible in computing 15 years ago should be perceptibly instant today. There are obvious exceptions such as cross-continent communication being limited by the speed of light, but as a general rule. Instead, we get software that's written as if Dennard scaling were still occurring.


When I was a child, I had a reverent wonder for technology and all it might do in the future. Realisations like this replaced that with a dry cynicism. My inner child is disappointed.


When I was a child, I was thinking how amazing it would be if I could store all my game floppy disks on a single, network-accessible drive so that I could access them all without installing and swapping them all the time.

Nowadays I have a NAS that stores both the raw media and installation scripts for those games, and I have wine/dosbox/mame/mess startup scripts for many of them. Can't say that my inner child is disappointed about that, but building and curating that system took a lot more effort than I'd like to admit.

I don't buy games that require online activation, because I can't store and play them the same way. My inner child is very disappointed about that.


My desktop built in 2020 is perceptibly instant today. The slowest booting application is VSCode taking 2 whole seconds to start from cold.

None of the figures shown in the article have any relevance if one has a slight knowledge of their operating system and is not running on decades-old technology.


I dunno, I think I disagree. I have much higher expectations of software. I have these expectations simply because most of what we do these days we could do decades ago, and using orders of magnitude fewer CPU cycles than the "state of the art" now (e.g. your example of VS Code). Most software falls far, far short of these expectations, which, IMO, is an ongoing embarrassment for the entire field.

Even your example of 2 whole seconds seems wayyy too slow to me. I don't even tolerate that sort of delay when working on my 10+ year old commodity hardware --- I've already tweaked my tools to be totally instantaneous so I can continue typing as soon as I release the short-cut key.


Am I the only person who has a laptop that boots as soon as I push the power button and also the only person who remembers when doing an OS update was an entire days work and involved babysitting the whole process to swap out disks and make sure nothing broke? My wife recently threw her hands up and said “Five minutes?” When I told her how long her over the air software update would take on her phone. There are plenty of things that could be better about new systems and I would love nothing more than to see more optimization of our software and hardware, but let’s not look too long through those rose colored glasses. The past was painful too. Maybe we’re stuck playing whack-a-mole, but maybe things will get better.


I haven't had hardware with a spinning disk in active use in years, so my experience might be coloured by the improved performance from SSDs and may thus be irrelevant to whether things are too slow with a HDD, but that's how I also see it.

Even if you don't go so far back as to require swapping disks, dist-upgrading a desktop Ubuntu or Debian system still took several hours ~10 years ago. Booting the system (again Linux, because that's what I had) took at least a minute, probably more. My laptop that's not bleeding edge by any standard is not booting instantly to the desktop, but it's definitely faster than that.

It is in some sense eye-opening to think that pure desktop latency might not have improved over the years. It makes you wonder if things could be much faster still, and being limited to old enough hardware might also change one's perspective. But to me, having a reasonably snappy desktop experience on a laptop from ~7 years ago without having to resort to lightweight desktop environments or older software is something that would have been outlandish for most of the history of PCs.


Being blunt: you're a terrible developer if you think like that.

What the heck does it matter if 5400 RPM disks are old technology or not?? Do you have any actual reason that your program runs slowly on such hardware?

It's very arrogant to assume that since you can afford to upgrade to the latest technologies, then everyone who can't should be left behind. There are probably billions of people using spinning rust in their machines. If your software does not run with acceptable performance then that's a problem with your software, not their hardware.


(Not the OP)

The issue is that the speed -characteristics- are so very different between HDDs and SSDs, not just the raw speed. Decent performance of anything with IO on an HDD requires that the IO is specifically structured to minimize seeks and maximize sequential access. That takes time and skill. If very few modern machines lack an SSD, then it is understandable to simply skip this difficulty. That time and effort is often better spent elsewhere, and this is a valid tradeoff to make.


Random access on SSDs is still slower than sequential access though, even if not as slow as on HDDs - there is more than seeking that comes into play.


I don't understand this argument, could you explain why modern software cannot run on old hardware?

It sounds like it's justifying planned obsolescence.


That depends. Many developers are now using Electron, .NET and other tech to develop software. Those are heavy and do not like old computers with limited resources.

My desktop products are made in Delphi (compiles to native code) and are very responsive and frugal on resources. Run fine on very old hardware. Desktop products done in C++ would be even better but I am not masochistic enough to develop GUI in C++. At least for now.


Is performance really comparable between .Net and Electron now? Neither is my specialty but anecdotally I see far more criticism of Electron for its slowness.


In theory .NET should be "better". In practice this really depends on way too many factors, including how experienced the programmers who implemented the software were. In any case, I am not an expert in this area.

I do use VS Code, as for some tasks there is no viable alternative for me, and since my development workstation is a monster it runs fine. My own desktop apps however are native, and I test them on really shitty hardware where one would not dream of running VS Code / Slack / whatever.

For fun I've compiled my main desktop product to 32-bit and it runs on a friggin' old netbook just fine. Also, from my point of view, speed of development is not any slower when developing native applications. So unless a client specifically requires it, or I am doing some front end in the browser, my desktop apps are all native and coded in Delphi/Lazarus and my backends are C++.


I doubt you'll see any performance difference between Delphi and C++ (even with the most optimizing C++ compilers) outside of heavy number crunching anyway.


For whatever reason, most of my software (desktop and server) ends up having this "heavy crunching" part in it. I usually do not create products that only serve as a simple conduit to a database.


Neither do I (in fact I've been using Delphi and Free Pascal for practically decades now and I've never touched a database :-P) - most of my stuff is graphics and geometry related, and yet I haven't seen much of a difference between my C or C++ and my Free Pascal (which AFAIK has worse optimizations than modern LLVM-based Delphi) code. Of course I refer to optimizations based on the generated code; algorithmic optimizations are another topic.

The only time I saw a difference was when I made a raytracer benchmark[0] explicitly to benchmark the codegen, where the Free Pascal codegen at its best was at 177% of the speed of Clang 8 at its best. In my experience this is not a realistic metric though (according to the benchmark my C 3D engine should be five times slower if compiled in BC++ than Clang, but in practice, since it does a variety more stuff than just a single thing, the performance difference is barely perceptible).

[0] http://runtimeterror.com/tools/raybench/


Can you explain why you expect modern software to run on old hardware first?

Should I be surprised that my turn of the millennium PC can't even run Slack? Yes, in absolute terms it's a bit ridiculous, but we're not talking philosophy here, Moore's law is a thing and software has been getting more complex as hardware got faster. Deal with it.

How is this news?


In a lot of cases new software isn't delivering utility over old software, yet consumes more resources. Users justifiably feel miffed that they're expected to buy new hardware merely to keep up.


There are Slack clients out there which use a hundredth of the resources of the official one, and they can run on these old machines, too. The problem isn't resource-intensive features that weren't possible before, but delivering a service we have had for decades with far worse resource usage, just because you can.


> There are Slack clients out there which use a hundredth of the resources of the official one, and they can run on these old machines, too

Uhh, care to share? Asking for a friend....


Maybe referring to Ripcord. I haven't used it for Slack though. I don't know what Slack's position is on third-party clients - using them with Discord is liable to get you banned.


Ripcord is the most advanced, SailSlack for SailfishOS and lots of CLI clients.


> Can you explain why you expect modern software to run on old hardware first?

People are expected to run modern software for reasons of security, to retain interoperability with the latest standards, or to reduce support costs for the vendor. It is not very surprising that people don't want to upgrade their hardware for reasons that are outside of their control.


"Deal with it" ? How is that an argument? Things are getting worse and it should not be a problem then? If software is getting more complex, what kind of feature is it bringing or adding? I mean it's not more complex without a good reason.

If you quote Moore's law, let me quote Wirth's law:

https://en.wikipedia.org/wiki/Wirth%27s_law


For me, your comment went from borderline troll to holder of the truth after your reply at https://news.ycombinator.com/item?id=27585139

As a C & C++ dev that has always cared that my software works on machines and networks that are a fraction of what I use for testing, I get so tired of careless programs that have clearly been designed for the latest beefiest machines that the devs had at that time.

Same for the web. It should be mandatory by Law to verify websites with the baseline of a 2G connection. (just... half kidding)


> Do you expect Windows 10 to run on a Pentium II with 512MB RAM as well, because some version of Linux does?

...Yes? What is it doing that wouldn't fit in those constraints?


I absolutely do not get this obtuse question.

It is not my place to justify why modern software is so inefficient, pointing the finger at Windows 10 in particular as if it's an outlier.

Why are YOU using inefficient languages, heavyweight virtual machines, GB-sized binaries, Electron and Javascript to write your applications? Why is your boss asking to add feature upon feature upon feature? There is a reason we're in this place, but in the meantime, my point is: stop complaining that everything is slow on your old PC. You know exactly why that is, and it is your fault as much as mine.

It's because of us, software engineers, that everything is slow, so it is completely hypocritical that on HN of all places one has to explain that sadly to run modern software one needs modern hardware.

We've put ourselves in this situation and now there's a lot of surprised Pikachu faces around, complaining about Windows getting slower and alt-tabbing to their day job writing yet another crappy Javascript abstraction layer on their shiny M1 MBP.


It's a bit more complex than that. VMs are a necessity because we need to run the same code on a Mac, a Windows and a Linux machine. Since these 3 OSs don't want to agree on a common standard, we are left with applications that ship their own JRE etc. In some cases this isn't terrible, as a runtime environment can take up just under 100MB and performance isn't that bad.


Where do people even pick this sort of stuff up to take with them into industry? Are they teaching Electron development now in undergrad CSE programs instead of C?


Speaking of C languages:

Does compiling a C++ program still need more resources than an AAA game, and keep running for 20 min / 1 h / 2 h / 4 h?

What were they teaching those compiler engineers and language designers back in the day? Nothing about writing performant code, performance tests, regressions and yada yada?


This is actually true, LLVM had no performance regression infrastructure until recently, even though it was marketed as the fastest compiler.

C++'s main speed issue is of course that it's the most text-based language ever and many entire libraries are implemented as headers due to the fragile base class issue.


I am using an AMD Turion with 2GB of RAM with Void Linux and XFCE.

I don't care about software "engineers" if you are a bunch of engineer wannabes; in my country, in order to be an engineer you should be able to write your own OS from scratch and learn lots of linear algebra.


> Run old technology with old software.

That argument would be more convincing if Microsoft hadn't systematically pushed the owners of older PCs to update them to run Windows 10. How many millions of PCs dating from the early to mid 2010s are now stuck with the results despite their hardware still working normally?


I still can't believe they did that. Such disregard for the user


The same users who tried to keep Windows XP forever. Microsoft got burned once after maintaining one version well beyond its due date, and decided to go to the other extreme.


In my experience, it wasn't that people wanted to keep XP forever, it was that Vista was worse. When 7 arrived, the number of XP holdouts fell quickly.

The trouble with 7 in that situation is that either 8.1 or 10 needed to be the next good version, but they weren't. Do you know anyone who actually wants their system to change its UI every six months? Or to install updates whether they agree with them or not, even though those updates can have serious consequences if they go wrong and cause inconvenience even if they work? Or who likes having their own computer phoning home without their consent? Or having ads inserted into their daily user experience?

If Microsoft was producing an operating system people actually wanted, they wouldn't need to push dodgy updates to get people to use it.


> Why do people still expect modern software to run at incredible speeds on hard disk drives?

> Run old technology with old software. It is ridiculous that consumer hardware is still being sold with crappy 5400 RPM disks in 2021.

You pretty much answered your own question. Whether you like it or not, people are still using hard drives because hard drives are still being sold. Just because those people prioritize capacity over performance, for a given price point, doesn't mean they want performance to be ignored altogether.


My Linux box runs totally fine on spinning drives. I'd like to upgrade, but SSDs have gotten very expensive lately. A 2TB SSD is like $300 now.


My 486 ran totally fine on 1.44MB floppy disks as well, what is your point?

As I mentioned down thread, you can get 1TB Samsung SSDs right now for $80. Not the best one, still faster than a hard disk. NVMe are a little more expensive than that, and even faster. The price is not really a valid excuse.

Yes, my 2TB NVMe is $300, and does sequential read/writes of 3,000 MB/s, which is close to 100x faster than an HDD. It's not space age technology, solid state storage has been consumer technology for 2 decades already.


>> As I mentioned down thread, you can get 1TB Samsung SSDs right now for $80.

You ever think about the college kid who can't afford that price? What about the family of five living on food stamps who can't just pony up the money for better hardware? What about large swaths of the population who are on fixed incomes?

The way software companies are going and your general attitude is, "Eh, this is old technology, anybody should be able to afford it, what's the big deal?"

Totally tone deaf to the poor, people living on the edge, and others on a fixed income. My mother-in-law is pushing 80 and I had to build her a new PC, since she couldn't afford to purchase a new desktop to do her taxes on now that our state requires you to file electronically and, shockingly, her tax software no longer runs on her 8-year-old desktop.


So instead of helping her learn one of the many (free) web-based tax applications, you built her a new computer to run a new (paid) version of the tax software she was already using? How does that even make sense?


She tried several of the "Free" web based tax apps, and always had problems with them. And yeah, I built her a PC that could run a current version of the software she paid for.

It makes sense when you want to help someone to use the software they already paid for, instead of pointing an elderly person to the web and saying, "See? They have FREE versions, go grab one and figure it on your own!"

Sorry, but that to me is kind of a callous solution compared to what I did.


At 80 many can't manage to learn anything new or different


> It's not space age technology, solid state storage has been consumer technology for 2 decades already.

You're exaggerating. The first really plausible SSDs to appear in consumer PCs appeared in around 2006 or 2007, so maybe 15 years tops. For instance, Dell offered a computer with 32 GB of SSD storage in mid-2007. This was already not much storage at the time, and the drive alone cost over $500. This was arguably not a consumer affordable product at that price, and the laptops involved were actually the Latitude series (targeted to businesses) anyway. Apple followed this up in 2008 with a 64 GB SSD, but this was a $1000 upgrade.

Suffice it to say that a vast majority of people were only buying (and could probably only afford to buy) computers with rotational drives well into the 2010s. Many of the computers they bought are still running today. Many affordable computers that shipped with Windows 10 still had rotational drives. It's absurd to overlook a whole class of people who can't, unlike the wealthy, upgrade their MacBooks every 2 or 3 years. They probably don't have MacBooks to begin with. Parents are passing old hardware between children like hand-me-downs. That's before you even leave the United States.

I think your comment misses the point at a deeper level, which is that most people in this thread already have SSDs (I certainly do), but are disappointed that performance matters so little to many developers that they are fine with seeing software run at the same speed on today's insanely fast hardware that its predecessor did 20 years ago. Software is much more powerful than it was in the days of floppy disks, but it's not much more powerful than it was 15 years ago. We've seen a generational increase in performance, but we've wasted it. I think that's worth some disappointment.


> Yes, my 2TB NVMe is $300, and does sequential read/writes of 3,000 MB/s, which is close to 100x faster than an HDD.

I'd highlight IOPS. A $300 SSD is about 3000× faster than an HDD.


On my 2012 mbp I have the OS and software running on the SSD in the drive bay, and a beefy hdd in the disk slot for storage. I set this up back when a 256gb SSD was pretty expensive, but maybe that sort of strategy would work for you now that SSD prices have gone up again, having a fast drive for software and the system and a big drive for storage.


I've had massive issues with Ubuntu on a HDD. Every time a big file was added to disk (like a download), the indexing system would run and render the whole system unusable for minutes (100% disk and CPU). Upgrading to an SSD solved the issue.


That's an Ubuntu problem and not a Linux problem, thank goodness.


That depends how good Linux's support for very-low-priority background tasks is.

UNIX priorities don't handle this situation well because a low priority process is still technically immediately runnable and can cause priority inversions by clearing out all kinds of system caches that your foreground app was using.


What exactly does "modern software" does that justifies needing the hardware to follow at such a pace?


It runs on more modern hardware.

And has more adware!


Modern <noun> is mostly worse than <noun>.


I'm usually frustrated that I've got dozens of GB of unused RAM, and some horrible 32-bit app is thrashing and paging to disk because it's getting close to using 2 GB.


>Do you expect Windows 10 to run on a Pentium II with 512MB RAM as well, because some version of Linux does?

Void Linux, Debian, Slackware... just fine. Maybe with 720p video with a good Radeon 9800 video card for the AGP bus. But a Pentium III with SSE would run much better.


More often than not I find that modern software regresses in terms of features compared to "older" software. An obvious example is software that has both a Win32 version and a UWP counterpart. The UWP version is usually stripped down in features, buggy and more resource hungry, e.g. Windows 7 Photo Viewer vs the Win 10 Photos app.

One would associate "modern" with something that is improved, better, faster. But that's not the case with the current state of software.


Windows 7 is 12 years old now. Spinning hard drives are 100% dead in the consumer market.

Why in the world would a modern operating system not optimize for SSDs?

I can’t think of a single reason to use a spinning drive on my computer. Almost every conceivable use case is better relegated to a separate NAS box.

You can get a 1TB PCIe NVMe SSD rated at 3100 MB/s read speeds for $125. Why would I chip off an order of magnitude of performance to save $40?


The thing is, computers with SSDs feel about as fast as computers were with HDDs 12 years ago. Some apps still take 10 seconds or more to load on brand new hardware in 2021. There shouldn't be any random wait times in daily usage.

Sure, things have gotten more advanced, but it feels like the speed improvements of hardware are being used for bloat, rather than making computers even faster.

It seems computer performance stabilizes at a level that's acceptable to most people, I guess because beyond that there's no incentive to spend resources improving something that's acceptable.


I’d love to find some hardware along with period-appropriate software and run tests to debunk or support your idea. I still just don’t buy it, but it’s hard to find quantitative evidence without just going out and trying it.

By the way, 10 seconds clocked on a stopwatch is a ridiculously long time. Almost no apps open that slowly, in 2009 or 2021.


You actually need 2x whatever you have if you want to still have it when the drive it's on dies or the data is destroyed by malice or mischance. Say you have 3TB of data now and may acquire more later. The cheapest SSD option I see for sale on newegg.com is 4TB for $322. Going all SSD all the time requires one to spend $644.

Storing the same data on hard drives can be had for about $140. The SSD is much safer for your data, performs far better, and you absolutely ought to want the SSD, but it's not a small difference once you start talking about storing more than your OS. Spinning hard drives as OS drives ought to be completely dead by now, but vendors are absolutely selling machines configured with them, new in box, all day every day - seemingly more so on desktop or all-in-one models.


I still use them in a NAS because you get a bit of "notice" before the data goes down, and costs at this level make spinning disks make sense.


Some remarks:

> I used Hyper-V as the hypervisor of choice

That is not how most end user installations are configured (aka, not as a virtual machine).

> 32GB fixed disk for each build.

That is much much less than the typical Windows 10 hardware.

> the fast boot feature has been disabled for the purposes of this measurement.

That is not the default and not reflective of most installations.


4 GB RAM seems impossibly tight as well. I'm not sure if 4 core and 4 GB was ever a common or representative setup for PCs; perhaps 2 core and 4 GB in the Vista timeframe...


4GB RAM is plenty common.

Source: junior enterprise machine at a German multinational in a third-world country.

It gets worse if you have a retail machine...


There were plenty of Core 2 Quads and Phenom X4's with 4 GB RAM around 2009.


I'm not sure why you're bringing cores into this. And 2 cores is still fine for a lot of purposes. And there were lots of 2 core machines with far less than 4GB. Even 1GB was "double" the minimum required for Vista, because OEMs pressured microsoft into lowering the vista requirements down to garbage levels.


> That is not how most end user installations are configured (aka, not as a virtual machine).

Though not the default, Microsoft is moving more and more towards hypervisor-based security, for both kernel stuff and for browser stuff. Right now you need to enable it, but I wouldn't be surprised if Windows 11 relies on it. The leaked installer already relies on having a TPM, after all.

Out of all virtual machine technologies, Hyper-V is probably the one that will give Microsoft the best chances at being near-metal without passing through hardware. Other hypervisors shouldn't pose a problem, but they're not under Microsoft's control.

If you have the time and hardware, you should feel free to test this on actual hardware instead; I doubt the results will differ much, though.


> Though not the default, Microsoft is moving more and more towards hypervisor-based security

So what? That doesn't change the fact that running these tests in a VM isn't going to be the same as running them on bare metal.


> That is not how most end user installations are configured (aka, not as a virtual machine).

IIRC if virtualization is turned on in BIOS, the host Windows itself effectively runs as a hypervisor-backed machine. I still think the tests aren't really representative, though.


Only if the Hyper-V role is installed, which it isn't by default IIRC.


You're right, hyper-v must be turned on via 'Turn windows features on or off' as well.


On my current gaming PC (i7-7700) I installed W10 in 2017 and... no problem whatsoever? SSD, 10s boot. idk how people end up with all the problems. I'm really curious, because there must be an underlying reason.


I think the Startup Processes really gets abused and people default to blame Windows for it. My time to a usable desktop seemed slow until I removed Teams, Steam, Dropbox and some other update utilities from Startup. Now it's pretty much instant.


I keep my startup fairly clean, and this is key to a 'fast startup'. There are about 10 different ways applications can auto-start in Windows, and you have to check them all, as all will be used. My laptop from 2012 takes 10-30 seconds. My laptop from last year is usually under 10. My NUC from 2011 is about 30 seconds and has been consistently that for the time I have owned it. Also, on older computers, check to see if it is thermal throttling, as the fans/paste can stop being effective and just need to be cleaned up after some time.

My parents bought this rubbish computer about 3 years ago; it is easily 3-5 minutes to start up. Which is due to a lack of RAM (3GB, which you'd think should be plenty) and Windows swapping during startup. Plus a bunch of startup apps they 'just can not live without'. I remove them, and get 'wow, the computer is so much faster'. A few months later, some update to that 'must have' app will put itself back into the startup (sigh).
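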
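If you want a quick inventory before clicking through Task Manager, a PowerShell sketch like this lists the classic Run-key and Startup-folder entries (an assumption worth stating: it won't catch services or scheduled tasks, which are among the other auto-start routes mentioned above):

  # Enumerates Run-key and Startup-folder entries (not services or scheduled tasks)
  Get-CimInstance Win32_StartupCommand |
    Select-Object Name, Command, Location, User |
    Sort-Object Location |
    Format-Table -AutoSize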


Amusingly, Microsoft provides their own tool (1) for excellent insight and enabling/disabling of all the various places applications can put their startup executables.

1 - https://docs.microsoft.com/en-us/sysinternals/downloads/auto...


I had totally forgot about that util! Thank you for reminding me! The sysinternal tools are great. They should just bake them into the OS.


I've been using hibernate for years now. I start my computer once per month. I really wouldn't care if it takes 5 minutes.

I'm not sure why that isn't shown by default anymore. But you can still add it to the menu and map it to the power button.


It isn't shown because on 10 "power off" is a lightly-tweaked hibernate.


Not at all.

Power off closes all your apps, hibernate doesn't.

I'm generally using 15 or 20 apps. I only have to open those once a month after updates.

If someone uses power off, they have to open them daily (I know, because I have other computers in my house configured like that).


True, it's not quite identical, but I expect that's why hibernate isn't on the menu by default (b/c power off was close enough in MS's opinion).


Yes. All of those. I also disable the various iTunes and Creative Cloud 'helpers'


If you think it's fast now, just wait til you remove Windows!


Sometimes it isn't all that simple.

My work laptop runs W10. At some point in the last 3 months or so, an update came in and fucked up Windows Explorer. About 5 times in a work day I need to use Task Manager to reboot Windows Explorer because the following things stop working:

- Cannot type into the start menu (the entire menu goes black. If I backspace anything I typed -despite not being able to see it- it will work again) - I can do this once, and then typing doesn't work at all.

- Taskbar will not hide behind full screen apps, clicking on icons will not bring the app to the forefront, I need to minimize all screens in front of it until I can find it

- Cannot view/change my wifi. The list just sits there blank, refreshing infinitely.

Likely unrelated, but at the same time, Windows randomly stops being able to reach my default gateway. I can disable/re-enable wifi and it works, but once it starts happening it tends to happen over and over until I reboot. Usually good for a couple of days after a reboot.

Lastly, on shutdown, the computer will bluescreen if I have had my external SSD plugged in at any point.

I haven't tweaked anything Windows Related anywhere. The only excuse I can come up with is one of the dozens of apps I use for my job conflicts with Windows somewhere. There's no useful log info during/after the points of issue. There's nothing in the scans for `sfc`, nothing comes up in the system troubleshooters. At this point I'm looking at doing a full re-install of Windows, but I have so much stuff to move across that I'm mostly dreading it.

My personal computer runs fine though...


Interesting! On my computer:

- Applications do not gain focus. When a new application is launched, it starts at the bottom of the window stack. I can't even alt-tab to it easily because it's all the way at the end of the list.

- Start menu is a crapshoot. Sometimes it doesn't open when I press the start key (but always when I click).

- Once Start (eventually) opens, Search is a crapshoot. I'm not even talking about documents (although I'm usually better off using git-bash's `find` command despite the search indexer regularly using 30% of my CPU) -- I mean I can't even type half the time. Of which half of that time, the start menu turns white (but only most parts of it?).

So my workflow for using the Start menu is, I often have to mash the Start key 5 or so times just to start typing.

What do I use it for? Mostly launching programs. Eg. last night, I typed Half Life 2. When I type the first few letters of "Half", the correct result shows up. But if I type "Half Life" it goes away because I didn't use a hyphen (seriously).

- Explorer has 2-3 second lags. I don't just mean opening it, I mean every damn operation. The funniest part is, if you have Sublime text open at the same time, you can see that the changes to a folder actually take place INSTANTLY (and Sublime detects and displays them within a few milliseconds of your action in another program) but Explorer, in which you did the action will take another 2-3 seconds to display its own changes. (?????!!!)

If it did all of this on day one I probably would have thrown my computer down a long staircase, but it's been such a gradual deterioration that this is just my life now.


If it's on a work laptop, it could be IT shenanigans.


Well you do have a beefy workstation with a fast drive, most people on windows are running some anemic laptop or the cheapest dell desktop their employer is able to order by the pallet.


How often do you use Linux to actually have a comparison? I have yet to see a Windows 10 pc that does not feel sluggish for me, even a fresh install has random waiting times I am simply not used to.


In my experience it depends on drivers. I bought a PC, I think it was 2017 too. Nvidia drivers were unstable and I was getting BSoD every few days. But at some point they fixed it. It was quite stable since then, no issues either.

That said, I'm trying to be careful to my Windows setup, avoiding installing anything that could tinker with kernel or deep OS integration. Basically it's clean Windows with simple software, no antiviruses, firewalls, registry cleaners, etc. May be it's getting slower with time, but it's barely noticeable.


I have an i5-3300 from 2012, 32 GB RAM, and a SSD. There are no performance issues to speak of. I have a Ryzen 3 3250U from 2020, 8 GB RAM, and NVME storage. The performance under Windows is lackluster, while there are no performance issues under Linux. I'm not sure what the problem is since the software configuration on both machines is quite similar and the performance issues still manifest themselves when there is free memory (i.e. the most obvious difference).


I think people just don't know how to be good computer users.. they might be able to program their way out of a paper bag but systems admin is not their paper bag


yes, if only people would get good, perhaps their OS would speed up.


In part, the way the author does: turning off sensible defaults like fast booting and layering on complexity for the sake of complexity, like Hyper-V.

The desire to criticize Windows also helps… I mean, the boot time for my iPhone is much, much worse than anything the author measured. My upgrade times are probably at least as bad, too. And Siri search requires an internet connection.


The first time I installed Windows 7 on an SSD it started in less than a second.

Less than 1s boot time. To desktop.


I find that hard to believe, but back when Windows 8 was new (not that I'd heard of it), I think I would've believed it. You enabled auto-login.


Great question. I'm wondering the same thing. I also built a PC in 2017 and installed Windows 10 on it. It initially had a 7s boot time following bios startup. Several updates later, the boot time is now close to a minute and app startup times are noticeably slower as well. I've always been very careful about what software I install on it. I've done all of the standard troubleshooting, but so far it hasn't gotten bad enough to warrant a fresh install.


Task Manager → "Start up" tab

Disable everything and see if it boots faster.


Thanks, that was the first thing I tried. I didn't want to go into great detail about my troubleshooting efforts because I don't think it's quite relevant to the overall discussion.


To add further, those more adventurous can also use Sysinternals free Autoruns utility to find out everything that could start.

Handy hint, it has an option to hide all the default Microsoft stuff so you just see third party.


Not having a windows machine here to test this, I can't believe some of the results there. Specifically the "Win32 applications" ones. 7 seconds to open the file manager or text editor? Or MS Paint?! On my laptop here I can load gimp including plugins within ~1.5-2s, and I never even bothered to optimize anything about this. I wouldn't even be able to measure the opening time of e.g. gedit without some sort of scripting.

Are win32 apps really this slow to start up, or are the 7s "baseline" measurements in that experiment some cumulative value over all the applications?


The benchmarks are hard to compare. He's using a VM and a fairly small amount of RAM (I don't know of many PCs only shipping with 4GB RAM these days).

I can say that Word takes about 2 seconds to open on my system, but I'm not running in a VM and I have 48GB RAM, so Windows can cache a lot to optimize opening times.


4GB is actually pretty common for "budget" laptops in India. Check out Amazon.in and search for laptops. Anything above 40,000 INR is beyond the budget/cheap range. For the past few years, there has been a trend of buying refurbished devices, both laptops and desktops.

I bought a Dell e7440 with an i7 5th gen, I think, and 4GB RAM, for 22,000 bucks. I had to upgrade to 8 gigs myself, but 4 is pretty standard here.

Oh, and the "gaming" and productivity series of brands are usually upwards of 80,000 INR, which is exorbitantly high for the majority of Indians.

Just for some context, interns are paid 6-7,000 bucks a month, and an unskilled day labourer earns at most 700 bucks a day. So for a day labourer to buy a cheap laptop for 30,000 bucks, they have to work for 42 days.

Oh, and you had last year's rush to buy cheap devices for online classes; for people with 2-3 kids, buying a laptop for 50,000 times 3 is out of the question.

Again for context, cars in India start from 290,000 for a Maruti Alto.

https://www.zigwheels.com/newcars/cars-under-5-lakhs

Apple devices are a rich man's luxury product here.


Apart from purchasing power, laptops in India are expensive because of high levels of tax (28% GST I think, which would be higher than what Sweden charges). I'm not sure if import duties apply on top of this as well.

The reality is that any laptop with 4GB RAM and a 5400rpm hard drive and running a modern OS won't give a great user experience. ChromeOS on Chromebooks could be an exception, but even cheap Chromebooks have small eMMC storage instead of 5400rpm drives.

If you have such a system, a lightweight distro such as Lubuntu will probably give you a much better time.

For Windows 10, 8GB RAM + any SSD seems to be the minimum usable configuration.


nah. local GST is only 18% but import duties are added on top which are not passed down.


Lots of Indian laptops come with HDDs. Even my office laptop, after 2 promotions. When I got an SSD a few years back, I realised random failures in dev tools were not normal, there were lots of bottlenecks.


4GB is just criminal these days - running a few modern and/or memory intensive apps like Chrome on Windows in 2021 is just a dreadful experience.

I can't believe that over two decades after x64/64 bit was introduced that manufacturers still think it's 2001 and "4GB is just enough to qualify getting past the 4GB 32 bit limit"...


4GB is standard on cheaper laptops in the UK as well. 8GB is midrange and if you want more you're paying a premium price (or upgrading it yourself).


Yeah.

That's one reason I use Linux. I am a student and I have this 4GB laptop I bought for less than INR 30,000, and Linux is still super responsive on it. Windows lags for 5 seconds between pressing the Windows key and the search results showing up.


I used to have a cheap machine with 4GB of RAM and an HDD. When I got some money, I went for an SSD instead of buying more RAM. The difference was significant. I happily worked on that laptop (with 4GB) for another 6 months (thanks to the SSD for making it bearable) before upgrading RAM. When in doubt, buy more SSD, not RAM :)


This exactly mirrors my experience.

"Starting" is slow in Windows and just keeps getting slower. That could be booting, logging in, waiting for an application to start, waiting some more for an application to start, and then waiting even more, not being sure if it ever is going to start, waiting some more, and then it starts.


I have an old Windows machine for playing WoW (I'm not happy with the Linux methods, but some day...), and it does not have an SSD. One thing I found that helped was to disable a lot of the logging. In PowerShell, on an administrator account:

  wevtutil el
then select some or all of the logs to disable with

  wevtutil sl {logname} /e:false
Another improvement, since I only use it for WoW, was to disable the Spectre mitigations [1], remove some bloat with BleachBit [2], and then defragment the drive. Windows 10 specifically can be sped up a little bit by disabling some of the telemetry that is hidden away, using O&O ShutUp10 [3], and by improving network latency a little bit with the TCP/IP Optimizer [4], as some startup applications rely on a response from servers. Now it starts up the same as when it was new. Maybe some day I will get an SSD.

Another thing I found useful for Windows 10 was to disallow applications from running in the background, which just "suspends" applications rather than quitting them. Suspended applications still take up memory and pagefile; it's easier to see this in Process Explorer [5]. I think they do this to mimic the behavior of a cell phone. Disabling suspending is less useful for startup time and more useful if you value your free memory.

One more thing I should add that helped was to cache DNS on my home network.

[1] - https://www.grc.com/inspectre.htm

[2] - https://www.bleachbit.org/

[3] - https://www.oo-software.com/en/shutup10

[4] - https://www.speedguide.net/downloads.php

[5] - https://docs.microsoft.com/en-us/sysinternals/downloads/sysi...
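
For the background-apps part, here is a minimal sketch of doing it without clicking through Settings. The registry path is the commonly cited per-user one for Windows 10 and is an assumption that may not hold on every build, so treat it as such:

  # Per-user "Let apps run in the background" toggle (1 = disabled)
  Set-ItemProperty -Path 'HKCU:\Software\Microsoft\Windows\CurrentVersion\BackgroundAccessApplications' `
    -Name 'GlobalUserDisabled' -Value 1 -Type DWord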


... and you know, I am always looking in the logs and they are filled with stuff that seems irrelevant, but never anything that explains why your computer crashed, why a service quit working, etc. -- generally opening the Event Viewer seems to be a complete waste of time that I never get insights from.

Disabling the log completely doesn't seem like a loss at all.

As for the DNS I agree with that.

Years ago when I first got DSL (1Mbps) I noticed that the browsing the web on a DSL line was slower than the dialup because DNS lookups took forever.

Switching to a resolving DNS server was like taking a ton of bricks out of my car and installing a supercharger.

I wonder if ISPs are just indifferent to DNS speed or if they see it as a form of "traffic shaping" that lowers load on their network.


Not counting BIOS time, it is about 5 seconds to the Windows login on my Windows systems with NVMe drives. It does take about 20 seconds after login for it to finish loading all of the tiny startup notification icons, but that's actually intentional on Windows's part so it does not overload the system and prevent you from launching the apps you want. After login I can click on the web browser or email and it will launch instantly for me.

If you are doing a lot of waiting look into getting an SSD for your boot drive.

Or you're running some kind of corporate security product that is going to a remote server for "Mother, may I launch this?" on every EXE and DLL.


The point is, why does OP need to spend money on an SSD (and create e-waste in the process) when an HDD used to be reasonably fast for the same task?


LOL! No, the HDD never was reasonably fast. Our expectations changed.

I booted up an old Windows XP box about two years ago before recycling it. It took almost TWO MINUTES to finish booting to the desktop. Some kind of fairly standard 500 GB Western Digital Blue drive. No, I don't know if it had ever been defragmented or had its TEMP files cleared or had old driver modules removed... It was just slow.


I just don't buy it. Sure, SSDs have always been faster—I remember how exciting they were when they were new—but HDDs were always "fine". Not fast, maybe, but nothing ever felt broken. I never waited multiple seconds to open a search box, for example.

And more broadly... have you ever tried running Windows XP in a VM, on modern hardware? Even virtualized, if you feed it anything approaching the resources of a typical 2021 machine, it absolutely flies compared to Windows 10.


> LOL! No, the HDD never was reasonably fast. Our expectations changed.

If the anecdotes of this thread are to be believed, it's not that our expectations changed but that the software has changed.


I have an SSD for a boot drive on my personal and work computers. Both go out for lunch for long periods of time.

It might be me.

I know my reaction time is 35% faster than my teenage son. I am much more bothered by latency than other people, I'm starting to think that I experience more time than other people.

I can't stand playing single-player games on a Samsung TV that isn't in game mode. The sloppy response drains out all my fun.

When I was playing League of Legends on a "gaming" laptop, I found I couldn't ever win (not just feel like I was floundering: attacks hitting me that I couldn't do anything about, people avoiding my attacks 100% of the time) until I attached an external monitor. I took videos and could show the timer was 30 ms late on the internal monitor compared to the external monitor.

Most people seem indifferent to this sort of thing, but not me.


> Most people seem indifferent to this sort of thing, but not me.

The average person who walks into a museum will be able to tell you which paintings they like, and which they do not. An art historian, however, will be able to tell you why they like certain paintings, and discuss specific details in the composition.

You are like the art historian. When you play a game on a laggy TV, you can tell that the TV's latency is making the game worse. A layperson just finds the game less engaging, and doesn't know that it could be better, much less why.


Arguably, the issue is when the HDD machine isn't the only computer you use. If you switch to using an SSD on, say, a laptop, boot times are super fast, you get used to that, etc.

Then you go use an HDD, and suddenly everything feels slow. The thing is, it's not actually slower than it was before, it's just slower than your most recent comparison point.

If you only use an HDD, though, those boot times are just the way it is. I never actually found there to be a large difference between Windows and Linux. The difference that really bit me was startup apps on Windows, but fresh installs were plenty quick.

I think in a lot of ways it's like getting off a freeway after a while. On a regular in-city drive, the road speeds feel normal and reasonably quick (assuming no traffic). When you've just gotten off the freeway, though, you're at half the speed of what you've been driving at for the last hour or two, and it feels very slow.


HDDs have been slow since before Windows 10 was announced. I was converting friends to SSDs during the Windows 7 era. At least on HN I would expect technically minded people to have some perspective on the speed of storage mediums and how _fast_ and _cheap_ solid state drives are. Yes, I said cheap.

A crappy SSD from a good brand is £70/TB (Samsung 860 QVO), and it is ORDERS (plural) of magnitude faster than a hard disk drive.

You can run whatever you want on antiquated hardware, but please people, stop complaining about it and get on with the times already.


HDDs were never fast. The quick boot/startup times really started with SSDs.


How is stopping these days? I seem to remember there were about 79 reasons why Windows might not shut down.


The only issue I've had with stopping is when it decides to update, and just hangs there with a completion percentage and the message "This could take a while." I almost always end up holding down the power button because it'll sit on the same percentage, sometimes 100%, for hours.


I have a Windows virtual machine that was doing this to me a lot, and making Ubuntu reboots take a long time, and then requiring a disk check after the unclean shutdown.

I found a registry key that tells Windows to force stop apps that block shutdown. I may lose an unsaved document once in a while but I would have lost it anyway, and I am pretty good about saving things when necessary.

  HKEY_CURRENT_USER\Control Panel\Desktop
    AutoEndTasks  (REG_SZ) = 1
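
If it helps, here's the equivalent PowerShell one-liner for setting that value instead of doing it by hand in regedit (current-user only; apps blocking shutdown get ended without a save prompt, so unsaved work can be lost):

  # Force-end tasks that would otherwise block shutdown
  Set-ItemProperty -Path 'HKCU:\Control Panel\Desktop' -Name 'AutoEndTasks' -Value '1' -Type String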


My main complaint is all the programs that think all your documents are so precious that they refuse to shut down. Reminds me of

https://www.youtube.com/watch?v=kyOEwiQhzMI


That VM I shut down last week is probably still shutting down. Hope this answers your question.


that happens to coincide with the 79 ways to "power off" windows.


How much of the slowdown has to do with the Spectre and Meltdown mitigations? There was a similar thread the other day about drastic performance hits on the Linux side.
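
If anyone wants to check what their own box has enabled, Microsoft publishes a SpeculationControl PowerShell module that reports which mitigations are active. A minimal sketch, assuming you're OK installing it from the PowerShell Gallery:

  # Install the module for the current user, then dump the mitigation status
  Install-Module SpeculationControl -Scope CurrentUser
  Get-SpeculationControlSettings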


The spike in there is really weird. I wonder if that's Spectre mitigations causing the bulk of the slowdown. If that's the case, I'd be curious to know whether disabling them helps and whether popular Linux distributions show a similar performance loss.


My up-to-date Windows 10 on a 3 years old desktop takes 9 seconds to boot (I just benchmarked it). I don't remember it being way faster before.

So I wonder how OP gets 34 seconds, and how he went from 13 to 34 seconds over a couple of updates. Mine definitely didn't get 21 seconds slower.
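
For anyone who wants to compare without a stopwatch: Windows records its own boot durations in the Diagnostics-Performance log (event ID 100), so a rough sketch like the one below can pull the last few boots. Assumptions: the log is enabled (it is by default) and you're running from an elevated prompt, since reading this log can require admin rights.

  # BootTime in event 100 is the measured boot duration in milliseconds
  Get-WinEvent -FilterHashtable @{LogName='Microsoft-Windows-Diagnostics-Performance/Operational'; Id=100} -MaxEvents 5 |
    ForEach-Object {
      $x = [xml]$_.ToXml()
      $ms = ($x.Event.EventData.Data | Where-Object Name -eq 'BootTime').'#text'
      '{0}  boot took {1} ms' -f $_.TimeCreated, $ms
    }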


What hardware are you running? OP was running a virtual machine with "4GB of RAM, 4 cores and a 32GB fixed disk for each build".

I do wonder if they did only one run per test or multiple, since n=1 means noise can mess with the results.


Probably OP is using a laptop.

There are differences, but I've read a lot of comments and a lot of people aren't specific. So it's difficult to judge.


One time Windows got a "feature update" that made it not boot. Apparently it was an issue with Lenovo motherboards that is still not fixed to this day (afaik). In any case, that was the kick in the pants I needed to switch to Linux. Everything has gone swimmingly since!


Just wait: one day you will have an update to Linux and suddenly your whole system won't boot. This doesn't happen often - I've only had it happen 2 or 3 times in 10 years. It is good practice to have a restore thumb drive around, ready to go, just in case.


The number of times my Manjaro installation broke itself after a "full" system upgrade that lacked the appropriate GPU driver updates... From an OS that supposedly uses upstream Arch repos as a sort of guinea pig and waits until the dust settles upstream to roll out the updates. After a while it gets old to have to press Shift and advanced-boot-options my way into the older kernel.

Both Windows and Linux are not perfect. With windows it feels like you have to fight to keep control over your computer. With linux it feels like you have to fight virtual poltergeists.


Debian (LXDE) is, I find, dependable. Sure, it might have a Python version last updated in January, and my dodgy WiFi card might need restarting most mornings, and I might have to run `pulseaudio -k` when the sound starts getting laggy, but it just doesn't break. (I expect restarting my computer more often than yearly might help some.)

The only Debian issue I've had is needing to delete the Intel graphics drivers to get Vulkan working. (Yes, delete; it doesn't need them on my machine, despite having an Intel card.) Nothing else has ever broken.


you can't possibly compare Manjaro to Windows.

Manjaro is a rolling-release distribution!! What do you think will happen under this model? It has the same issues as Arch and Gentoo.

Consider Debian instead, where upgrading is always viable but doesn't just happen randomly. My current Debian install dates back to _etch_ when it was frozen to hit stable. Years later I'm on testing for bullseye, on newer hardware, and everything is fine. Never any issues upgrading from stable to stable, only went to testing because I started needing newer libs for my GPU.


Manjaro is not a good choice. I don't have any arguments ready right now.

Choose something else:

New hardware? Fedora. Otherwise, Pop!_OS or Debian Unstable.


On my new laptop the Windows drivers were unstable (I tried both the default Windows drivers and the manufacturer's drivers). My audio would work for a few hours and then disappear until a reboot. I tried the latest Fedora Linux and it's stable so far, no driver issues at all. I was pleasantly surprised.


Same thing that gave me the kick to go over to Linux. Unfortunately I had a no-boot issue updating Fedora last week and had to do a complete re-install... at least I'm not paying for Fedora.


May I ask which distro?


Not OP, but I went to Ubuntu at first, but since it didn't like my Lenovo Yoga's tablet mode and screen rotation I gave up fighting it and went to Fedora. Works perfect out of box, I only had to change a few personal preferences to get it how I like it (changing touchpad so tap-to-click works, and using two finger tap for right click instead of lower right)


I've been using Manjaro for the past 2 years. I used KDE for the first 6 months, but ended up switching to Gnome for the rest.


How were these measurements obtained? Are they an average, and if so what was the variance of the measurements? For some the difference seems substantial but without knowing the variance across measurements it is a little difficult to assess whether the differences are actually significant. (Basically would there be a statistically significant difference between each condition)

Obviously this would be a lot more work, so I don’t want to detract from the work that’s already been done.


Any chance some of this could be due to sidechannel exploit mitigation? The timing seems about right.


Almost certainly. Several of the worst benchmarks (Win32 app launch, UWP app launch, and the Explorer stress test) all start increasing in the 2018 H1/H2 releases. That matches the timeline for the Spectre/Meltdown fixes, and those are the benchmarks that I'd expect to be most affected.

17134 = 2018 H1 (1803)

17763 = 2018 H2 (1809)


1809 is the first build that shipped with the mitigations, so that explains why there are a lot of performance problems around that time.


I don't know about slowing down: I do know that I keep getting stuck on some update that I can't move beyond due to some cryptic and unfixable error.

This has happened for the 2nd time in a year, and I end up having to download an up-to-date ISO to move past the dead-end update.


I can't remember the last time I used Windows search because it's worse than useless - there have been multiple occasions in the past where it can't even find a file in the current folder right in front of my eyes. Nowadays I just use Everything which I think is one of the best piece of software ever.


An important thing that it took me a long time to realize is it doesn't search file names by default, only file contents. To search by name you have to use the 'name:' prefix. This is nonsense, but it's also easy to overcome once you know about it.


Wow really? I never knew that. Who thought it would be a good idea to have the Explorer search NOT do filename search by default?


It's probably how the first iteration worked before anybody knew what would and wouldn't be intuitive for this kind of utility, and they just haven't updated it since


Also when files were mostly just text, it would have made a lot of sense


No, this search was new in Vista (iirc).


Oh, well then there you go ;)


Edit: I was wrong about this behavior. Could've sworn this had happened. `name:` definitely helps speed up and refine the search, at least


Honestly interested how this plays with Control Panel, various config screens / apps, etc.

It seems the worst offender of "type in exactly what the link / window header is titled, does not show up in results."

I haven't figured out if this stuff just isn't indexed, is de-ranked below everything else (my guess), or is using some weird non-user-visible tags that aren't named identically.


what? ridiculous, of course it searches filenames


I don’t think this is right. I just did a test with a text file and Windows not only found a word only in the title (not in the document text), but also highlighted the title match in the results.


Hmm. Maybe I'm thinking of macOS? This definitely happened on one of the two and I was pretty sure it was Windows


Trying this right now with various commonly opened files and still getting no results. It's incredible how bad it is.


Off topic, but is Google Drive search this way too?


For me Windows Explorer search became useful when I learned about its syntax. I can't remember it exactly right now, but I think it's something like `name:*.jpg` for example (and the worst thing is it's localized, so you have to use whatever Windows language you use). This way it does not try to be smart and just searches for file names. I think that in default mode it searches inside indexed files or something like that, which probably is useful for ordinary users trying to search inside their docs.



One of the things I miss most from Windows is the CHM help system. Now selecting Help in the menu opens a web page in Edge regardless of your default browser, or in an awkward, sluggish help minibrowser app.

The query syntax should be available offline right in the operating system, not in a web page.



Concur - 'Everything' is one of the first tools I install on a fresh PC. Blazingly fast file-system search with many useful options


Yep. One of very few pieces of software that when it runs I swear I can actually hear the zillions of transistors in the CPU doing their job. Unlike an Electron app that tasks 16 cores with endless fucking apologizing for their creator's love of layers upon fucking layers of abstraction...

Sorry, my PC's acting slow today, I'm tired and had a glass of wine already :P


I also like how it can make a machine unusable for like an hour after an update while it rebuilds the indexes. I have some laptops where it will take upwards of 45 minutes to log in after a patch. Windows 10 really hates slow spinning HDDs, and these are machines with plenty of memory (8GB) to avoid paging.


Found Everything back in the Windows 7 days and have never used anything else. It still amazes me that a silly little freeware search outperforms what Windows has. Really goes to show Microsoft's utter contempt (or apathy) toward the user.

Technically though, there is still a use for Windows search on the Start menu: pulling up Windows components like the Control Panel or Disk Management quickly.


> Technically though, there is still a use for Windows search on the start menu: pulling up Windows components like the Control Panel or Disk Manger quickly.

And not even that, since sometimes typing "Panel" will not find Control Panel, or "Power" will not find "Dell Power Manager". I am baffled as to what the actual algorithm behind this "search" is. I am quite sure it is not an indexing problem, as searching for the same items by prefix usually works.

And as usual when it fails to find what you were looking for, you are just one Enter key away from being sent to a browser and Bing. Way to raise your ratings...


And for extra fun, sometimes you'll start typing and get "Contro" in and it will pop up "Control Panel" at the top, but then by the time your fingers have caught up and stopped typing you've got "Control Pan" and you mash the enter key and... it suddenly replaced it with a bing search for "Control Pan".

I actually generally enjoy Windows 10 as an OS, but I actively dislike the start menu search. It's worse than useless.

I just went and punched "drive" in a few times. The first couple of times it gave me "Computer Management", since it's related to drivers, apparently. Then it decided to give me "Create and format hard disk partitions". Meanwhile, the several options that contain the literal word "drive" in their name are all ignored as the "Best Match".

I too am completely baffled by what sort of algorithm it's using to find and select the most relevant search results.

I have PowerToys installed and have "PowerToys Run" bound to Win+R. I use that pretty much exclusively these days as an app launcher.


https://en.wikipedia.org/wiki/Everything_(software)

https://www.voidtools.com/forum/viewtopic.php?t=590

This is closed source software (Freeware) as far as I can tell. I realize you're already on MS Windows, but downloading a binary blob from some third party that's reading all files on your system seems a little too trusting.


Everything has become ubiquitous enough over the many years to be completely trustworthy for most Windows sysadmins. Additionally, I agree with OP that it's one of the best pieces of software ever.


Voidtools is well known and really good.

I've been using them for years now.


I can't live without Everything!


Slightly off topic, but I ditched windows explorer for most things years ago. I’ve been using a neat indie app I found called fman. Not affiliated, just a fan

https://fman.io/


I use Q-Dir, a quad-pane file explorer. http://www.softwareok.com/?seite=Freeware/Q-Dir


Haven't used Everything but I am very happy with locate32 https://locate32.cogit.net/


FileLocator Pro is another nice one. It's handled my Ctrl+F hotkey ever since the early days of Win7. Doesn't index, but uses techniques that quickly scan the MFT.


FileLocator Pro for me (there's a free personal version). Been on that since Windows 7 quite frankly.


Thanks. Hadn’t heard of this before.


fman is another rad File Explorer alternative.


These speed issues are annoying, but the thing that kills my experience is the lost clicks. When one seems to be missed - and it's quite common that Windows is just being its usual useless self - you don't know for sure whether it really missed the click or not. Invariably, the time you think "it did miss it, try again" and click again, you find that your slow Windows machine is now stupidly struggling to do the damn task twice!

The other issue doesn't sit completely with Windows: in a corporate environment, there are numerous remote activities that get hooked up without much thought or care and it only takes an occasional slow response with one or two of them for Windows to become unusable.


That's one of the main reasons why I switched to Linux. Double-clicking a file in Windows to open it would activate file renaming instead. This was all on their Surface Book 2, which you would think would be fast enough, but it happened quite a bit. I've been using Windows since XP as well, so it's not like I don't know how to double-click things. It's something that only started happening a few years ago.

Anyway, Linux is great now. Clicking only selects or opens a file; to rename, you have to open the right-click menu or press F2.


It would be interesting to see how much worse it is if you upgrade in place. Or is that what this test did?

Overlaying the Spectre etc. mitigations could also provide some insight.


This is purely speculation and observation, but I had to disable all the anti-telemetry hacks on my (aging) W10 gaming desktop and I noticed a marked increase in latency in everything from opening a folder to launching simple applications. Once I re-enabled all the patches, the latency seemed to vanish.

I have no data or hard evidence to back any of this up so take it with a large grain of salt.


What's the fastest way to configure this?


The two apps I have used in the past have been O&O ShutUp10[1] and SharpApp[2].

1: https://www.oo-software.com/en/shutup10

2: https://github.com/builtbybel/sharpapp
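
Not an endorsement of either tool, but for the curious, here is a rough sketch of the kind of registry policy these utilities flip under the hood. The best-known knob is the AllowTelemetry value under the DataCollection policy key; the exact key path and the meaning of each level are assumptions on my part, it needs an elevated prompt, and the dedicated tools also touch scheduled tasks and services that a single value like this doesn't cover (Python's built-in winreg, Windows only):

    import winreg

    # Assumed policy key that the "Diagnostic data" / telemetry level is read from.
    KEY_PATH = r"SOFTWARE\Policies\Microsoft\Windows\DataCollection"

    with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                            winreg.KEY_WRITE) as key:
        # 0 = "Security" (honoured only on Enterprise/Education/LTSC editions),
        # 1 = "Basic" on other editions.
        winreg.SetValueEx(key, "AllowTelemetry", 0, winreg.REG_DWORD, 0)

    print("AllowTelemetry policy written; takes effect after a reboot.")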



For me, every update seems to trigger some .NET compilation in the background (if I remember right). This process destroys your disk while it runs, and contributes a lot to the slowness in my experience.
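
If that background churn is indeed the .NET Native Image Generator (ngen), one workaround I've seen suggested is to force the queued compilation to run to completion in the foreground instead of letting it trickle along after every update. The framework path below is the usual 64-bit .NET Framework 4.x location and is an assumption; run it elevated (minimal Python sketch):

    import os
    import subprocess

    # Assumed default install path for the 64-bit .NET Framework 4.x ngen.
    ngen = os.path.expandvars(
        r"%WINDIR%\Microsoft.NET\Framework64\v4.0.30319\ngen.exe")

    # Drain the queued native-image compilation jobs now rather than letting
    # them grind the disk in the background for hours.
    subprocess.run([ngen, "executeQueuedItems"], check=True)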


I built my mom a moderate Windows PC for her accounting work a few years ago. Pretty standard Intel build, no graphics card because it's not like she needs one.

And for the most part it works fine, until every few months it slows down to an unusable crawl and I have to hop into Task Manager, see which rogue Windows service is bugging out this time, Google it, and find some forum post somewhere telling me which registry edit will disable the offending service and restore full speed.


CompatTelRunner.exe (Microsoft Compatibility Telemetry)

I've killed as much telemetry as I can, but every time my PC loses a core or two to a random background service, it's that piece of shit.

No matter how much you purge it, it comes back. Removing execution permissions seems to work best, because Windows still realises that the file is there, but eventually it'll have its ACLs restored and the shitshow starts again.
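
For anyone wanting to try the ACL trick described above, here is a rough sketch (Python driving the stock takeown and icacls tools, from an elevated prompt). The System32 path and the "Everyone" group name (which is localized on non-English installs) are assumptions, and as noted, Windows tends to put the permissions back eventually:

    import os
    import subprocess

    # Assumed location of the compatibility-telemetry runner.
    target = os.path.expandvars(r"%SystemRoot%\System32\CompatTelRunner.exe")

    # Take ownership so the ACL can be edited, then deny execute to Everyone.
    # ("Everyone" is the English group name; adjust on localized Windows.)
    subprocess.run(["takeown", "/f", target], check=True)
    subprocess.run(["icacls", target, "/deny", "Everyone:(X)"], check=True)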


IME Windows takes more work to administer than many Linux distros at this point.


Windows may require less administration, but the administration you do have to do takes much longer in my experience. Some Linux things are arcane, but much of it isn't; with Windows, when you get to things Microsoft doesn't really want you to play with, you're on your own.


For me, what makes Linux a better administration experience is the ability to back up and move configuration. In Linux it's all files. In Windows? Who knows. It's registry entries, files spread across hidden directories, and god knows what else.


True that. On Linux _at least_ you almost always have a way out of issues, because you can pretty much forcefully change, update or modify everything. Windows has too much "magic" nobody outside of Redmond, WA truly understands, so when things go south you can only hope that it can fix itself, or it's a goner.


Sort of like macOS: every update added more features and slowed it down.

The special effects slow it down; you can disable them: https://www.cnet.com/how-to/easy-ways-to-speed-up-windows-10...


Over the past decade, software updates in general have mostly become another form of malware, as software makers care more about extracting money from their users or adding pointless shiny features than actually making their software fast, stable, or better able to serve the user.

As a result, my default policy now is "If it ain't broke, NEVER fix it." Install, then disable all updates, forcibly if necessary. On Windows and your browser at a bare minimum, and probably on any other software you use often or rely on. Yes, there are risks and downsides to doing this: security holes won't be patched, bugs won't be fixed, and new features won't be present. But everything in life is a balance. And on balance, the bad of updating is almost always worse than the good is good. This article highlights that in brilliant neon letters. How on EARTH do you justify boot and reboot times doubling or tripling, and just about every metric getting worse over time? Why would I let Microsoft fuck with my system if they're going to make it shittier?

And of course, the article leaves out the worst part. Never mind gradual performance drops, there's a good chance that update you just downloaded just broke something. Entirely. Something you NEED to work. When is a fix coming? Anyone's guess.

Install Windows 10 LTSB. It's the only one worth using. And unless you NEED a new feature or bugfix, never EVER update.


I can never relate to these posts. I always update all my stuff pretty much immediately when I see an update available. I run Windows, so I'm talking Windows Updates, browser updates, drivers, anything and everything. It actually gives me a pleasant feeling knowing I get bug fixes, security fixes, maybe some new feature every now and then. I'm definitely naive enough to hope for performance improvements rather than worry about performance regressions. Historically, it's extremely rare that an update messes something up for me.

Maybe I'm not as quick to update as I think I am, giving vendors time to fix broken updates before I get them? I dunno. I'm also privileged in that I update my hardware quite often. Maybe that hides any worsened performance from my perception.

I'm not sure if I understand your strategy correctly, but disabling (security) updates on Windows and browsers sounds like a recipe for absolute disaster. To me that sounds waaay more risky than any risk taken when installing (potentially broken) updates from MS/Mozilla/Google


> I can never relate to these posts.

I can relate to them very well. I've wasted far too many hours cleaning up after one bad update or another. Windows and driver updates have been among the worst offenders. You could argue that the good updates might have protected me from malware that would have wasted even more, but I have no evidence to suggest this is the case.

As a result, I tend to be very binary about updates now. If it's something that involves direct contact with remote systems, it gets updated almost instantly, at least if the update is anything security related. Browsers, email clients, phones, publicly accessible servers, anything like that. The risk of not updating promptly in that situation is too high, even though I've seen many adverse changes when updating those kinds of products too. For most other things I use, if it's doing its job OK already, it probably gets updated if I have a specific reason to want a newer version and otherwise gets left alone.

I detest the modern trend for bundling essential updates like security patches together with other changes that users might not want, as the likes of Microsoft, Google and Apple all now do. Fixing a defective product is one thing. Changing it arbitrarily is something completely different.


And yet, I see this attitude pretty frequently in the software world. I too don't understand it (all my packages move to the latest dependencies as soon as possible). It's very often not the case that things will magically start working again after a version breaks you, and from there it's just a ticking time bomb until some random CVE comes around, making your app exploitable in all sorts of interesting ways.

Yet so many software devs take the approach of "Well, this version works, so why bother with the next?".


It’s a good idea to keep updates enabled so that you get security patches, but it’s ridiculous that you have to do this. The industry seems to have given up on the idea of making finished software, so instead you get endless churn - the bugs and vulnerabilities are infinite because the bug fixes are mixed into the same update stream as new features which themselves come with new bugs…


Sony messed up a phone update a few years ago and it took them maybe two to issue another update to fix it. I can't remember what it was now, but it was annoying enough that I stopped updating until a few other people confirmed it was OK.

I've had the same happen with two Android apps as well, so I have auto-update turned off now and just go through the list a few times a year. If the changelog just says 'bugfixes' but I'm not noticing any bugs, then I don't bother updating.


I used to find new technology to be exciting. Now, new technology (speaking about more than just software here-- appliances, cars, etc) causes me to instinctively respond with "What features, capabilities, ownership models, etc, are being taken away from me with this new thing?"

I assume it's a function of my getting older (44), to a large extent. It can't all be about me getting older, though. I can't be alone in thinking that a substantial portion of "advances" in existing technology have been for the benefit of manufacturers rather than owners. Surely "normal" people are starting to notice this too.

Edit:

I also think: "What capabilities that I already had are going to be sold as an add-on or, more likely, rented back to me on a recurring subscription?"

A close second is: "Will the thing be usable when the company goes bust and takes their 'cloud' infrastructure with them? Or how about when they decide not to update their thick-client mobile device app for new device OS's?"

A third is usually: "How will this new thing spy on me?"


Ever since Windows 10 came out, I don't think there'll be a return to a generalized-efficiency-tool era for the masses.

At some point, enterprise-grade companies that produce software get inefficient in terms of fixed costs. They somehow need to make enough money for all the clutter on top that doesn't have anything to do with the development/design of the product itself.

So, as long as a software-producing company is a public company with shareholders and market expansion isn't possible anymore... things necessarily _have_ to get crappier for the end users so that the company can keep producing more money out of nothing.

You can see that everywhere in enterprise-level software companies. Microsoft, SAP, DATEV, just to name a few, didn't incrementally make the software better - instead they artificially created lock-ins for their end users so they aren't able to switch in the future. Even if you argue that those companies deliver to many market segments, I'd go as far as adding all custom OEM Android apps to that list, as well as smart cameras, smart TVs, everything related to IoT, etc.

Most of the smart devices are about removing control from the users and forcing them to pay more money for the same feature set later, gradually, without them noticing in the beginning.

Now with Google FLoC we will basically have the same thing applied to the web, and it won't go back to the good old "just block cookies" ways. Google will persist in tracking everybody, and they have the leverage to do so.


I've been the same since Android removed phone recording capabilities.

The internet feels slower each year, and the screen space dedicated to content rather than ads or popups shrinks every day.

It's like in Idiocracy, where the guy has a huge screen but maybe a tenth of it is for the content.


I'd forgotten that TV UI from Idiocracy. Now that I see it it certainly does seem a bit prescient: https://scifiinterfaces.com/idiocracy-tv/

(Then again, sadly, a lot of that movie seems prescient...)


I don't remember your name popping up as my HN doppelganger the other day, but you could have been me writing that comment.

I don't think it's because we're getting older so much as because we can remember a time when new technology didn't suck like this and so we see straight through the rationalisations and marketing doublespeak.

We live in a time when a right to repair can be controversial, copyright is being distorted to limit what users can do with tech they have bought and paid for in ways that have nothing to do with giving copies to other people, and the working life of equipment is being dramatically shortened by software limitations or a lack of ongoing fixes for software defects from the manufacturer. There should have been laws protecting ordinary people against these kinds of dangers long ago.


I'm young (20) and I already have this negative view. I used to be excited about how fast computers would be. How good graphics would become.


In that case, install Debian Stable and stick to Firefox ESR. Nothing will ever change without warning, and you will have the most blissfully boring user experience.


Debian is great. Personally I favour it most with either the LXDE or XFCE desktops - they're blissfully boring and functional!

I would have perhaps recommended Ubuntu LTS to some folks previously because of the long release/support cycle (even if you only decide to install security updates), but I guess with software packages like snap infecting the OS I can no longer make that recommendation.

Previously I would have suggested that some folks also look at CentOS, because it's wonderfully stable and releases are supported for ~10 years, but I guess all of that was ended by Red Hat with CentOS 8. Maybe Rocky Linux will once again provide a stable RPM distro for free, without resorting to using Oracle Linux, but only time will show that.

Is it just me, or have many once-stable OS releases been killed off in one way or another in the past decade, either forcing people to migrate to paid projects, forcing automatically updated software that cannot really be controlled easily (snaps) upon them, or doing other shady things for no discernible reason?

That said, I personally also only update software (like Nextcloud, GitLab, OpenProject etc.) manually between larger releases, when I have made and re-checked backups of all of the data, before archiving the old versions and then migrating over a copy. I'm not sure whether I could live that way with OS updates or versions, though, without absolutely minimizing the attack surface - maybe with something like a locked-down Alpine Linux.

Either way, it feels like automatic updates that can't even be controlled, forced upon users, are perhaps the inevitable future. It's nice that there's Debian, but my question would be: "How long before it goes the way of Ubuntu?"


You write as if that would be a bad thing. Debian is probably the gold standard for reliability, longevity and user control today. If all software was so well managed, most of us would be better off. If there was also a good process for updating application software to a new major version if you wanted it, that would be ideal.


Even with Win10 LTSB there are updates. The last one cost me two days of fixing my other apps again.


I understand that it sucks to deal with bugs that may release with updates. That said, "never update" is both bad and irresponsible advice. It is important to ensure your systems are up-to-date, even if you choose to lag your updates by a week or a month.

Security is extremely important and in some cases can save your personal information and money from being needlessly stolen.

I can empathize with the slow downs, bugs and deprecation that may occur, but I can never agree that to never update is a good alternative.


So... UWP is a tenbagger: It got 10x slower in 5 years.

I always like to remind folks that it takes longer to open calculator - or windows terminal - than to open excel.


Especially since the last update, Calc takes a long, long time, even on a fast notebook. An app like Calc should open instantly; even a second is way too long on fast, modern hardware.


calc.exe on Win10 2004 loads way faster than Excel on my machine; it's not even close. cmd.exe and powershell.exe are also way faster than Excel. Terminal is a smidge faster than Excel but a hair slower than either cmd.exe or powershell.exe.


There's a calculator discussion now that calc is open source: https://github.com/microsoft/calculator/issues/209

All other UWP apps that do not get GitHub attention are 100x worse. Microsoft Photos, for instance...


Microsoft Photos also loads noticeably faster than Excel for me. Mail (the default Windows app) also loads faster than Excel. Edge loads faster than Excel.

Your statement was:

> I always like to remind folks that it takes longer to open calculator - or windows terminal - than to open excel.

but anecdotally this does not hold up for me. Not even close.


Anecdotally, this holds up for me.


All I know is my Win 10 Pro laptop has 16 GB of RAM, yet if an application, most likely Google Chrome, uses more than about 4-5 GB, it crashes.

Yep, it's a lot of tabs. But not as many as you'd think, and it often happens without warning, as certain ad-heavy sites, especially forum sites (flyertalk, rennlist), can sometimes require 1 GB of RAM alone.


All the more reason to install serious ad blockers, which you won't find bundled with Chrome, and extensions that deal with old tabs instead of letting them pollute your memory.


I disagree for a few reasons. First, why is the memory limit so low? Like I said, 16 GB of RAM, yet using just 30% of that total is too much? Second, I simply don't believe it is entirely ethical to consume all of the content with none of the ads. I deal with the egregious ad situation by visiting those sites infrequently -- once every few months as opposed to every day -- or skipping those links when they appear in a search result. But for most sites, I believe it is a fair tradeoff to view an ad or two in order to read them. Third, I would likely reach the memory limit without any greatly offending tabs... Google Maps is a big memory hog, many government sites are as well, and more.


If they want me to view their ads, all they have to do is make them less hostile to me such that they don't get blocked :)


> yet if an application, most likely Google Chrome, uses more than about 4-5 GB, it crashes.

64-bit apps shouldn't have such a limit under Windows, and can definitely use more memory. Perhaps you're seeing a case of RAM gone bad (where large apps are much more likely to trip over the bad location)? Consider running memtest.


I built a Windows PC after 3 years of using my 15 inch 2018 MBP (which is by far the worst machine I have owned considering the price).

Windows has gone to shit. I have visual glitches in the UI using dark mode (even the search box on the bottom left doesn't render correctly in dark mode until I hover over it). They preinstall some shitty "news" bar right there on the taskbar and hide disabling it behind a submenu of the taskbar's right-click menu. Start menu search goes online by default and misdirects me because of it (I disabled that quickly).

Using a 5K monitor, display scaling is also very iffy compared to macOS.

I'm waiting on the next-gen M1, and then I'll use the Windows box as a Docker server I SSH into, and maybe play some games. But using it as a daily driver is really underwhelming. I would use a Linux desktop, but from what I've heard HiDPI scaling is even worse there.


> Windows has gone to shit

I feel the same way and it makes me sad.

I've kept a death-grip on Windows 7 in hopes there would be a Windows 11 one day despite all the naive "last version ever" messaging. Now it's been announced, and I pray it's more than just a paint job. My asks are simple: more stability, better performance, and axed telemetry.


If anything I suspect they will double down on adding internet services to the OS and double down on the app store.


There are dozens of us!


Didn’t Microsoft backport telemetry into Windows 7? You’ve got several of the same privacy issues as Windows 10 without the security improvements.


Is there even a point to benchmarking boot up time when you can't rely on it being consistent? Sure, it may be thirty seconds ninety percent of the time but the other ten percent it takes ten minutes because it decided installing an update is more important than whatever you wanted to use your computer for. I would never, ever tolerate a Windows 10 machine being the only computer in the house. There's too many times I need something right now and I have to do it with my phone because Microsoft is too busy fellating itself. This problem becomes exponentially worse on any machine that spends most of its time shut down, like a travel laptop.


To me, Windows 10 feels like it's getting faster. I've noticed that updates (even major ones) complete in a "reasonable" amount of time (as opposed to the old Microsoft tradition of "randomly taking hours").


I would be curious to see a "Windows picture" test or something. I don't remember the name, but the default app for opening images in Windows 10 is so slow that it feels like a joke.


A 150 MB Word document works fine in Office 2010 but hangs the Office 365 desktop Word. What the hell, Microsoft? I had to use LibreOffice to edit a Word document created in your own Word!


> “Each version was clean installed.”

This might have been hard to recreate, but I feel like some of the association between updates and slowdowns comes from the updating process itself. I was always told, whenever a new Windows version came out, that the advice was to back up files manually and clean install every time. I'd love to see a comparison between the statistics you have for clean installs and those done through the Windows installer's "Upgrade" path.


It would be interesting to see the results if Windows was upgraded instead of doing a clean install for each version.

I find upgrades to be the #1 cause of issues with Windows. A clean install (using an ISO downloaded from MS) normally resolves pretty much all problems, while upgrades tend to be what creates them.


Glad to know it's Windows and not my fault. My boot time went from 15 seconds to 30-45 now. I used to be able to reboot my computer, and by the time I'd gotten a drink or tied my shoe it would be back to the desktop. Beautiful. Now it's slow as hell.


This is great. I would especially love to see some sort of automated testing for this kind of benchmarking across different versions of windows, macos, linux, etc as well. I think it would be incredibly powerful for users to have access to these kind of metrics.


I'm not sure why they tested with the fast boot option disabled. Why not disable every feature of the system?

Fast boot from power-down is the reason why I stopped using other power-down states. Just shut the system down, and boot it up in 10-15 seconds.


Because fast boot turns shutdown into something closer to hibernation, versus the shutdowns of yore. The author was trying to test a true cold-state startup, versus what Microsoft calls a startup but isn't really one.
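
For reference, a quick way to see whether Fast Startup (the hiberboot behaviour described above) is enabled - a minimal sketch assuming the flag lives at its usual Session Manager\Power location; `powercfg /h off` is the blunt way to disable it, since it removes hibernation entirely:

    import winreg

    # Assumed location of the Fast Startup (hiberboot) flag.
    KEY_PATH = r"SYSTEM\CurrentControlSet\Control\Session Manager\Power"

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
        value, _ = winreg.QueryValueEx(key, "HiberbootEnabled")

    print("Fast Startup is", "enabled" if value else "disabled")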


I'm still on a 2nd-gen i5 with 12 GB of RAM. Haven't noticed any slowdowns even with the huge number of processes in the background. I've been updating regularly and have enjoyed Win10 for years.


Exactly my experience, but I'm a "stock" user. Meaning, I don't game, I don't have extra bells and whistles.


I feel that having a poor built-in graphics card also results in terrible performance; even if it's just desktop use, things become slow and laggy.


If each update gets slower on the same graphics card, it's probably not the fault of the graphics card



After a recent update, I know that my Bluetooth started intermittently disconnecting and reconnecting. Continuous connectivity problems. Not a coincidence.


All of the major slowdowns seem to pop up in build 1809, which is where the first Spectre/Meltdown mitigations were introduced; that is to be expected.


Before we can help each other and find the culprits, we need to be more specific about what makes Windows 10 worse.


I replaced my laptop's HDD with an SSD and Win10 is usable again. Did the same with my 2011 desktop.


You can't really run Windows on an HDD anymore... well you can, but it's not a good experience.

It feels like all the speed we got from SSDs, and to some extent faster CPUs, is quickly being consumed by Windows, leaving us at the same speeds we had in 2010.


I'd wonder what processor they used; mitigations for Spectre and friends could be a contributor.


> 0.3 MB/s read/write to brand new HDD on W10
> Disk usage 100%

It's a conspiracy.


Yes it's a hog.


Yes.


Windows 10 is what happens when you let clueless people work on an OS.


At what point does Windows 10 cross into what people would generally call "unstable software", making Microsoft's classification of it as a "stable release" way too low a bar?



