Citation: "When KFR is being used, the October 2018 Update will delete the original, default Known Folder locations. Microsoft imagined that this would simply remove some empty, redundant directories from your user profile. No need to have a Documents directory in your profile if you're using a redirected location, after all."
Sorry for the harsh language, but this is not a bug. This is complete brain damage on the part of whoever decided to implement such behaviour.
I have redirected Documents. And there are A LOT of programs that directly try to use C:\Users\user\Documents instead of the redirected location.
> Sorry for the harsh language, but this is not a bug. This is complete brain damage on the part of whoever decided to implement such behaviour.
That's not "harsh"; harsh is what happened to the unlucky users who lost data because no one on the development team bothered to call it out. It should be an implicitly understood rule that you NEVER remove a file you did not create unless the user explicitly asks you to, but I guess MS considers it acceptable to do anything to a user's system after they convinced everyone that forced updates are acceptable too.
> but I guess MS considers it acceptable to do anything to a user's system after they convinced everyone that forced updates are acceptable too.
This. Once you get in the mindset that you know better than the user, it's not a big jump to "I know these directories should be empty, any content is leftover garbage, let's remove them".
Absolutely this, with a followup of "and if you're 'helpfully' deleting an unused folder, check that it's unused first!"
Given that this was (stupid) desired behavior, making sure Documents didn't have stuff in it should have been a screamingly obvious step. It would still have been utterly unacceptable, and it could still have created weird downstream bugs when users installed things that target the now-missing default Documents location, but at least it wouldn't have set a bunch of data on fire without any warning.
But then, I guess relying on common sense after the first terrible decision is made is never going to work. There's a reason "never break user space" is rule 1 for Linux updates...
Microsoft doesn’t consider the OS to be the “user’s system”. They clearly see it as something they rent to the user a little bit each day. If the user wants to keep the system for a little longer, he/she has to make a special request to Microsoft that they hold off for a bit before retaking control of the OS.
I ditched Windows as soon as I saw that "Windows is a service . . ." notification show up in the bottom right after an update, boy did that make me livid. (You can ask my roommates, I actually shouted at my computer; it had just spent over an hour, on a best in class NVMe drive, doing that update.) I was pretty sure I paid $200 for an operating system, not an operating service. Admittedly that was just the straw that broke the camel's back, but this is just getting insane.
What I paid for is a HAL, some drivers, a filesystem, a handful of media codec licenses, and some basic applications to manage all that. What I got was a bunch of "apps" like "Photos" that routinely try to use up all my available system RAM, a start menu that connects to the internet, displays ads, and routinely lags on a high-end workstation, a file index service that was broken for 3 major update cycles, and an ever increasing number of "privacy toggles" in the completely dysfunctional control panel^W^W settings app.
Microsoft clearly doesn't care about "developers, developers, developers" anymore, because I have never lost this much productivity to an operating system in my life. (I say this as someone who has used Linux long enough to remember how bad WiFi was in the era of ndiswrapper; as well as someone who has used Windows Me and Vista for significant stretches of time.)
Oh yes. Excuse my harsh language, but fuck SaaS, and fuck Windows if it tries to be a SaaS. And I say that as someone who grew up on Windows, still uses it for work sometimes, and is planning to buy a Surface device in the near future (I guess maybe I should reconsider).
Actually, I should probably use the term SaaSS, because it describes the kind of software I hate with a passion more precisely. I recently stumbled upon this[0] essay by RMS, and as usual, it turns out he had this figured out years before I even noticed we had a problem. If Windows wants to become a service substitute for an OS, I'll save myself future trouble and migrate away completely.
> Microsoft clearly doesn't care about "developers, developers, developers" anymore, because I have never lost this much productivity to an operating system in my life.
Azure's APIs are really nice! Microsoft cares about people developing against Azure. Those developers make them money.
And they've invested a lot of effort into allowing you to develop against those APIs on macOS or Linux.
What Microsoft doesn't care about, is people trying to use Windows to do development (outside of an enterprise use-case, where everyone is using the LTS version of Windows anyway.)
I feel like, increasingly, Microsoft sees Windows the way Apple sees iOS: something you develop for (to target the consumers that use it), not something you develop on.
Imagine a Microsoft that didn't have a Windows product, just their cloud products (Azure, Office 365, Xbox Live, etc.) Would that Microsoft build an OS? Or would they just tell you to use macOS/Linux to interact with their software ecosystem?
I feel like the answer to that question tells you a lot about Microsoft's priorities.
What I don't understand is, what led Microsoft down this path to begin with? Don't they have 90% penetration in desktop OSes, while fighting established and trusted players like AWS and GCP in the data center and PlayStation and Nintendo in the living room? Why make it harder on yourself by shooting yourself in the foot?
The problem with Windows was never that it didn't have enough features, it was that it wasn't reliable enough. So instead of making it more reliable, they've doubled down on the features.
The trend for a while now has been diminishing user time spent on desktops and more time spent on mobile/web. Presumably Microsoft wanted to stake out a territory in those spaces.
Maybe it's finally the "year of the Linux desktop," lol... I mean both Apple and MS don't care much, and the market for desktop OS has become specialized again, so if it doesn't happen now it's never going to happen.
This whole meme truly underestimates the average person's Stockholm Syndrome for fascist rule. Microsoft's userbase is like some kind of microcosm of learned helplessness. I've never seen so many otherwise intelligent people feverishly defend Microsoft, its OS, and their actions. I have literally seen in the last week someone posting about Windows 10 completely hosing some of their important pictures, then in the same breath defend the people responsible.
I was recently reading the blog of the team upgrading conhost. In it they joke several times that the developers doing the work hadn't even been born when the original code was written.
Well, this is the kind of thing that happens when your whole team is interns and the "senior" is 28.
Reading the associated bugs on GitHub, I also learned that there were lots of complaints about the new console: not about features, but about breaking compatibility. Guess no one thought to start a new project rather than changing a 30-year-old one that hadn't been touched in 20.
TL;DR: At least one codger is needed on teams doing this kind of work to give perspective.
Seems like common sense, but Microsoft has been doing this for a while. In Windows 7 (not sure about later releases), the OS periodically runs a 'Desktop Cleanup' that deletes shortcuts to network locations it can no longer connect to. God forbid your network drives don't map one day and Windows decides to nuke all your desktop shortcuts... this actually happened to a user I was doing support for, and they were understandably livid.
Yes, seriously. I would still think it's totally unjustifiable as a 'helpful' change, but at least it'd be causing problems like "hey, this install failed, how do I fix it?" rather than "where's all my stuff?"
That, and I'm just a bit shocked that "the folder should be empty, so delete it" didn't just naturally make people think "obviously I should check if that's true first". Even if it's not a total fix, the failure to add that is its own layer of screwup.
"Save/Read X to/from the user documents folder" should be an OS-level function (on Windows, where you expect a desktop user to have a documents folder).
Every time a program calls a deprecated API, the developer who signed the code should get an e-mail.
Windows/NTFS has links, but they aren't used for this (and software support for them is ... iffy, which can be an issue with backup tools etc., for which they aren't just transparent). These "known folders" are roughly implemented as per-user registry values containing paths to the configured folders.
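Applications are supposed to resolve them through the shell API (SHGetKnownFolderPath) rather than assume a path. As a rough sketch, here's one way to do that from Python via ctypes (the usage is hypothetical and error handling is minimal), so a program picks up the redirected Documents location instead of hardcoding C:\Users\<user>\Documents:

```python
import ctypes
import uuid
from ctypes import wintypes


class GUID(ctypes.Structure):
    """Binary layout of a Windows GUID, filled in from a uuid.UUID."""
    _fields_ = [
        ("Data1", wintypes.DWORD),
        ("Data2", wintypes.WORD),
        ("Data3", wintypes.WORD),
        ("Data4", ctypes.c_ubyte * 8),
    ]

    def __init__(self, value: uuid.UUID):
        super().__init__()
        self.Data1, self.Data2, self.Data3 = value.fields[0], value.fields[1], value.fields[2]
        self.Data4[:] = list(value.bytes[8:16])


# FOLDERID_Documents, from KnownFolders.h
FOLDERID_DOCUMENTS = uuid.UUID("{FDD39AD0-238F-46AF-ADB4-6C85480369C7}")


def documents_path() -> str:
    """Return the (possibly redirected) Documents folder of the current user."""
    path_ptr = ctypes.c_wchar_p()
    rfid = GUID(FOLDERID_DOCUMENTS)
    # HRESULT SHGetKnownFolderPath(REFKNOWNFOLDERID, DWORD flags, HANDLE token, PWSTR *path)
    hr = ctypes.windll.shell32.SHGetKnownFolderPath(
        ctypes.byref(rfid), 0, None, ctypes.byref(path_ptr))
    if hr != 0:
        raise OSError(f"SHGetKnownFolderPath failed (HRESULT {hr})")
    try:
        return path_ptr.value
    finally:
        ctypes.windll.ole32.CoTaskMemFree(path_ptr)  # the shell allocated it; we must free it


if __name__ == "__main__":
    print(documents_path())  # e.g. a redirected location, not a hardcoded guess
```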
I created a VHDX file on a Windows Server 2016 machine, formatted it with ReFS, and copied it to my PC, and I mount the VHDX as D:. It's such a silly way to have to do it.
Since Win2k, I've been using Link Shell Extension[1], which puts a nice and simple GUI for creating links in context menus and on icons (a green shortcut arrow or a chain, for example).
I use them for tons of stuff, such as moving the Spotify song cache (not the same as the offline storage location you can specify in settings) so that it wouldn't use precious SSD space (which was quite the inconvenience back when SSDs were very small and expensive).
It doesn't work well with applications that have a habit of removing and recreating the directory you want to link, though (for obvious reasons).
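If you'd rather script the trick than use the GUI, here's a rough sketch of the usual move-then-link-back approach (paths are hypothetical; note it's a directory junction rather than a hard link, since directories can't be hard-linked):

```python
import shutil
import subprocess
from pathlib import Path

original = Path(r"C:\Games\Steam\steamapps")   # hypothetical current location
new_home = Path(r"D:\steamapps")               # hypothetical target volume (must not exist yet)

shutil.move(str(original), str(new_home))      # relocate the data first
# Recreate the old path as an NTFS directory junction pointing at the new home.
# mklink is a cmd.exe built-in; /J creates a junction, which needs no admin rights.
subprocess.run(["cmd", "/c", "mklink", "/J", str(original), str(new_home)],
               check=True)
```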
I guess I need to look at them again, then. That would definitely make a few things easier in my life. I really don't want my Steam directory on my main machine; being able to hard link it to an iSCSI drive would be nice, if that's possible.
This seems like a common-sense thing that every intern would consider. The fact that the Microsoft development team didn't raises some questions, not only about their QA but about their whole development process.
Tangential to this: when are operating systems going to ship with CoW filesystems by default? Accidental deletion of critical directories has been a solved problem for years now. I take instant snapshots of my home directory on ZFS every hour, and my boot environment is snapshotted before every update.
You could literally remove my entire root directory and at worst you've minorly inconvenienced me: I now have to boot from a USB drive and run `zfs rollback` to the most recent snapshot. I'd be impervious to most ransomware as well: my home directory is filled w/ garbage now? Good thing it can't encrypt my read only snapshots, even with root access. Oh and I can serialize those snapshots across the network at the block level before I nuke the infected machine.
Of course humanity will always invent a bigger idiot, so it's possible a virus could gain root & exec some dangerous ZFS commands, or it's possible some MSFT employee could pass the `-r` flag and recursively delete a dataset & its snapshots in their post-update script. Plus let's not forget that no filesystem in the world will stop you from just zero-filling the drive. Still it's easy for me to imagine a world where I'm completely impervious from ransomware: by way of only being able to delete critical snapshots when a hardware key is present or when my TPM is unlocked.
On the whole it seems to me CoW filesystems are a major step forward in safe-guarding users from accidental deletions. At a minimum they have other benefits (serialized incremental send streams, block level compression & dedup, etc.) that make backups easier to manage. -- Yet I still can't boot from ReFS on Windows.
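For anyone curious how little ceremony the hourly-snapshot habit takes, here's a minimal sketch that just shells out to the stock `zfs` CLI and could be run from cron (the dataset name, retention count, and `hourly-` naming scheme are all hypothetical):

```python
import subprocess
from datetime import datetime, timezone

DATASET = "tank/home"   # hypothetical dataset backing /home
KEEP = 24 * 7           # retain one week of hourly snapshots


def zfs(*args: str) -> str:
    """Run a zfs subcommand and return its stdout, raising on failure."""
    return subprocess.run(["zfs", *args], check=True,
                          capture_output=True, text=True).stdout


def take_snapshot() -> None:
    stamp = datetime.now(timezone.utc).strftime("hourly-%Y%m%d-%H%M")
    zfs("snapshot", f"{DATASET}@{stamp}")   # effectively instantaneous on ZFS


def prune_old() -> None:
    # List this dataset's snapshots oldest-first and destroy the surplus.
    out = zfs("list", "-t", "snapshot", "-H", "-o", "name",
              "-s", "creation", "-d", "1", DATASET)
    hourly = [name for name in out.splitlines() if "@hourly-" in name]
    for name in hourly[:-KEEP]:
        zfs("destroy", name)


if __name__ == "__main__":
    take_snapshot()
    prune_old()
```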
That'd be great, if I didn't have to turn it off. System Restore is so slow as to be borderline unusable. (In my experience creating the restore point accounts for the majority of time Windows spends doing updates, on a true CoW filesystem this should be nearly instantaneous.) Furthermore if you have VSS enabled on a volume hosting an MSSQL database it creates entries in the backup log for every snapshot, which destroys the chain of differential backups. This makes it impossible to use SQL Server Agent's maintenance plans alongside VSS in small-scale deployments (e.g: a workstation.)
I cannot stress enough: NTFS is not a CoW filesystem, and any attempt to add CoW features on top of it will perform poorly and is doomed to fail, because it actually has to copy the old data blocks when you overwrite something. ZFS, btrfs, et al. are not actually copying user data, because they don't have a concept of "overwriting a block"; every block that is written goes to an unallocated region of the disk; blocks that are no longer referenced by a dataset or snapshot are then returned to the allocator for later use; at no point is the old data "copied", it's just "not returned to the allocator for reuse."
What btrfs or ZFS mean by "copying" is not the user's data blocks; it's copying filesystem metadata, portions of btrfs' extent tree, or ZFS' block pointer tree. There is a world of difference between Volume Shadow Copy and an actual CoW filesystem. -- Microsoft knows this; that's why they are working on ReFS. (Of course, in typical MSFT fashion they've made a number of critical mistakes: user data checksums are optional, and volume management is not integrated, so it can't even use those integrity streams to self-heal corruption. Also, last I checked, ReFS can't even be used as a bootable Windows volume. -- It's worth pointing out APFS also made the mistake of checksumming metadata but not user data, which in my opinion makes both of them unsuitable as next-generation filesystems.)
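To make the distinction concrete, here's a toy model in Python (not any real filesystem's code, just an illustration of the allocator behaviour described above): every write lands in a fresh block, a snapshot is just a copy of block pointers, and old data is never copied; it simply stays allocated until nothing references it.

```python
class ToyCowStore:
    """Toy model: blocks are write-once; files and snapshots just hold pointers."""

    def __init__(self):
        self.blocks = {}        # block_id -> data
        self.next_id = 0
        self.live = {}          # filename -> block_id of the current version
        self.snapshots = []     # frozen {filename: block_id} maps

    def write(self, name, data):
        block_id, self.next_id = self.next_id, self.next_id + 1
        self.blocks[block_id] = data    # always lands in a brand-new block
        self.live[name] = block_id      # repoint; the old block is never touched

    def snapshot(self):
        self.snapshots.append(dict(self.live))   # copies *pointers*, not data

    def gc(self):
        # A block is freed only when neither the live tree nor any snapshot
        # references it; nothing is ever copied to "preserve" an old version.
        referenced = set(self.live.values())
        for snap in self.snapshots:
            referenced.update(snap.values())
        self.blocks = {b: d for b, d in self.blocks.items() if b in referenced}


store = ToyCowStore()
store.write("file.txt", b"v1")
store.snapshot()                  # instant: just records the current pointers
store.write("file.txt", b"v2")    # new block; the v1 block is still intact
store.gc()
assert set(store.blocks.values()) == {b"v1", b"v2"}
```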
> ZFS, btrfs, et al. are not actually copying user data, because they don't have a concept of "overwriting a block"; every block that is written goes to an unallocated region of the disk
Unless you happen to be overwriting an entire block, there must still be a copy happening. The old data, less the bytes you modified, must be written into the new block along with your changed bytes.
AFAIK no disk hardware (or filesystem) supports _actual_ byte writes, and never has. They all work by reading in a block (say 512 bytes minimum), changing the bits you asked it to and writing the new block out. So it's no extra effort to simply write the new block to a different location.
Last time Microsoft decided to replace NTFS with something modern, they took about 15 years to give up. If they start now, we can expect them to give up roughly in 2033.
They tried something radical; that's why it was slow and sadly painful. Now implementing CoW ideas is easily within MS's reach and would readily bring commercial value, so it could go very differently.
ZFS has now been ported to Windows. With some testing and polishing, maybe it will be something they could adopt in the future (I know, wishful thinking, but the technology is there).
You could always do what I did, and drag Windows along kicking & screaming into the future. I set up Linux on a ZFS root, carved out a zvol formatted w/ NTFS, and installed Windows on top of that in a VM. I get the benefit of being able to `zfs send | receive` the ZVOL to my backup pool, I can do instant snapshots before these godawful upgrades, etc. -- Throw a second GPU in an IOMMU group and you even get near-native gaming performance. (Gaming & Visual Studio's debugger are pretty much the only things I use Windows for now.)
It's not perfect, but it works, the major downsides are:
- Granularity is at the volume level, so you get a lot more block churn in your snapshots. I mitigate this by trying to separate the "OS" volume and "Data" volume to the extent Windows will let me.
- If you're doing any heavy lifting on top of the ZVOL that expects sync writes (e.g: MS SQL) its performance will degrade slightly. Anecdotally I find this to be negligible on my all-flash pool. Besides they sell SQL Server for Linux now ;-).
- NTFS is unaware of the snapshots, so your snapshots will be of a dirty state if the guest is running. AFAIK there's nothing like `xfs_freeze/unfreeze` that can be done in the guest to make these snapshots atomic. That being said NTFS' chkdsk is quite mature, and I've never observed this to be an issue in practice.
“The Volume Shadow Copy Service (VSS) keeps historical versions of files and folders on NTFS volumes by copying old, newly overwritten data to shadow copy via copy-on-write technique. The user may later request an earlier version to be recovered.”
Out of curiosity, have you got a link to a workflow doc/tutorial/guide that could instruct someone who's a bit green around the ears on using a ZFS backup like this?
For a great primer on ZFS (on Linux) in general there's this website[1] by Aaron Toponce that is very well laid out. Specifically see the section on snapshots & clones, as well as sending/receiving. Also don't forget that you can make a ZFS pool out of any block device, this includes normal files mounted as a loop device! So a great way to experiment w/ learning how to manage a ZFS pool is to just create a few sparse files, add them to a pool, and just start messing around with it. (For actual production pools though always put ZFS as close to the actual disks as possible.)
What's really cool about serialized snapshots is that once native encryption support lands in ZFSonLinux you'll be able to send encrypted volumes over the wire. The data blocks don't have to be decrypted to be serialized or checked for integrity, so you don't even need the decryption key to do incremental sends! You can send your backups to an untrusted remote pool that never has to have knowledge of the decryption key!
(You can also serialize snapshots to files, which is useful for say writing them to detachable media to sneakernet them. If you want to do incremental sends though the receiver does have to be an actual zpool.)
Some other helpful hints would be using something like `mbuffer`[2] on the sender & receiver. This is handy for spinning rust pools, so that your disks don't have to wait on the network before seeking to the next blocks to be sent.
Also ZFS lets you delegate permissions so you can manage filesystems as normal users, I'd recommend doing this so you don't have to login as root on the remote pool to receive the snapshots.[3] In my case I ran into this early on because I have `PermitRootLogin no` on all my boxes.
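And for the send/receive side, a minimal sketch of driving an incremental `zfs send | ssh ... zfs receive` pipeline from Python (pool, dataset, snapshot, and host names are hypothetical; it assumes the remote user has the delegated `zfs receive` rights mentioned above):

```python
import subprocess

SRC = "tank/home"        # hypothetical local dataset
DST = "backup/home"      # hypothetical dataset on the receiving pool
REMOTE = "backup-host"   # assumes a user there with delegated `zfs receive` rights


def incremental_send(prev_snap: str, new_snap: str) -> None:
    """Pipe `zfs send -i old new` into `ssh remote zfs receive`."""
    send = subprocess.Popen(
        ["zfs", "send", "-i", f"{SRC}@{prev_snap}", f"{SRC}@{new_snap}"],
        stdout=subprocess.PIPE)
    recv = subprocess.Popen(
        ["ssh", REMOTE, "zfs", "receive", "-F", DST],
        stdin=send.stdout)
    send.stdout.close()   # so a failed receive propagates back to the sender
    recv.communicate()
    if send.wait() != 0 or recv.returncode != 0:
        raise RuntimeError("incremental send/receive failed")


if __name__ == "__main__":
    incremental_send("hourly-20181001-0000", "hourly-20181002-0000")
```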
They had that 10 years ago and called it "system restore", but they couldn't figure out how to actually...make it work.
Besides, msft makes money by selling SaaS. There's no financial incentive to protecting consumer data since the EULA indemnifies the company for any damages caused by their products.
I would have been bitten by this bug. All my Known Folders are redirected to be in a separate hard drive so I can switch them to a new system more easily when I upgrade. It also stems from when SSD's were too expensive to store anything but your OS and your applications on.
A whole bunch of applications do the bad thing Microsoft is talking about, and hardcode the path to 'My Documents'. I have a ghost 'My Documents' folder that has mostly app settings and maybe some save files.
Windows is always vague about the known folders, and hiding the details is arguably the point of them: you have to teach users less if you move things around in a managed environment.
Not sure what you mean by "the exact same address"? At least from what I've seen "Documents" always points to exactly one folder, and explorer shows only a view of that folder?
I've had two separate "Documents" folders on a few different machines before. I never used those folders, but they both showed up in my home directory and could contain different data.
I'm sure they were something different if I dove deeper in cmd, but I never cared enough to figure it out.
I had libraries hidden, so I had to check: there's both: the Documents folder, and a library that contains just that folder by default. So yeah, that seems unnecessarily confusing.
I guess I'm glad that I don't use any of those folders... I just dump everything into category folders at the root of my data partition, e.g. G:\Code, G:\Pictures, G:\Downloads, etc.
The entire Users directory ends up being such an unimaginable cesspool on a Windows machine that has seen any significant service time, with this, that, and everything else poking and prodding and leaving its detritus inside. My desktop is essentially a single-user machine, as most Windows laptops and desktops are, but even on servers where user accounts are actually used, I spend an undue amount of time fiddlefrigging to take ownership and permissions on files because some script I need is saved to a different account's desktop. Bailing out of the whole thing and running your own filesystem shouldn't be easier, but it is.
I suppose the developer who thought it was a good idea to delete a non-empty directory was high at the time he implemented it. But how this got past QA is entirely a mystery to me.
It's pretty simple: there isn't a QA team anymore. Microsoft laid them off and uses the "Insiders" as beta testers. Insiders did report this issue, but Microsoft ignored it because it didn't get enough votes.
Wow... This is not acceptable at all. They should have a team to go through these and at least prioritise the tickets instead of just relying on upvotes. I'm pretty sure Microsoft can afford that.
I don’t believe it is. The parent said MS should have a team that manually sifts through raised bugs and sets severity. From the article it appears that MS is instead adding the ability for the bug reporters to set the severity. That’s definitely not as good a solution because everyone thinks their issue is the most severe.
I’ve read this comment about MS getting rid of their QA in a lot of places. Is that something they actually did, or is it something people say because their QA quality has dropped significantly (which, as in Apple’s case, could be due to the increased release cadence)?
It's the Microsoft mindset that they rule the world and everything works the Microsoft way. Not only do other operating systems not exist (yes yes, Linux subsystem for Win 10, whatever), but every software vendor and user is assumed to use the system exactly the way it was intended. That means no app will have the path to the original folder hardcoded instead of querying it the official way, and no user will navigate to the old pre-redirect location manually for any reason. By that definition, the folder can only be empty, so it's safe to delete it, recursively, since deleting an empty folder recursively is no different from deleting it non-recursively.
Sounds more like a business/product owner request given to a developer who has been told one too many times they are too negative when being given new stories to work on.
YOU are their QA department if you're using a non-enterprise version of Windows! Wake up and use Linux or buy a Mac (at least Apple only mocks your wallet!).
I don't think this was a single developer's mistake (that would be easy to spot); instead, this was a solution to a perceived problem, so it was a requirement for the new update.
Looks like someone didn't do their job correctly. Carrying out a destructive action (DELETE) without first checking whether the folder contains any files (except maybe desktop.ini)...
I won't try to defend their execution, but I think they're trying to solve a legitimate issue. It's really confusing to an end-user if they see two Documents folders.
And I'm certain that I've personally encountered instances where both Documents folders had the Documents special folder icon (though I'm not sure if I've seen this on Win10, specifically)
So move/merge the locations since that is clearly the intention if the user has a redirection setup. A simple message confirming that "there are 2 Documents folders and would you like to move them to a single location" would be sufficient.
This is just a project management failure that somehow got through, but the fact that it did seems to show major QA issues.
Merging the locations can easily get complicated. What if the same file exists in both folders? What if it exists in both folders, but with different contents? What if not only it has different contents, but also the same last modification time?
Why is it complicated? It's no different than moving any other folder to a new location which already has one with the same name, something that has been handled for decades now.
It should be the same exact action, just triggered by a message saying "Documents folder detected in 2 locations, click OK to copy".
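A conflict-preserving merge doesn't even have to be clever. Here's a toy sketch (hypothetical paths) that moves files from the old location into the redirected one, keeps both copies on a name clash with different contents, and never overwrites or deletes anything:

```python
import shutil
from pathlib import Path


def merge_documents(src: Path, dst: Path) -> None:
    """Move files from src into dst; never overwrite, never delete."""
    dst.mkdir(parents=True, exist_ok=True)
    for item in list(src.rglob("*")):         # materialize before we start moving
        if item.is_dir():
            continue
        target = dst / item.relative_to(src)
        target.parent.mkdir(parents=True, exist_ok=True)
        if target.exists():
            if target.read_bytes() == item.read_bytes():
                continue                      # identical copy already in place
            # Same name, different contents: keep both instead of guessing.
            target = target.with_name(target.stem + " (conflict)" + target.suffix)
            if target.exists():
                continue                      # give up on this one; leave it in src
        shutil.move(str(item), str(target))
    # Empty directories are deliberately left behind in src for the user to review.


merge_documents(Path(r"C:\Users\example\Documents"),    # hypothetical old default path
                Path(r"D:\Redirected\Documents"))       # hypothetical redirected path
```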
I have two Documents folders. It's fucking confusing. But I'd think it shouldn't have been possible for them to exist in the first place. Windows Explorer is a mess.
I've run into this on a bunch of systems, and it absolutely is a mess. Particularly since Explorer gives all kinds of special status and UI elements to 'Documents' folders - if they just had the same name in different places I'd care way less.
But the solution to "we screwed up years ago" is definitely not "nuke one instance with no warning and don't even glance at what's there".
It's basically the only reason I didn't lose a bunch of data from this bug as an Insider - the "Libraries" and special "My Documents" folders are so damn messy and have caused me data loss at other times, that I stopped using it and manage things myself.
Another lovely one: for ages (probably still), OneDrive would purge its local copy whenever you turned it off or uninstalled it. That was probably one of the most frustrating things that Windows has ever done to me.
Edit: Lord, can we please talk about "3D Objects" too? Who the hell got promoted for shoving that down everyone's throat in half a dozen places? Every week I get an update I have to run a regedit script to remove all the bullshit they've shoved into Explorer for absolutely no reason.
`Libraries` and `My Documents` are an absolute mess, agreed. I've given up and just work from a personal folder that serves the exact same role but doesn't have any special Windows status, which is a pretty serious indictment of those "helpful" features.
A sincere question: why are you an Insider? After the Windows 8.1 issues I'd be really gunshy about getting early anything for Windows. Are there upsides to that, or is it pretty much just a public service in the same sense as running nightly Firefox builds?
I like being on the bleeding edge. My only Windows PC is a gaming machine, so I don't mind it updating frequently and/or needing babysitting (for example, the very latest flight added a HyperV adapter and broke my internet, despite having HyperV disabled).
That, and honestly, they're pretty rock-solid. Other than this net adapter thing this past week, I've never had anything truly disruptive. OTOH, there've been a time or two that I was glad to be an Insider because of a bugfix or feature that wasn't likely to be backported to stable.
And some misplaced desire to help? I can rationalize it with OSS, I can't really tell you why I feel some sort of responsibility to do it for Windows too.
That makes sense actually, thanks. I don't think Microsoft is inherently evil or anything, nothing wrong with trying to help make software better even when it's not OSS.
My confusion was mostly just that I couldn't picture who would have the tech background and Windows familiarity to fix (and productively evaluate) weird or buggy releases, but was also willing to have their system destabilized without much warning. For myself, I expect I'd find being an Insider exhausting; even if I know enough to fix the issues, I don't use Windows very often and I'd feel lost whenever things changed and needed to be fixed.
Getting early OS updates sounded much higher-risk than nightly Firefox builds or something. But "it's a gaming machine" actually makes total sense - I forgot how many people have a nice Windows machine they use often, with no real need for it to be stable.
I've seen it a lot over the years. Literally two folders side-by-side with the same name and same icon and questionably the same path but are two different folders. I even had it on a customer's PC just last week. Don't know what else to say.
I'll chime in here with +1. It's why I refuse to mess with KFR anymore. Wound up in a situation with multiple directories with the same name appearing, with different file contents, with different behavior depending on how you accessed it. Kept coming back despite not being how I'd configured KFR. It was a fucking nightmare. Nothing like staring at what is supposed to be a filesystem view and seeing things that make no sense.
You're not alone. It's been a bit reassuring that others run into these things. I figured maybe I was just "holding it wrong".
There is no “check first” in an ever-changing file system environment that has no atomic operation for something this large. Any “check” is a false sense of security, convincing you that you’re about to do the right thing; meanwhile, any background process could create important stuff in the directory tree you just “checked” and you’d destroy it anyway.
If you could lock down the whole directory tree and then check, it would be moderately safer but you are still assuming the tree contains only what you expect. It’s far wiser to have a list from the start or a file search that you can audit before individually processing files on your list.
Sounds like making the perfect the enemy of the good. Why bother to do any checks of anything when a random flip-flop could go metastable forever and brick your system?
"No longer". That's a very bad sign about QC and how confident the major retailers are that we're not going to switch to a different and more stable OS. The problem is that they're right, because Unix/Linux/etc was never and is not meant to be a single-user desktop home OS for the general public. Of course, random broken updates completely bricking your system should be no surprise to those users either.
Anybody got a suggestion for an OS I can use that exists on hard media, doesn't use kernel or base OS code that's been distributed digitally and has optional completely non-destructive updates via hard media no more than once a year, so that I'm not feeling like I'm trying to hit a moving target with the stuff I want to use, and it either works or not until a year later?
It certainly brings the suitability of Windows for production use into question. Not only this bug, which is outrageous, but the direction of Windows 10 in general before this.
I actually used it as a desktop OS in the early 00s, and it was a very relaxing experience. It was years behind FreeBSD (which was years behind Linux, which was years behind Windows...) in terms of device drivers, but it was an incredibly stable, minimal, beautifully documented, and well-organized OS that was a joy to work with.
As an example of what "stable" means here: OpenBSD has a port collection, but it's an implementation detail that you're not supposed to use. Instead, when a release is created, all ports are built and tested to guarantee that they work. Then, they're not updated. Ever. You're supposed to upgrade to the next release yourself, and if you don't, all the packages available to you right now will be available and working the same way 20 years from now.
(Of course, there is a -current version you can use to get a more liberal update policy.)
I don't remember all the details, but it sounds like it would fit your requirements very well.
I may have to try it then. How does OpenBSD name drives? Logically, like A:\ & B:\ for floppies or removable media, C:\ for main HDD, D:\ for main optical media or second HDD, E:\ for second optical media, etc? Or does it name in the incomprehensible method Unix & Linux use that have no application to what I'm trying to do with my life?
Physical devices are named based on the driver used. Floppy is fd0 (fd1, fd2, etc. if you have more than one), cd drive is cd0 (cd1, etc.). These are generic drivers, with hard disks it gets more specific - for example, a Western Digital disk would be called wd0.
These physical devices are "mounted" into a single logical filesystem hierarchy, starting at /, with user data in /home/<username>/ (vs. C:\Users\<username>\). Various disks (and partitions) can be mounted basically anywhere in the filesystem, under any name you choose. You can mount your floppy as /A, your CD as /D and your other disk as /G and the OS won't complain at all. Traditional location for that is /mnt (ie. /mnt/G), but it's just a convention you don't have to follow.
You can learn about all the other places in the filesystem, basically what goes where, in `man hier`[1]. It's like reading about the internal organization of C:\Windows, though, and can be ignored for the most part: normally, you stick to your home directory for everything and can organize it any way you like. Some desktop environments will create (and display in a special way) a more traditional set of folders (Documents, Downloads, Music, and so on), but they will all be inside your home directory. You can mount your other drive (or a part of it) as your Music folder if you want, too.
As a side note, I'm not sure I would call Windows naming convention logical. It's just what you're used to. A single filesystem for everything vs. a separate filesystem for every disk/media is really a minor difference, just like the different path separator (\ vs. /) is irritating for the first week and then you stop thinking about it.
A warning, though. OpenBSD is not a desktop-oriented or newbie friendly OS. It's meant for servers and power users. There's very little hand-holding - even the installer has no GUI at all, not even console-based, it's just a command line where you type responses to printed questions. The primary interface to everything is the command line. The OS is focused on security and stability, not convenience. That being said, the system is incredibly well-documented and discoverable, and the community is very nice and welcoming, so it's not hopeless; just be aware that you're going to struggle for quite a number of hours before you make it work exactly the way you want it. The upside is that when you finally make it, it will stay that way. Forever.
A live DVD of some Linux variant (Puppy or Alpine come to mind)? I'm not sure that I would want to run it for that long unless you figure out some way to run your browser off a different device, though; security updates aren't something that's safe to put off for a year.
You can download the Mozilla-distributed Firefox binary and unpack it wherever, then run $wherever/firefox. I think it currently needs PulseAudio if you want audio, so that might limit which distro you can use. There can also be some traps with missing libraries, so a very stripped-down distro might be unusable. Otherwise, just check the Mozilla website for updates every couple of months?
At worst Linux updates might screw up your system and render it non-bootable. I'm not aware of any bugs in the past 20ish years where your home folder was deleted.
Apparently the bug doesn't actually delete your files that often, if you read the article. Also that second one isn't really something unique to ext4, is it? Seems like a universal problem.
(first link) Well, it was a one-off bug that was quickly bug-fixed, but still it managed to be released in a stable kernel and resulted in terrible and rapid data corruption. It mentions restarting twice in quick succession (or even just remounting the volume) was enough to trigger the bug, which seems just as unacceptable to me as the recent Windows bug.
That's the kind of thing I'd expect from btrfs, not the quintessential Linux filesystem.
Well.... there are a few really nasty issues that come to mind, usually surrounding filesystems. Not too long ago, if you were using a certain version of systemd on certain laptops and did a rm -rf / (which as a new user is not hard to mistakenly do), not only would you lose your files, your hardware would become unusable as well.
(Systemd mounted an EFI partition read/write, and nuking / also nuked critical firmware information)
That said, given that Microsoft charges money for this, goes out of its way to render itself not liable when its code breaks your shit, and doesn't test a use case that's not exactly uncommon, it's still unforgivable and not comparable to Linux.
Linux usually breaks when the hardware lies about having written something to disk or flushed the disk (both are more common on consumer hardware than you think, especially cheap SSDs), though I don't think any FS would fare well if the hardware starts lying to it.
There is also this[0] short blog post on FS reliability on Linux, where they analyze the source code for cases where error codes are dropped (code like "if(err) { /* ignore */ }" counts as a handled error here), and there were a significant number of problems across all the filesystems in the kernel.
Though I don't think the other OSes out there are much better; filesystems are hard. Getting them right is harder. Getting them right while staying compatible with existing implementations is harder still.
Fantastic link, thanks. Though I'd just point out that ntfs and apfs are as bad, if not worse, and that we're all in the same boat when it comes to filesystems.
Oh god, the EFI bug. I thought it was utterly absurd that some people on that mailing list defended that behavior and it really put me off Linux culture and the Linux community.
What should put you off is people two and a half years later still wrongly ascribing this to systemd. And this even though, as noted at https://news.ycombinator.com/item?id=11011399 , OpenRC had been doing the very same thing.
It wasn't a systemd thing. The people who relate the tale as if it were a systemd thing or something that the systemd people did are ill-informed. It was entirely a kernel thing. This was even stated outright at the time by the creator of that particular kernel mechanism.
I actually remember reading exactly this discussion. So I was correct in my original assertion that Linux (as opposed to systemd) culture is batshit insane, because of all the Linux/Unix people defending efivars/Lennart's decision to 'not work around broken EFI implementations' that allowed rm -rf / to break hardware.
That's more a "freedesktop.org/systemd" culture and community than anything, thankfully. Their philosophy seems to be that if the user has a problem with something, the user is wrong.
Even if that may be the case, systemd has now afflicted every relevant distro. That means their community and their philosophy affects everybody else. Not great.
Linux also destroyed Intel e1000 NICs for a while. And up until ext3, it lied about synchronous writes. Linux has been far from safe for storing your data.
Bad drivers can also destroy CRT monitors, hard drives, flash memory, and CD-ROMs in CD drives, overheat a graphics card (I had this bug with official NVidia drivers on a decade-old notebook), a CPU (I had that too, with an AMD CPU), or a motherboard; a battery can catch fire if overcharged, and so on.
If you are not experienced, you should not use root, because it's dangerous. Your experience confirms that. When you need to use root, plan for disaster.
You opened a command terminal. You elevated to root using sudo. You entered the command manually instead of using a user-friendly file manager, e.g. Midnight Commander. You disabled rm's user-friendly interactive mode. Literally, you said "system, delete everything and don't ask any questions".
Yeah, sometimes sheet happens. I did that too once, on a production server with about a hundred web servers, because I hit enter in the middle of the command. But I never blame the system for my errors. Now I write tested scripts to make changes, use RPM to deliver updates, and use a file manager to manage files manually. Learn from your mistakes. It's the price for performance. For example, I needed to erase my partition recently; using Linux, I erased it in less than a minute.
While I understand your mindset of, "If you are not experienced, you should not use root, because it's dangerous. Your experience confirms that. When you need to use root, plan for disaster." for a network-based terminal at a workplace or public facility, I disagree for a home user's computer. When we error-proof a computer against an average user, we lose the ability to silently teach that user what NOT to do. We lose the ability to empower that user to control their own machine. If you type in "format ." or something like it, then yes, you should expect a great deal of trouble; especially if you didn't have good backups. That's a big part of my problem with Windows 10, Linux, OSX, Chrome, Firefox 52+, and basically most "modern" software. By demanding that the OS, the browser and even videogames must be able to be updated at random by the whim of some "upstream" group, and the user doesn't actually have the authority to decide what is or is not on their machine, or what data is or is not being sent to wherever, we lower the standard of the average user.
I'm not talking about accessibility, mind you. I'm talking about the silent education that takes the starting computer user and makes them into someone who is confident around computers in general. Someone who understands the appropriate niche for each type of computer (i.e., in roughly ascending order of generality: mobile cellular device, tablet/iPad, Chromebook/netbook, laptop, desktop, server), and recognizes both the minimum and maximum range they need. For example, I certainly can read ebooks on my laptop using Calibre. But the fact that iBooks can benefit from the built-in screen reader makes things so much smoother than trying to find an audiobook; and I can read along with it or turn off the audio if I want. I can use a Chromebook/netbook to connect to HDMI displays and make presentations smoothly, but if I want to use a Smartboard or a projector, I probably need a laptop for VGA support; plus, my current laser presenter uses PgUp & PgDn to switch slides, and Chromebooks don't have those keys (Google Docs doesn't seem to support them from the Chromebook either). The list goes on.
As you said, learn from your mistakes, it's the price for performance. But what many groups are trying to do is take away the capacity to make those mistakes.
rm -rf / is a completely software operation. It should completely wipe your system. It should not render your hardware inoperable. I don't expect or want it to wipe my BIOS or GPU firmware or NIC firmware or any other hardware component that happens to be attached at that time that has an EEPROM or otherwise reprogrammable microcontroller. That's insane.
"rm -rf / is a completely software operation. It should completely wipe your system."
To me saying it should completely wipe your system implies that people sometimes type it on purpose fully expecting it to do what it does. Is that actually true? If not, I think there's something askew with your worldview.
This is what you get when you adopt the "let users do the QA/testing/beta-testing for us for free" approach. The whole attitude behind Windows 10's "continuous updates", instead of actual releases that are actually alpha-tested and beta-tested by actual hired testers on multiple machines, is disgusting and offensive to the user! An operating system needs to be a boring "rock-solid foundation"; it doesn't need frequent updates and experimentation, that's what apps are for.
Use Linux or buy a Mac. Microsoft always is and always will be horrible to end users. They may be cool in the enterprise sphere, when it comes to open source, and when it comes to developer tools and languages (I use lots of MS stuff... but run them on a Linux system!), but they don't care about regular end users, since most of them "can't cast a vote" in deciding the system they use and where the money goes.
That's good progress - congratulations, Microsoft!
Frankly, I can't understand why, having a dominant position in the market, they seem to do everything to drive people away from their platform. It's not like we're in the 90s and there is no other choice.
For about 10 years in the late '90s and early 2000s, I was a kind of Linux zealot, with anti-MS signatures on my emails, which survive in some BBS and Usenet archives.
Nowadays with the exception of a travel netbook, I mostly run Windows or Android on my computers.
Because GNU/Linux never managed to get its act together on what it means to provide a full-stack experience for UI/UX-focused developers, especially on laptops.
And I just won't pay the Apple prices for less hardware than I can get with a Thinkpad/Dell/Asus workstation laptop, usually about 500 euros cheaper.
I mean, macOS requires a relatively expensive machine to run it, and a lot of business-critical software either doesn't run on Macs or has drastically reduced functionality.
Linux isn’t even worth bothering with if one isn’t technical.
If you're not technical, just go with Mint. Looks like Windows 7, behaves like Windows 7, doesn't break. You don't have to leave GUI environments once, neither in installation nor in usage. Doesn't break. Gives you the opportunity to optimize your workflow if you want to.
Mint broke all over the place on all the machines I've ever installed it on. Couldn't get graphics, sound, or networking running smoothly. Complete disaster with my built-in Bluetooth and my BT mice & keyboards. And doesn't behave like Win7 when it comes to actual programs. .deb is not .exe, .bat files didn't work, programs needed to come from a central app store or else be "compiled". Drive names were completely whacked as well. My optical drive wasn't D:\ and I had no idea how to find a DVD through that version of VLC, my main HDD wasn't C:\; and my USB floppy drive (yes, I still have one) didn't plug in as A:\.
No version of Linux "behaves like Windows 7". At best, it's like Linux wearing a bedsheet-ghost costume labelled "Windows 7" and screaming BOO! at you every time you do anything from a DOS/Windows background.
All of it works flawlessly as long as you run only Intel/AMD. The moment you go to Nvidia (which unfortunately has a near monopoly on laptops) is the moment you start paying: with performance on nouveau, or with major features (Wayland) and battery life on the proprietary driver. And this is before we even get to Optimus and PRIME.
Yes, there are solutions for these problems but you need to be technical for them.
Hmm. I'm skeptical that those 'computer illiterate' people don't have a computer literate person providing them with support that's key to enabling that situation.
I've had Ubuntu running on a few cheap desktop machines for some years now, used for light duties in a few living spaces. So far we're at a 100% failure rate on version upgrades: both LTS release upgrades have bricked both machines.
When they switched the window manager on one of the recent ones, the UI simply died and the simplest resolution was to just re-install the OS from scratch.
Steady state, with apps installed and running and only doing basic patching via the GUI, Ubuntu is 'operable' by avg. Joe. But app installs and beyond are fraught with problems.
@FooHentai: “I've had Ubuntu running on a few cheap desktop machines for some years now, used for light duties in a few living spaces. So far we're at a 100% failure rate on version upgrades: both LTS release upgrades have bricked both machines.”
Bricked, you say? I don't recognize that from my experience. As for 'light duties', it's a little more useful than that:
So for the purposes of discussion, there are technical people and non-technical people. The claim up-thread was that technical people are fine on Linux. So then this commenter comes along and points out that there are non-technical people who do fine on Linux. In this context, this assertion is completely relevant and useful.
There are still not many choices out there: either Apple, with high-priced defective keyboards and no desktop solutions, or Linux, which is still a gamble, especially on laptops. In some regards, the situation is worse than a decade ago.
On the Microsoft side, as soon as they announced Windows 10 would be a rolling-release OS (i.e. a perpetual beta), I knew I was done with it on bare metal.
What other choice though? I mean especially for Laptops. I find Macbooks now completely unacceptable, as a 13 year mac user, and Linux still seems to have the old issue of unreliable driver support.
There are choices, better than ever before, but to vast types of users this doesn't matter.
Some examples: corporate users (nobody big seriously considers Linux for desktops, for various reasons, and Apple would easily be 3-5x as expensive without enough added value), and gaming (again, some good options, but subpar to Windows in probably every respect).
Everybody knows Windows; everybody can somehow get by with just clicking around. If I put Linux on my fiancée's notebook (she is a doctor), I would have to do 24x7 support for it, forever. No, thank you.
In our branch we have tiny windows desktop boxes (20x20x3cm), they cost below 300 USD to buy. Our corporation has around 100,000 of those around the world. Good enough for any office work you will ever need. We devs are forced to use them too, and they are OKish with 16gb RAM. I've seen these kind of computers in every single employer I ever worked for in last 15 years, corporate or tiny. There are 100s of millions of similar computers in offices around the world.
What does Apple have that's cheaper? To save us $535, they would have to pay us to take them.
Topic might be different for high-end notebooks, especially with some sweet corporate deals. That's NOT the bulk of computers used for office work around the world. Cherry-picking some specific relatively marginal scenario doesn't affect the big numbers.
If the machine is more expensive to buy but significantly less expensive to manage (if, say, OS updates don’t delete all your data necessitating hours of recovery), the total cost goes down.
The initial purchase price of a machine is near-insignificant to the TCO of a corporate machine.
You don't know much about corporate environments, or at least you give off that impression. We don't get patches straight from Microsoft's servers; we have our own update servers. Only tested patches get through, with some delay, and something like this wouldn't make it. This is standard in big companies.
And we don't use Win10 at all; no sane CIO would ever approve it and stay in their position for more than a week. Windows 7, with no issue paying Microsoft directly to produce patches long after public support is finished.
You're claiming that your internal IT department does better QA on Windows than Microsoft does. That's amazing. Do you have evidence to support this? How many major bugs do you report upstream? What's your QA system like? Can you confirm that your test suite checks upgrades using Known Folder Redirection when the default path is still in use?
I find it awfully hard to believe that you're arguing for using Windows on the basis of TCO, while on the other hand apparently have (and need) a QA department and test protocol that exceeds Microsoft's, and are willing to pay Microsoft to produce patches for an old OS just for you.
You've got 100,000 systems in a highly constrained environment. You pay for the licenses, and to run your own upgrade servers, and to do your own QA on their software, and to get patches for it, and you can't even run the current version because you think it's not "sane". I'm sure there must be external constraints you're not telling us, because this doesn't sound rational.
Windows 10 is rapidly becoming the standard, the "no sane CIO" remark is absurd. The entire DOD has standardized on Windows 10, and they're the largest enterprise in the world.
> You don't know much about corporate environments, or at least you give off that impression.
Based on one post, where I point out TCO is not significantly driven by hardware purchase price? Ok then.
Anyway, the lower-TCO argument cited by the GP comes from (among others) IBM, who I'd say know their own business; they also used WSUS/SCCM and claimed they were better off with Apple and Jamf.
If there's anything IBM does better than anyone else, it's metrics. It's almost like they invented the field of IT metrics measurement and analysis, in fact.
Recursive delete has always been a misfeature of computing, out of a mistaken entitlement to convenience when you are performing a fundamentally risky operation on a target you can never know the state of.
At best, it is redundant with a recursive search feature that chooses "delete" as the operation. And if you want "do something else, then delete", you can no longer call a recursive-delete command anyway, so why not just learn how to enumerate files first and give your system a fighting chance to audit them?
Disk cleanup code should always create lists of known target files, attempt to delete only that list of files, then do the platform equivalent of “rmdir” at the end to attempt to remove the directory. If that fails, congratulations: your ass was saved by not deleting something you didn’t know was there.
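Concretely, something along these lines (a minimal Python sketch; the path and the allow-list are hypothetical): delete only the files you know about, then attempt a plain, non-recursive rmdir, so anything unexpected makes the rmdir fail and the data survives.

```python
from pathlib import Path

EXPECTED_FILES = {"desktop.ini"}   # the only files the cleanup itself knows about


def remove_if_truly_empty(folder: Path) -> bool:
    """Delete only known files, then try a plain rmdir; never recurse."""
    for name in EXPECTED_FILES:
        candidate = folder / name
        if candidate.is_file():
            candidate.unlink()
    try:
        folder.rmdir()        # raises OSError if anything unexpected is still inside
        return True
    except OSError:
        return False          # user data (or anything else) survives untouched


if __name__ == "__main__":
    old_documents = Path(r"C:\Users\example\Documents")   # hypothetical path
    if not remove_if_truly_empty(old_documents):
        print(f"{old_documents} is not empty; leaving it alone.")
```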
I really wonder what goes on at MS for there to be consensus that such messages were ever a good idea. Even if the message is true, it scares users, because it's like ransomware. If it isn't, that's even worse, because you're now lying to your users. In any case, they arouse suspicion and fear.
In the XP days, I believe updates would, after restarting, at most show a dialog with a more informative message ("Installing updates...") and a progress bar, and more importantly, your wallpaper and desktop would continue loading in the background --- the latter really helps with the unease, if not the annoyance. The full-screen, vague, and unnecessary messages just invoke feelings of horror.
If you want to run an older OS have you considered just running a hypervisor like ESXi or KVM and then handling OS through that? There are lots of good solutions there at this point, and it can be a fun way to play with a lot of other cool features and different OS as well. You can even get near-native performance even for heavy duty graphics applications by using PCI passthrough. The only caveat that adds for hardware choice is that you'll want a processor with an IOMMU for the hardware virtualization support (AMD calls this "AMD-Vi", Intel "VT-d"). AMD is pretty good about not artificially segmenting there, I think everything modern they make supports it (all Ryzen/EPYC at least) though probably worth double checking overall system compat. Intel splits this all up more, Xeon always has everything but support varies elsewhere and you really just have to check the specs.
Even so that gives a ton of hardware choice and flexibility, and will give you more options to protect and control the systems beyond the OS themselves which is very important if you want to run something older since security patches will stop. But if you're judicious about what you use for what tasks and how you handle I/O it offers another option, and can make hardware changes a lot easier as well by abstracting away the metal somewhat. Basically a lot of the advantages that make virtualization so popular in general for business can be just as applicable at home these days, most of us have cycles and memory to spare and can afford to burn a bit of it on making a more pleasant software experience or working around issues coming from a higher level. In this case for example you could be running your Windows VM on virtual disks on a NAS/DAS or even the same system but supporting better snapshotting, and if the data was deleted simply roll back the entire VM to pre-upgrade state.
Judging from the state of Server 2016/Windows 10 updates, I suspect not. How a rollup update for 2016 takes 30+ minutes to install (and often fails), yet the 2012 rollup is done in 10 minutes, is still baffling. This is on 2012 machines with much longer update histories.
People have gotten XP running on Haswell so I don't think running 7 on Skylake would be a problem. msfn.org has some useful information on running (very) old OSs on new hardware.
Every once in a while an MS support thread actually has a useful answer in it... given by some random internet commenter about 15 posts after MS support has determined only a reinstall will fix the problem.
Yes, I read about this earlier and along with the support tip was the "and don't touch your PC" tip. So I'm pretty sure they'll advise some undelete tool and until then they don't want deleted data to get overwritten, but no more magical solution than that.
Just yesterday I saw that my brother's Win10 desktop was magically empty after he'd rebooted due to a Windows update. After hunting around for solutions (none of which worked), I noticed that all the missing desktop files were magically in the recycle bin.
Nice work M$.
From memory, the update was #1803, so I'm not sure if this is relevant to the Ars Technica article... but since it was yesterday, it's clearly not 100% accurate.
(PS: No, my brother didn't cause them to be put there)
The update in question here is #1809; so it's possible your brother encountered another data loss bug, which, considering their current state of QA, doesn't seem that unlikely.
By default the recycle bin has a limit on how much it can store, and if you try to put more in there, it gets permanently deleted instead. It'll ask when you do the operation yourself, but if it was done automatically as part of the update, who knows. A safety net with holes that big never made sense to me, which is why I always change the limit to 100% of the disk size. Your brother was lucky to recover his files.
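If you want to know how much headroom your bin actually has before trusting it as a safety net, Windows will tell you; here's a sketch using ctypes (assumes the documented shell32 behaviour, Windows only):

    import ctypes
    from ctypes import wintypes

    class SHQUERYRBINFO(ctypes.Structure):
        _fields_ = [("cbSize", wintypes.DWORD),
                    ("i64Size", ctypes.c_longlong),      # bytes currently in the bin
                    ("i64NumItems", ctypes.c_longlong)]  # number of items in the bin

    info = SHQUERYRBINFO()
    info.cbSize = ctypes.sizeof(info)

    # None queries all drives; pass e.g. "C:\\" to query a single volume.
    ctypes.windll.shell32.SHQueryRecycleBinW(None, ctypes.byref(info))
    print(f"{info.i64NumItems} items, {info.i64Size / 2**20:.1f} MiB in the Recycle Bin")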
I wonder if Microsoft is able to cancel the installation of already downloaded updates - if not, something like this might happen if the erroneous update was already downloaded in the background earlier. I think the default setting is that updates will be downloaded automatically and then installed later whenever the system decides there is a suitable period of "inactive time".
The update that deleted files wasn't ever an automatic one. You would have had to manually update. Fingers crossed they didn't mess up another. Seems unlikely.
"Compounding this issue is that Microsoft's rollout of version 1809 was already unusual. For reasons unknown, Microsoft didn't release this update to the Release Preview ring, so the most realistic installation scenario—someone going from version 1803 to 1809—didn't receive much testing anyway. And all this is against the longer-term concern that Microsoft laid off many dedicated testers without really replacing the testing that those testers were doing."
And from this article:
"In response the company has promised to update the Feedback Hub tool so that the severity of bugs can be indicated. Many people reported this data loss bug, but none of the reports received many upvotes, with Microsoft accordingly disregarding those bugs. If the bugs had been marked as causing data loss—the highest severity possible—then they may have received the additional attention that they deserved. Microsoft hasn't, however, explained why this update didn't receive any kind of "release preview" distribution or testing. There are no guarantees that this would have caught the bug, but it would have meant that an extra round of people would have installed the update onto their systems, and who knows, one of their bug reports might have gotten lucky."
As a dedicated tester for a large-ish company, I can't even imagine how many problems would go unreported if they got rid of even half of our department. It's hard to quantify the exact value of SQA, so I can see some manager overlooking its importance, but this is Microsoft. They should know better.
I switched from macOS to Win10+WSL as my main dev machine this summer, mainly because I like ThinkPad hardware much more (and wanted to give the standard OS there a try), but I'm close to giving up on it and switching to Linux. It's crazy how much crap it throws at you on a daily basis.
* explorer, and file operations in general, are dead slow for some reason. Expanding a zip with a couple of tens of thousands of files from explorer can take an hour, while in WSL it takes maybe a minute (see the sketch after this list). explorer also takes its sweet time to load, including in open dialogs. This is on a near top-of-the-line 480s with 24GB RAM and a 1TB SSD.
* windows don't remember their previous position on multi-screens.
* copying in terminal sometimes seems to work, sometimes not.
* terminal beeps at you on every tab with more than one option, always have to keep sound muted.
* bluetooth menu is glitchy and there's no standard quick way to connect to a previous device.
* no idea whether that's win10, spotify or thinkpad software, but hitting a media key produces a NON DISMISSIBLE big overlay for spotify that just hangs there for a good 10 seconds and blocks the stuff I want to click.
* solution for a full taskbar? just make it scroll with very small scroll buttons...
* some older Logitech mouse I connect has buggy assignment of forward/back keys - does a completely random operation instead. Windows doesn't seem to have a GUI-way to set this stuff up
* terminal has no tabs and crappy colors and I don't wanna go down the rabbit hole of trying to integrate WSL with a non-default terminal emulator. I've installed the spring update, won't touch october one for a while at least.
* there's no integration of WSL & windows GUI layer. You have to start an X server separately to get Linux GUI tools. If I seriously need that I will simply switch to a Linux distro instead (which, given the above, I start to suspect I should have done from the beginning).
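For the zip point above: as a stopgap, extracting from a script skips Explorer's per-file overhead entirely. A quick sketch (the paths are placeholders):

    import time
    import zipfile

    ARCHIVE = r"C:\temp\big-archive.zip"   # hypothetical archive with tens of thousands of files
    DEST = r"C:\temp\extracted"

    start = time.time()
    with zipfile.ZipFile(ARCHIVE) as zf:
        count = len(zf.namelist())
        zf.extractall(DEST)
    print(f"Extracted {count} files in {time.time() - start:.1f}s")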
You think that's bad? I'm using Linux at work on a laptop specifically designed for Linux and it's been a nightmare to get even basic functions to work right. The computer immediately resumes after going to sleep, it took several days to get hibernation working, the nvidia driver keeps locking up the system, external monitors aren't automatically detected, after an apt upgrade, hibernation stops working because my EFI loader file gets overwritten and I can't figure out from where, I managed to completely break X after trying to get Optimus (GPU switching) to work, applications written in different GUI frameworks (QT, GTK) use different themes and even different mouse cursors, applets don't always show up, bluetooth crashes randomly, and the list goes on.
Windows has annoyances, but Linux is like building a car in a garage full of car parts. Yes, you can build a working car, but you better be a mechanic.
Don’t use Linux with a laptop which has discrete graphics. I use Thinkpads with integrated graphics, and haven’t had a problem with drivers for a decade. I don’t use hibernation though, I just use standby.
Agreed. So many of the things that break are down to the binary dGPU drivers; that issue goes away with Intel.
I've been running Ubuntu MATE for years on my T450s. It is remarkably boring. The only thing that doesn't work for me is the fingerprint reader, and docking/undocking can be funny if done hot.
This is a System76 laptop which only puts Linux on their laptops, so it's specifically designed for it. The two mDP connectors are only connected to the Nvidia chip, so maybe the Intel IGP isn't powerful enough to drive 3 4k monitors at a time (4k internal, 2x 4k external).
If you bought a laptop which came with Linux, why did you fight battles with the GPU and hibernation yourself? It is supposed to be supported, so let the vendor solve the issues, or return the computer if they can't. That's exactly where the vendors' value proposition comes from; make them earn their money.
If you want to do any of those things, Linux is already so niche that you probably know what you're getting yourself into.
But for most people, skipping discrete graphics is the easier and more stable option, which is counter-intuitive - usually more power is better. So it's worth mentioning.
that's the reason why I went with Win+WSL in the first place, I feared all the driver crap.
So... our choice now is between a great OS on laptops with next to no I/O and totally unacceptable keyboard, OR great hardware running on an OS that's put together with duct-tape and strings?
We really thought the future was great right around 2007 didn't we? You had nicely put together Macbooks that could run all the software you wanted, including from Windows and Linux land, they had pretty good support and overall decent hardware. Then they released the iPhone, SJ died, and everyone else in their management just started hunting either quarterly numbers or pretty&stupid design that works well for TV ads. It's a wonder that their software departments have actually turned around and started producing decent releases again. Here's to hope their Mac department will recover as well.
I'm not sure I ever liked the Mac hardware. I thought I did. TBH the old white plastic MacBooks running 10.4 were my last good experience. After that I bought a top-end 2011 15" rMBP with 16GB of RAM and a 1TiB SSD.
What do I use now?
A second hand i5 T440, because it's more productive.
I started with a 12-inch PowerBook. Totally fell in love with that thing. By far the best trackpad at the time, plus a very decent keyboard. It even had a CD burner and expandable RAM in an incredibly tight package. From then on their hardware became worse with every iteration in some way or another - either it was graphics driver issues, thermals, faulty boards, or now completely crap I/O and keyboard. But IMO until 2016 they still had overall a better package than Windows PCs, mostly thanks to a far superior OS.
Try to install Fedora Linux on it. It should work okayish on such hardware, if you switch it to Mate desktop (Gnome 2 fork): https://getfedora.org/en/workstation/ .
Look, it sucks to find out there are thousands of devices, but none with the specific attributes you want. (I'm currently looking for a phone.)
However, the (sad?) truth is that for most people, MacBooks work just fine, but they aren't vocal about that. Personally, I thought I'd hate the new ones, but was given one for work, and it's fine - to the point where I bought one for home, too.
I think I could live with the 2018 iteration of the keyboard, although I'd want to wait at least another six months to see about reliability. But the dongles, man, they'd just give me headaches. Using a Thinkpad just reminds me of how great it is to have all the ports built in.
I actually love usb-c. Thought I'd hate it, but I vastly underestimated how much I plug stuff in. It helps that usb-c docks are pretty awesome if you don't buy the cheap ones. My only issue is Yubikeys when I'm not docked, which is only for meetings though.
Stick some distros on a USB stick and give them a go. Your 480s should be fine with it if you're on Intel graphics. I'd recommend Ubuntu in whatever flavor appeals...
I’d like to highlight a few points that are mostly not about Windows.
> * windows don't remember their previous position on multi-screens.
Not an OS concern. Most applications do remember, by the way.
> * terminal beeps at you on every tab with more than one option, always have to keep sound muted.
Windows terminal doesn’t have a bell. WSL does, as does macOS or anything Linux/UNIX really. You can disable it of course. Google "wsl disable bell".
Either way, nothing new or objectively bad.
> * no idea whether that's win10, spotify or thinkpad software, but hitting a media key produces a NON DISMISSIBLE big overlay for spotify that just hangs there for a good 10 seconds and blocks the stuff I want to click.
That's mostly Spotify. It can be disabled in Spotify's settings. Windows only shows the volume "slider", which is gone after 5 seconds.
> * some older Logitech mouse I connect has buggy assignment of forward/back keys - does a completely random operation instead. Windows doesn't seem to have a GUI-way to set this stuff up
Nothing is "random". Except perhaps when the device is broken. Mouse buttons 1-5 have been well-defined for 10+ years now.
> * terminal has no tabs and crappy colors and I don't wanna go down the rabbit hole of trying to integrate WSL with a non-default terminal emulator. I've installed the spring update, won't touch october one for a while at least.
Yes, it sucks. You can either enable SSH and SSH into WSL or just use wsltty (which offers bell options!).
> * there's no integration of WSL & windows GUI layer.
> no idea whether that's win10, spotify or thinkpad software, but hitting a media key produces a NON DISMISSIBLE big overlay for spotify that just hangs there for a good 10 seconds and blocks the stuff I want to click.
It's Spotify: go to Settings > Display Options and deselect "Show desktop overlay when using media keys". For some time I had to do that after every Spotify update, but it seems to stick now.
Your explorer issue is Windows Defender's realtime protection. If you toggle it off you'll see the operation you're trying to run complete almost instantly.
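A middle ground, rather than toggling realtime protection off entirely, is to exclude just the folder that keeps triggering long scans. Here's a sketch that shells out to Defender's Add-MpPreference cmdlet (the path is a placeholder; it needs an elevated prompt, and exclusions carry their own risk):

    import subprocess

    # Hypothetical folder that keeps triggering long on-access scans.
    FOLDER = r"C:\Users\me\Downloads"

    # Add-MpPreference requires elevation; run this from an administrator shell.
    subprocess.run(
        ["powershell", "-NoProfile", "-Command",
         f"Add-MpPreference -ExclusionPath '{FOLDER}'"],
        check=True,
    )
    print(f"Defender will no longer scan {FOLDER} on access.")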
I like Windows 10 and generally have been happy with it -- but that particular behavior has been driving me nuts for a while. They really need to fix it.
"AV makes any computer slow" is still true. Scanning every file every time it's accessed simply can't take zero time.
"But think of the security!" they'll say... of course, it's always a tradeoff. I've experienced a similar problem with a large growing logfile --- appending becomes essentially quadratic, every time the process closes the log the AV opens and scans all of it.
I first noticed it when I would open my Downloads folder (because I'm a heathen and never clean it out), and it would sit there and "process" for fifteen minutes. Just clicking realtime protection off fixed it instantly -- but there's no need to re-scan every file every time a folder is opened: scan them when I click on them, or when I try to open them, or in the background while I can still interact with the folder. Being unresponsive is a cardinal sin for any UX, and that's where Apple shines and MS still continues to drop the ball.
Like I said, I really like Windows 10 -- but it really falls short here.
Oh god, that's exactly what I have. There's nothing we can do short of disabling defender? Are there AVs that are as secure and don't slow explorer down as much?
Defender is probably the least offensive AV in existence, sadly.
Frankly I think the entire concept of AV is bankrupt as it always causes many more problems than it is worth. Both false positives and false negatives are common. You shouldn't be afraid of running without Defender because, like all AV, it isn't very good at catching real threats anyway. If you really want to feel safe there are much better ways of doing that, like running untrusted programs in sandboxes or VMs, or using Software Restriction Policies to whitelist what can and cannot run. I haven't played with it at all, but there's even Controlled Folder Access to limit FS rights by application, which isn't a terrible idea given the era of crypto-ransomware.
I ran without any AV for years and never had any issues. Intelligent web use is generally a better protectant than AV programs. I use Defender now, grudgingly.
As you say, CFA is probably the most useful part of Windows Defender. I was very glad to see that included.
Presumably there may at one time have been some type of malware that executed through thumbnails with embedded data, or some other type of convoluted nonsense that the scanners had to protect against. I think I'd rather take the risk of that, though, over the accumulated productivity loss of watching explorer hang for 5 seconds every time I open a large folder.
That's accurate. And I will say that I've seen it hang for as long as fifteen minutes opening up my Downloads folder - which isn't all that big really.
I've been using FreeBSD, then Linux since 2002 and I'm seriously considering moving to Windows, which I haven't ran on any of my machines since back when Windows XP was fresh. If you think all the stuff you listed is bad, wait 'til you run into Gnome, GTK and KDE, where not only do windows not remember their previous position on multi-screens, but desktop icons don't remember their position on a single one :-).
(Or you can't have them at all without an extension, yeah, that too...)
Windows has progressed by leaps and bounds since 2003. Linux, not so much. We have this fixation on building something, then deciding it's full of legacy code that doesn't allow us to build what we really want, so we throw it out and do it all over again.
The good news is that we've been on the "building" side of this pattern for a while. The bad news is that we've been on the "building" side of this pattern for a while so I expect there's not much time left until the next "revolution"...
> wait 'till you run into Gnome, GTK and KDE, where not only do windows not remember their previous position on multi-screens, but desktop icons don't remember their position on a single one :-).
KDE has explicit settings to remember window position which works well. As for icons, I haven't had icons on my desktop besides a dock for years, so can't tell you.
Yeah, that was weird. I remember that as the only time I actually felt that Linux had a decent shot at the Desktop. New money had rolled on to the scene with a coherent vision, started fixing a lot of the suck, and quickly became the predominant distribution.
Then something happened and Canonical went crazy with NIH and mobile obsession. Then they tried to monetize in a stupid way.
Let's stop with the NIH blaming. Canonical could not change GNOME's direction, so they had no choice; if you remember, Canonical wanted to add some fancy scrollbars to GTK but they were rejected, and later, when GNOME wanted the same fancy scrollbars, GTK accepted them.
I am a KDE user and I watched this from the sidelines, so I am not invested in either camp. What we see is that when a maintainer with a big ego controls a project (it happened with KDE Plasma too), you can't bring in any improvement that the maintainer does not agree with.
Yeah, after sixteen years of this stuff, I definitely needed someone to tell me that there are other DEs than Gnome :-).
How are things in XFCE land, do you have a sane "Open file" dialog, or do its (so, so few...) applications use the new and improved GTK 3 version? Speaking of which, has it moved to GTK 3 completely, or do only some of its tools work on hi-DPI screens, and only with a few themes?
I am content with Cinnamon, the default in Linux Mint. It looks great IMO, and configuration is easy. The Mint developers listen to their users.
I honestly don't get why people recommend cmder. It's really slow, even compared to the slower terminals on Linux. Also, and more importantly, xterm emulation seems to be utterly broken. I get weird incorrect lines shown all the time.
I found the WSL console to be at least a bit more reliable.
Also don't underestimate our willingness to try and make our inbox stuff better - it just takes a lot of time to add new features without breaking some back-compat. We're hard at work improving conhost every day :)
I actually just installed that before you mentioned it, and disabled its sound in the volume mixer. Finally something I can live with in terms of terminal.
Also, if you want a tab-like experience, you could always try tmux. It's a Linux command-line tool that gives you tabs, panes, and all sorts of other goodies, and as of 1809 you can even use cmd.exe within it.
When I first tried W10, I noticed a distinct lag before the start menu opens. Often, menus would pop under the taskbar, instead of over, being unreadable and unclickable.
I assumed they pushed it to market before it was ready, allowing users to find the bugs to save on testing costs (helping justify why it was free). Only, now, years later, the start menu still lags, and things still open behind other things.
Combined with all the mysterious data it sends to various IP's, I think that when you press Start, those first 300 ms are spent as part of some sort of distributed computing effort.
Oh users do find these things. Every time they release a new "Feature Update" full of problems (which is to say, every time they release a new Feature Update), it turns out those problems had all been well reported by people on Insider Builds. Nearest I can figure, in MS's continuing effort to Embrace OSS software, they have contracted the disease that prevents them from paying attention to what users are complaining about.
The slow start menu is awful. I want to press start then type what I want to open but it is so slow you have to wait nearly 2s to type or it will lose the first few characters. Add to that the search result quality is dire.
There ought to be an unofficial Windows 10 Settings app that uncovers stuff like this. Somehow Microsoft isn't quite getting it right and they ship Windows 10 with two different ones...
For me start menu is instant (well, animation takes few milliseconds, but it starts instantly) and I can start to type anything immediately, like <Win>cmd.
You’ll find the slow file operations are entirely down to NTFS. Regardless of how you play around in fsutil it’s hopeless on lots of small files. This incidentally makes WSL unbearably slow. Hence why I still use VirtualBox and putty.
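It's easy to see for yourself: time a burst of tiny file creations on NTFS and again inside WSL or a native Linux filesystem, then compare (a rough sketch; numbers will vary wildly with Defender on or off):

    import os
    import tempfile
    import time

    N = 5000  # a few thousand tiny files - the pattern that hurts on NTFS

    with tempfile.TemporaryDirectory() as d:
        start = time.time()
        for i in range(N):
            with open(os.path.join(d, f"f{i:05d}.txt"), "w") as f:
                f.write("x")
        elapsed = time.time() - start

    print(f"{N} small files in {elapsed:.2f}s ({N / elapsed:.0f} files/s)")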
I was suspecting as much. I wonder why NTFS gets so much praise, especially compared to HFS+. Yes, HFS has less features, but at least it lets me get my work done.
I've read/heard a couple of times from people ridiculing Apple's HFS and pointing out all the ways in which NTFS is superior, so in my mind at least, the filesystem was never something I thought of as an advantage on macOS. I guess I was wrong then.
I think it depends on your metric. I don't know what's wrong with HFS, but I do know that Apple drops these .DS_Store entries all over the place, things that look ideal to be stuffed in NTFS streams.
I'm sure there are many metrics on which HFS is superior to NTFS, and probably many where NTFS is superior. NT (and MS in general) has never philosophically worked well with more than N of X, where N is significantly larger than what a consumer would deal with - whether it's processes, files, TCP connections, whatever. Their consumer heritage usually finds a way to shine through.
> Anything that does lots of small writes, so basically anything unixy, suffers like this.
To be honest, when NTFS was conceived it probably didn't have as a design goal that common Unix patterns should be fast. After all, Unix applications use the file system for lots of things where Windows has different mechanisms for that purpose.
SVN sadly has to traverse the complete working copy and lock every single directory individually because every directory is also a working copy on its own. Most of the time SVN spends on, e.g., update, is spent on locking and unlocking. Git/hg only need to do this in one place and avoid that problem.
Never checked whether it was an NTFS issue, but it felt like a logical explanation - when starting Stronghold Crusader through Wine on Linux, loading took like 0.5s, compared to 10s or so on Windows.
Didn't bother to check the numbers, but it was an order of magnitude for sure. Way more than just noticeable.
It's definitely not 'entirely'. I have a folder with a bunch of small files where explorer and other programs sometimes hitch for tens of minutes, and WSL can reliably process in seconds. It's too many seconds compared to native linux, but it's still seconds.
If you turn it off it still runs like ass. I have spent hours on this one even down to fsutil tuning and I can't get more than about a 10% improvement.
This is because small files are stored in the MFT, which is globally read/write locked.
It's been okay since I upgraded to Windows 7. The lack of updates may become an issue at some point. On the other hand, the lack of updates can also be a boon. I really want to swap to Linux, but can't let go of Visual Studio. The second WINE can get Visual Studio running, I'm out.
Windows 7 is the best OS M$ ever made. If it had DirectX 12 (and subsequent updates like raytracing support) I don't think I would ever update.
Things work, the OS doesn't get in my way, it's fast, and it takes relatively little RAM (well, not compared to XP, but these days it's OK).
Never tried Win10 because everything seemed worse, but I had hope they would polish things over the years. When I read comments here, it doesn't seem so. Sad story...
Care to explain please? This is the first time hearing such an opinion.
I require an OS to just stay out of my way, make hardware and software work flawlessly, take as little in the way of system resources as possible, and ideally make me unaware of itself. I don't spend time in the OS and what it provides; I spend time in the software running on it.
File-related operations are done in Total Commander, a way more efficient tool than anything MS has created in the past 30 years (once you get used to it). Anything else is also third-party.
1. Windows 10 has a good design. The last time Windows had good design was Windows 2000. Windows XP was terrible, and that "pretty" design carried over into Windows 7 as well. I could switch to the "classic" theme, but Windows 10 is even better.
2. Windows 10 has the Linux subsystem, which is a good enough replacement for the Linux VM I always used for some tasks.
3. Windows 10 has multiple-desktop support, much better than macOS for my needs, for example (those desktops are actually separate, so I can have a gaming setup on one desktop, a work setup on another, a hobby setup on another, and they are like different computers).
4. Windows 10 has notification support, which is nice to have.
5. Windows 10 has awesome improvements to good old cmd. It's a really bad feeling when I have to use cmd on old Windows versions.
6. Search in the Start menu is useful (I don't remember whether Windows 7 had that). Also, Windows 10 has tons of tiny features which are not very noticeable until you don't have them - Win+X, for example.
It does not get in my way any more than other Windows versions (I'm always reading those scary stories about sudden updates, but I don't really experience that; once a month my "power off" button is replaced with "update and power off", and that's all I notice). I have enough system resources (16 GB RAM) so those are not really a concern for me (Google Chrome sometimes eats more). I don't spend any significant time with the OS: I'm using IntelliJ IDEA, I'm using Google Chrome, I'm playing some games, and the system just works.
OK, I see. I don't use most of the things you mention and I like the Win7 Aero interface, but it's good to know it's not all doom and gloom upstream. Anyway, eventually I guess I won't be able to avoid it; MS doesn't seem keen on releasing a next OS anytime soon, and eventually most games will probably be DX12-only.
> Adding insult to injury, there are ways in which Windows users could have enabled KFR without really knowing that they did so or meaning to do so. The OneDrive client, for example, can set up KFR for the Documents and Pictures folders, if you choose to enable automatic saving of documents and pictures to OneDrive.
Worse still, the OneDrive client apparently left documents in the original locations which the Windows update would then delete:
> The current OneDrive client will set up KFR and then move any files from their original location to the new OneDrive location. Older versions of the OneDrive client, however, would set up KFR but leave existing files in the old location. The October 2018 Update would then destroy those files.
Microsoft's own software which they bundled with Windows 10 and nagged Windows 10 users to set up and use triggered a data loss bug which caused Windows 10 to delete those users' documents - all because Microsoft didn't think it through, didn't test properly, and didn't take any notice of data loss reports from external beta testers.
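For anyone not sure whether KFR is in effect on their machine, the redirected locations are visible in the registry. A minimal read-only sketch (assumes the usual User Shell Folders key):

    import os
    import winreg

    # Where the shell currently resolves the Known Folders for this user.
    KEY = r"Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders"

    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, KEY) as key:
        for value_name in ("Personal", "My Pictures", "Desktop"):  # Documents, Pictures, Desktop
            path, _ = winreg.QueryValueEx(key, value_name)
            print(f"{value_name:12} -> {os.path.expandvars(path)}")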
LTSB is just lovely. Maybe the best part of having access to an MSDN subscription these days. It'd be worth paying a premium for as a consumer, if they ever wanted to offer it; manufacturers are slightly better than they once were, but it's still a necessary first step to wipe the disk and install a crapware-free Windows immediately on a new machine, and LTSB is far and away the best for that.
WSL would be nice, but all in all it's better to just spin up an Ubuntu VM in Hyper-V.
I believe it also broke File History and Windows Backup (if you use those). I had a client I had to restore from backups; thankfully they only lost one small file in the mix of it all.
Honestly, I don't care. I've disabled Windows updates for the time being since I no longer trust their update QA. I can deal with a non-functioning system. I can't deal with files being deleted outright that don't belong to the system in any way.