Windows 10 will use protected folders to thwart crypto ransomware (helpnetsecurity.com)
185 points by Errorcod3 on July 3, 2017 | 169 comments



One of the "features" (back in the day) of running a diskless system was that you could set the change policy on the server hosting the files, which was completely out of reach of the "client" machine that was running the program. For nearly all of the system files there was no reason for them to change. NetApp turned this into a huge win when they could use snapshots to support multiple VM images with just the small configuration changes.

Given the well-known benefit there, and that the processor on your hard drive is about as powerful as your phone, why not have the drive support files that are 'read only' unless allowed to change out of band? Here is how it would work.

Your disk works like a regular SATA drive, except that there is a new SATA write option which can write a block as 'frozen'. Once written that way the block can be read but not written. You add an out-of-band logic signal and wire it up to a switch/button that you can put on the front (and/or) back panel. When the button is pressed the disk lets you 'unfreeze' or write frozen blocks; when it isn't pressed they can't be changed.

Now your hard drive, in conjunction with a locally operated physical switch, protects sensitive files from being damaged or modified.
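
To make the idea concrete, here's a minimal Python sketch of the proposed semantics, with the out-of-band switch modeled as a plain boolean (all names are illustrative, not a real SATA extension):

    class FrozenBlockDrive:
        # Toy model: writes to 'frozen' blocks are refused unless the
        # physical write-enable switch is held down.
        def __init__(self, nblocks):
            self.blocks = [b""] * nblocks
            self.frozen = set()
            self.switch_pressed = False  # out-of-band signal; no software path sets it

        def write(self, lba, data, freeze=False):
            if lba in self.frozen and not self.switch_pressed:
                raise PermissionError("block %d is frozen" % lba)
            self.blocks[lba] = data
            if freeze:
                self.frozen.add(lba)

        def read(self, lba):
            return self.blocks[lba]

    drive = FrozenBlockDrive(8)
    drive.write(0, b"kernel", freeze=True)  # OS install, switch held
    drive.write(0, b"ransom")               # raises PermissionError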


So basically, there's a switch on my computer which I have to flip every so often or things stop working? Or maybe I can just leave it in R/W mode because I'm tired of flipping a switch every time I ctrl+S...


With a versioning file system, it should be possible to save without overwriting locked blocks. The only time you'd need to flip the switch would be to free up disk space by irreversibly deleting files or old versions of files.


So "append only" not "disable writes", got it. I could see that being useful in auditing logs.


Actually, it's a switch you would have to flip when you wanted to update the OS or any file that had been marked as read-only.

All it does is convert something which is currently invisible (the bad guys escalate privileges and then can stomp all over anything) into something that requires you to stop and say "OK, you can stomp on things."

Typically that would be unexpected if you weren't updating the OS, but sure, social engineering always works, as is mentioned elsewhere.

The goal is just to add depth to the security to slow them down.


Sounds like exactly what UAC did in Vista. It prompted so much that people either turned it off, or just blindly hit OK. Subsequent versions toned down the alerts to what we have now in Win10. Making it a hardware switch doesn't change the fact that the average user will just quickly learn to flip the switch anytime something doesn't work quite right and we are right back where we started.

The average user is never going to be protected by something they can switch on and off at will. They will never understand the complexity around when it is OK to switch it off.


The constant UAC notifications weren't a Windows thing as much as an apps thing. While Windows itself was more or less reasonable with the warnings, lots of apps assumed it was still XP (or just hadn't been updated) and that they could do whatever, wherever.


How is two or three times a year like UAC?


It seems like there's a spectrum of outcomes here. As a user, I don't particularly care about system files, I care about the state of the system. In fact, if all this does is protect system files, then a ransomware attack would just wipe out all my user files and revert the machine back to a fresh install-- an identical outcome to if I just reformatted the drive and reinstalled Windows. Not helpful!

Really, this scheme needs to include the entire contents of the drive-- "freezing" the restore points Windows makes automatically. Now the tradeoff is how often you annoy the user/how fresh the backups will be. Once a year is obviously too infrequent, once a second is too often. Once a week, or month. Maybe do it at the same time as Windows does a system update, as you suggest.

You don't need fancy hardware support for this, just a NAS backup box with a client that doesn't let you erase older snapshots.


If you want to declare program files and system settings unprotected along with the user files, you could just continue to use UAC and make it only trigger two or three times a year. It'll protect you about as well as the switch, which is very little.


Sorry, I didn't explain well. I wasn't really making a point about the frequency. The point I meant to make was that anything the user learns has to be switched for things to keep working won't protect them, because they will just flip it when the malware asks them to. Which is exactly what has happened with UAC.


The OS we can reinstall; we don't care about its files. We care about our Excel files with the accounting books in them, which we cannot put in read-only mode because we keep them open and update them every day.


Something like a versioning system with the past versions immutable.


People might not want to use that since you can't redact mistakes that could get you into trouble. There are a wide variety of scenarios that are mostly innocent but would need to be addressed. And if you can destroy data, that leads to the original problem.


There's already no guarantee your data gets deleted on modern filesystems. And in corporate environments you can assume your data gets backed up transparently in the background all the time. Beyond making it more explicit I don't think anything would change in practice.


VM snapshots work this way: e.g., persist changes, or discard changes and revert to the previous snapshot.

This is more feasible with a file system where snapshots are efficient, like ZFS. With client-side virtualization (e.g. Qubes), snapshots can be done outside of Windows.
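
For example, a rolling snapshot could be as small as this cron-driven sketch (the dataset name is an assumption):

    #!/usr/bin/env python3
    # Take a rolling ZFS snapshot; run from cron every hour/day.
    import datetime, subprocess

    DATASET = "tank/home"  # hypothetical pool/dataset
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M")
    subprocess.run(["zfs", "snapshot", "%s@auto-%s" % (DATASET, stamp)], check=True)

After ransomware hits, recovery is a rollback to a known-good snapshot (zfs rollback -r tank/home@auto-...), which malware running inside the guest can't reach.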


Reminds me of this XKCD: https://xkcd.com/1200/

Point is, nobody cares about the OS files. They care about their documents and data, and their logins to various secure systems. Securing system files alone doesn't really help much.

Probably the best thing is basically to be a Chromebook - the OS is signed and locked-down, can't ever be changed except for by signed updates from the mothership. Documents are meant to be stored entirely on the cloud. No support for running (unsigned?) apps locally, and even if they did, it wouldn't do much, because all of the data is on the cloud anyways.


Came to basically say the same... Also, a big fan of Chromebooks for most users... especially as many intranet/internal applications are now web based.


Well of course, but one reason why people want to make system files secure is because if those can be messed with it makes it so much easier to do all the bad things in that XKCD - without detection even.


I would like to have a physical write protect switch on drives I connect via USB ports. It would be great for backup drives, so you wouldn't inadvertently goof them up when restoring from them. (Like get the arguments reversed in an rsync.)

I used such switches a lot with floppy disks back in the 1980s.


Wasn’t write protection a thing on old USB flash drives? I have a 32 MB drive somewhere that has a WP switch on it.


I remember looking for those models some years back.

http://www.fencepost.net/2010/03/usb-flash-drives-with-hardw...

https://eikonal.wordpress.com/2010/05/21/usb-thumb-drives-wi...

Ought to take a look again to see if they're available now. Even if they still only come with a few gigabytes of storage there would be some nice applications.


https://www.digitalintelligence.com/products/usb3_write_bloc... exists, but they're not priced for consumers.


SD cards have this. It's very useful.


Except the SD card switch is honored by the driver, not enforced in hardware.


Software-based switches are not reliable, sigh.


MicroSD adapter cards come with hardware switches. At least all mine did.


Eh, that's a load of work for something that'll require constant manual intervention. Besides, block-level protection is the wrong level of abstraction and will get in the way of getting anything done, unless you rewrite the whole operating system to be aware of it (just run lsof / openfiles).

A software-defined version would be: make an opt-in sandbox for processes that ties a folder and its contents to a single executable, with the executable pinned by the operating system, and let the whole thing be mediated by the kernel.

Of course that's only as tight as the kernel's security, but if you're worried about that, offsite incremental backups are a cheaper answer.


Understand that it isn't required to be set to change files; it is only required to be set to change a block on disk which has been marked as protected. If your OS marked nothing as protected, it would never have to be set.

If you look at the time stamp history of your 'system' files on a windows system (C:\Windows\System\*) you will see that the files have a change frequency of approximately once every 3 to 6 months.

That would correlate with when you would need to flip the switch to allow an update (ie very rarely, like 2 or 3 times a year).


Well...but I don't know which files will be updated when, and Windows 10 likes rebooting itself automatically to update files. On top of that, I'm not terribly worried about malware encrypting Windows, more like it encrypting my own data files, which change much more often than several times a year.

Some malware gets delivered to my computer as a new file, and the code that it contains begins to execute as an admin user. Its attempts to mess around with C:\Windows are thwarted by block-level protection. Instead, it goes into C:\Users\Me and starts encrypting stuff (not block-level protected, because it's all constantly changing anyhow). So, it can't modify the Windows system files that I could just re-install anyhow, but it can modify the creative output on my disk that only exists in a month-old backup copy, or something.

Given that scenario, I'm not sure how much help the ability to write-lock blocks would be.


If you rely on the OS to mark protected sectors, the physical switch becomes moot. Malware can just stay quiescent until a privileged action is required, especially if the switch is a disk-wide all-or-nothing.

Also, system files aren't really the target here; the issue is userland files. Who cares that the OS is safe when the data is encrypted?


Seems like it'd be a lot simpler and more reliable to just rely on the server to create frequent snapshot backups of user data in a place where malware couldn't touch it.


The drive could just not overwrite blocks and expose an interface to access old copies. Flash drives already do this except for the interface to access old blocks.
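
A toy Python model of what that interface could look like (versions kept per logical block, purely illustrative):

    from collections import defaultdict

    class AppendOnlyStore:
        # Never overwrites: each write appends a new version of the block,
        # and old versions stay readable through a side interface.
        def __init__(self):
            self.versions = defaultdict(list)  # lba -> [oldest, ..., newest]

        def write(self, lba, data):
            self.versions[lba].append(data)

        def read(self, lba, version=-1):
            return self.versions[lba][version]  # -1 = newest, 0 = oldest

Ransomware "overwriting" a block just appends another version; read(lba, 0) still recovers the original.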


Similar to this there is a program I use called Faronics DeepFreeze. It allows you to freeze a drive (the OS drive is what I use it for).

The difference is that it allows writes but any modifications are removed on reboot. I use it to lock down public access machines, users get a network drive they can write to but without being able to modify the OS they can't do much damage.

Not the solution you're presenting, but it works pretty well.


Not entirely sure why you're getting downvoted - I can only assume it's a kneejerk reaction at the mention of DeepFreeze, stemming from residual bitterness due to all the installations of Halo CE that were removed from school computers thanks to DeepFreeze.


Why not just mount / as a ramdisk then? Live CDs do something like that IIRC where the file is read from the CD/USB and any changes are kept in RAM.


Can you do that with Windows? I also imagine you'd want plenty of RAM. The public access machines I manage are not exactly powerful.


I agree with others that append-only is the best way to accomplish this. Maybe with an additional feature that specific files won't be overwritten when it starts running out of space. As far as doing stuff on the HD goes, there were Australian products in the high-assurance sector that had user profiles with access controls on partitions. Most products like this disappeared since even the military wasn't buying enough of them. Here's one you could build on that retains lots of good capabilities:

http://securesystems.com.au/index.php/high-assurance-silicon...

The other reason these products didn't take off is that it's really the operating or file system's job to do this. That's where it's easiest to enforce access control, whether using labels or just crypto. There were and are systems that can do that with small attack surfaces (i.e. TCBs). So, the integrators offer a combination of stronger OSes (e.g. trusted OSes, separation kernels) for data in use and encrypted drives for data at rest. Two examples: one of the first security kernels (GEMSOS) enforcing MLS at the FS level, and a modern, crypto-oriented filesystem with a small TCB usable in a variety of settings.

http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=048...

https://www.usenix.org/legacy/event/atc11/tech/slides/weinho...

The first one was deployed in the field for a variety of applications including controlled access of files. Similar kernels were used in databases. The other one could be modified to do access control (i.e. write-protect) on files that had been labeled as such by the operating system when it was in a clean state after trusted boot. It would be a configuration sent over IPC to an isolated app w/ privileged access to secure filesystem.

So, there's how I see it happening. The hard disk could also be used as an accelerator by offloading interrupt handling, some file access, and the crypto parts. The filesystem would then be mainly doing startup and handling issues reported by hardware. They'd have to be designed compatibly, though.


I feel Copy on Write can give you all the benefits you seek, more easily. In case of fire, roll back changes. Take daily snapshots once they're verified as OK.


The problem is that someone will then work out a way of hacking the drive so that it ignores the physical switch and allows them to change the contents. Then you have lots of inconvenienced users having to use a switch which ends up not protecting them anyway.


Not necessarily. It would be straightforward to engineer the drive such that the switch could not be overridden by firmware updates. Whether they would actually build it this way is another question.


Okay, so I know Windows probably doesn't actually work this way, but from a user interface perspective... what's the rationale for giving an app permanent access to the user's home folder directories? Don't most well-behaved apps have a file open / folder open dialog, which should be able to grant access to files at runtime? If the file opening dialog is provided and controlled by the operating system (I realize many, many legacy apps work differently in Windows) then the OS can silently grant permissions at the time of open, rather than letting apps have either free rein or no access at all.

I feel like this is the expected behavior anyway; Power Users may run utilities that need to touch the whole system, but most regular users are doing pretty good to juggle more than a handful of open files in their mental model of the machine while they're using it. The idea of file permissions is already pretty foreign to the average end user. Applications already have a designated area (%APPDATA%) where they can store their temporary files and things, so perhaps the documents folders should be more locked down by default.
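
The pattern being described is roughly a "powerbox": the dialog runs in a trusted context and returns an open handle rather than a blanket grant. A toy Python sketch of the shape (in a real OS the broker would be a separate privileged process, not a function in the app):

    def broker_open_dialog():
        # Trusted side: the user's choice *is* the permission grant.
        path = input("Pick a file to grant the app: ")
        return open(path, "r+")   # hand back a capability (a handle), not a path

    def untrusted_app(get_file):
        f = get_file()            # the app only ever sees the chosen file
        return f.read()

    print(untrusted_app(broker_open_dialog))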


The main problem is that the file open dialogue generally runs in the app's memory space, at which point we can't stop the app corrupting it in any way it likes.

I hope we are moving to a world where apps are built of separate processes, most of which have minimal access. If nothing else, this will make many old buggy C libraries (including code I have written) much less dangerous.


The file dialogs already can handle scenarios where you don't have permission to access files/directories. You just use that.


What the parent comment (and some siblings) are saying is that the file open dialog's behavior can probably be modified by the application that calls it, e.g. faking the effect of the user clicking Open on every single file.

That could maybe be prevented by keeping the relevant sections of memory marked as read-only, and maybe it already is.


Heck, most CLI tools keep credentials in text files which are very easily read by random apps. Sometimes people will keep a whole bunch of API keys in a single .bashrc file, which gets passed down to every child process.

On macOS I've been experimenting with creating a separate keychain for storing most of my API keys. Once keys are stored securely, you can write a wrapper for each tool. The wrapper just has to read the value from the keychain and call the original. That way you lower the chances of keys being needlessly shared. It has good UX too, since it only has to prompt you when it's first used. Although for that to really work I think you have to sign the wrapper, otherwise anyone can just edit it.
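
A sketch of such a wrapper, using macOS's real `security` CLI but with hypothetical service and tool names:

    #!/usr/bin/env python3
    # Wrapper: fetch the key from the keychain, then exec the real tool.
    import os, subprocess, sys

    def get_key(service):
        # `security find-generic-password -w` prints only the password;
        # the Keychain prompts for approval on first use.
        out = subprocess.run(
            ["security", "find-generic-password", "-s", service, "-w"],
            capture_output=True, text=True, check=True)
        return out.stdout.strip()

    env = dict(os.environ, EXAMPLE_API_KEY=get_key("example-cli-key"))
    os.execvpe("example-cli", ["example-cli"] + sys.argv[1:], env)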


That is how the Windows sandbox for store apps works: the applications cannot access files directly.

The problem is getting everyone on the store train, and to move away from classical desktop.


It's unfortunate that Windows seems to conflate sandboxing applications and central control of which applications are available. I'd love all the apps on my system to be sandboxed, but not if I lose the ability to install "unapproved" apps at the same time.


This isn't a thing on Windows. You are not required to go through the store in order to install or run UWP apps.


It originally did, but doesn't anymore. Version 1607 last year both added the ability to double-click install modern (sandboxed) apps outside the store and for the store to carry desktop apps, so that there's no longer any necessary correlation between sandboxedness and storeness. (personally I'm not sure this is entirely a good thing TBH)


The thing is most of us would rather live outside of the store-land and take our chances than be restricted in that way.


I'd rather live in the store than have $RANDOM_APP taking over $HOME, even on UNIX-based systems.

All my UNIX-like OSes are locked down as much as I can manage, and yes, I do enable SELinux, AppArmor, seccomp, Gatekeeper, SIP and friends.


The best protection against malware / ransomware, hardware failures or stupid accidents of your own doing is provided by regular backups, not by app store restrictions that don't work anyway for the stated purpose.

I do not have SELinux/AppArmor installed, I can't remember the last time I installed an antivirus and it doesn't matter, because I have backups with file versioning going back for a whole year of everything important.

My laptop could burn in a fire right now and I wouldn't lose anything.

But then I don't remember the last time I had problems with malware, because I don't install random software from shady internet sources either.

It's typical of software developers to solve social problems by automation instead of education. It doesn't work well, it never did.


> most well behaved apps have a file open / folder open dialog

https://msdn.microsoft.com/en-us/library/windows/desktop/bb7...

This is one of the "common dialogs", and as mentioned elsewhere it runs in the app's memory space so you can, if determined enough, mess about with them. They also run all your shell extensions, which is a fun place to put malicious code.

What might be viable is UAC-style privilege requesting to get out of a sandbox, but that kind of thing was really unpopular when UAC was introduced with Vista.


Not on UWP; there you don't have any control beyond the "I need to open a file" request picker.

https://docs.microsoft.com/en-us/windows/uwp/files/quickstar...


This is essentially how the sandbox works on macOS, from what I understand. 90% of applications should work fine with this. Some, though, like antivirus (as an example), can't really do so.


Reality is that almost all applications do not use the sandbox unless they are forced to. At least in my experience: I have 16 installed apps and only 2 from the App Store. Check out Android. It has a very fine-grained permissions model, but most developers don't care and ask for a lot of permissions even for the simplest apps. It turns out that users don't care either. I'm not sure how it works for iPhone, where an app requests access to some specific, very privacy-related functionality, like location or address book, but I think that even on iPhone most users will press "Yes" without a second thought or even careful reading.


There are many scenarios where an app opens a user's files besides the interactive Open File dialog: the last file worked on, projects, etc.

Open/save file dialogs basically return just a file path. Actual file access is a separate API call.

Requiring interactivity in all scenarios would hurt UX badly, as the UAC story proved.


macOS does exactly this, although its sandbox is opt-in for the app and there is no easy way for the user to see if an app is sandboxed.


I've always wondered why Windows and other OSes don't offer a 'cold storage' area where you need to thaw out files before editing. Files not modified within a selected time freeze from further modification. I've got plenty of files that are archived that I'd never want to change, but it's a hassle to unmount/remount just to add a new file to an existing directory.


How about just enabling Shadow Copies by default! I don't understand why Windows has great "Time Machine-like features", but every fucking time I right click, go to Properties, and look at the "Previous versions" tab, it is completely empty.


Probably because the typical anti-MS comments would be worse for them than the risk of ransomware (from their perspective):

"Windows eats all my hard disk!! I've updated to <windows xy>/Windows did an update and now all my disk space is gone!!! Don't update!!!!"

"New MS update steals your disk space, here's how to stop it"

And so on, and so on.


Could be worse, like a circulating recipe for how to completely remove VSS by removing system files related to shadow copy services, or something like that.


No, it wouldn't be turned on by default. If it was, every piece of software that you run occasionally would break. This would be opt-in for certain folders.


No, this thread is about shadow copies being turned on by default.


Because it's a PR nightmare?

I have seen otherwise smart, famous people flip out in public when they learn that Windows has a built-in window recorder. The same folks have no concerns with their video drivers doing the exact same thing.

That said, I think the only really safe way to do this is a history chain that lives off-site. That means copies to Azure, and even with great crypto and blocklists, that's not going to fly in the news.


Ransomware typically deletes shadow copies (and any other local backups it can get its hands on).


My observation is that people who buy Macs also buy an external drive for Time Machine but Windows PC buyers don't usually buy an external drive and turn on File History. Slightly different culture, I guess.


macOS will say "You plugged in an external drive. Do you want to set it up with Time Machine?" Windows, IIRC, doesn't ask you if you want to set up a backup if you plug in an external drive.


I'd also really like Microsoft to develop the Application Guard (app in a VM) feature faster and make it widely available to almost any app, or at least any browser, and of course to everyone, not just enterprise users.

Microsoft has some interesting new security features on its roadmap. Unfortunately, 90% of them are enterprise-only, and some work only with its own applications.

It also wouldn't hurt to overhaul/replace UAC with something better, but I imagine that would require deeper architectural changes (which I think would be worth the pain).

Microsoft should also push users towards creating a Standard account when installing Windows, and setting up an Admin password, too. It shouldn't be too difficult/disruptive. They just need to create an easy process for it at installation.

The vast majority of Windows malware infections happen because users are also Admins. This alone would give Windows a huge security boost on average.

https://www.avecto.com/news-and-events/news/94-of-critical-m...

Once they do this, they could also start encrypting Windows devices by default with the Admin key, similar to how Android does default encryption.

Windows is pretty much the last major operating system not to encrypt by default. Hopefully, if they do this, they at least give users the option to keep the key locally, and not automatically upload it to Microsoft's servers, as they do now if you log in to your Microsoft account.


It still won't help against dumb users that think security is only about inconvenience.

Go to macOS user forums and you will see lots of discussions about how to turn off Gatekeeper or be "always root" user.


> don't offer a 'cold storage'

If the malware gets privileged access, it's game over. If it can't, good file system permissioning fixes the problem.


How is 'thawing' your files more/less of a hassle than mounting a drive?


The file would still be there and available, even more so as it would be "frozen readonly", you just get a request on write access.

Whereas with drives you don't have access to the file if it's not mounted, you have to know on which drive the file you're looking for is, go through the mounting process, then actually find the file. And if you want to alter the file you have to remount the entire thing, and possibly need to track down that one daemon which still has a handle open and prevents you from unmounting it.


An idea I have is to implement simple versioning on the thaw folder. If a file is edited, the changes can then be reverted.


Isn't that just a CoW filesystem?

Just use e.g. ZFS and configure it to take regular snapshots (though you may want to be careful, a few years back if a zpool got completely full things got wonky, dunno if they've fixed it, so you may want to keep some disk space outside the zpool just in case, so you can expand the zpool enough to work if it gets completely full)


OpenSolaris derivatives (eg OmniOS, SmartOS) are able to present the snapshots over SMB as windows previous versions.

I have the server at work, which only has windows desktop clients, making 1-minute snapshots with a one-day lifetime in addition to daily, weekly, monthly etc. It's very handy for the occasions where you accidentally save over a document and slap yourself in the head immediately afterwards.


And I have a counter-idea: encrypt N times, so every shadow copy is borked; certainly you are not storing "infinite" history?


I presume the thaw folder would have an upper limit on storage; once hit, you forbid writes entirely.


Interesting idea.

Can you explain the thawing procedure, and how a normal everyday user would experience it?


For Windows it could just prompt the user when an application attempts a write operation, just like UAC; if permission is granted, the calling application wouldn't even notice the difference except for the pause while the open-for-write-access call blocks pending user permission. Done at the same level as UAC, in theory it should be impossible for malware to bypass approval; heck, I'd even be happy typing my user account password to thaw it out.
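
As a user-space toy (the real thing would have to live at the kernel/filter-driver level to be bypass-proof), the prompt-on-write idea looks something like:

    import builtins

    FROZEN_ROOTS = ("/archive",)  # hypothetical cold-storage folders

    def guarded_open(path, mode="r", *args, **kwargs):
        wants_write = any(c in mode for c in "wax+")
        if wants_write and path.startswith(FROZEN_ROOTS):
            # Stand-in for the UAC-style consent dialog.
            if input("Allow write to frozen file %s? [y/N] " % path) != "y":
                raise PermissionError(path)
        return builtins.open(path, mode, *args, **kwargs)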


That's how I would envision it working as well.

Seems like the classic tradeoff between better security and better UX.

Users would complain, and/or try to disable it.


My concern is, first off, that this seems like it is going to break a massive number of applications. It also seems that they are pushing a layer of access management that doesn't have proper support on any platform but UWP.

I see this as Microsoft taking yet another step to force people to move to their new app store model by choking off access to the operating system from any other platform, which I find really amusing because their own top-tier applications aren't built on these platforms (Office, Visual Studio, etc.).


Better update yourself.

The next versions of Office and Note for Windows 10 are going to be store-only.

At Build they also had people from Adobe, Cakewalk and Kodi showing their desktop apps ported to the UWP via the Desktop Bridge.

Like they did with WPF and Visual Studio, they are pushing everyone onto the train by dragging their own devs into it.


> Like they did with WPF and Visual Studio

Except Visual Studio isn't using anything newer than WPF yet, is it?


You completely missed the mark.

.NET developers only started taking WPF seriously after the performance improvements Microsoft made to WPF, which took place after the Visual Studio team adopted WPF to prove its quality.

If you intended to make a remark regarding UWP: first, WPF is not going anywhere, as communicated to those who care to follow what goes on at Build; and second, the architecture of WPF, Silverlight and UWP is almost the same, with just a few differences regarding XAML features and .NET APIs.


I imagine Office, VS etc. are too big to "port" to the app store model. Also, people still use these applications on Windows 7, so that would mean having two parallel versions of the same app and releasing features and support for both.


Office will be "store only" on the upcoming version for Windows 10, check Build presentations.


Isn't that just wrapped for the store in their AppV model, not a true UWP app?


While it isn't a pure UWP app, it is a step into that direction and already a big difference from being a standard desktop app.


The distinction of what makes a "true UWP app" certainly gets blurrier with the "Centennial" desktop bridge, but it's still rather more "true UWP" than AppV is/was.


The last ransomware we saw in the news actually tried to reboot the system and encrypt files before the OS loaded. So unless this new tech is going to protect the MBR (which should be protected anyway), I'm not sure how it's going to stop encryption.


Fun fact, if you manage to replace the osk.exe (on screen keyboard) and flip the registry bit that loads it before the login screen, it will be executed as SYSTEM, with full disk and network access. It can also interact with the Winlogon window and stealthily phish the password of any user.

Heck, it can even self-delete before any user logs in.

Why programs before the login screen and the screen itself don't run in a sandboxed account is beyond me.

Every Windows user with a laptop is running in local admin mode. I've demonstrated this for German TV by having a fileless UAC exploit install osk.exe malware, then having it send the password of the next user logging in via SMS to the "attacker", then delete itself (and remove any antivirus installed).


Fun Fact #2: an easier way is the on-screen accessibility tools. If you replace them with cmd.exe via Windows' very own Recovery Console (on the OS DVD/USB), you can just click the Accessibility icon on the logon screen and get a SYSTEM-level command prompt. It's even documented as a way in (Google is your friend) if you forget your Domain Controller password.

I find it shocking that this even works. However I'd be a liar if I didn't say it saved my arse once.


This is why Secure Boot is a thing.


Except Microsoft leaked their "golden" Secure Boot keys. I don't know all the details of how Secure Boot works, but I am under the impression that if malware gets Administrator access to the system, it can install its own bootloader using one of the leaked keys. Then bypassing BitLocker is as easy as presenting a fake BitLocker screen asking the user to enter the key.


Wouldn't Secure Boot just prevent you from booting into the invalid MBR? At that point your files are already encrypted and your MBR already overwritten; Secure Boot is just preventing further exploitation.


You can get around UEFI Secure Boot by installing an old signed bootloader with known exploits (if I understand correctly, this is why the "Secure Golden Key Boot" exploit of last year[1] cannot be patched without changing public keys in the UEFI firmware). Not only that, the code that is shared by most UEFI implementations is garbage[2] with a large attack surface; exploits against the firmware are a possibility.

The primary function of UEFI Secure Boot is for Microsoft to prevent other operating systems from being installed on as many systems as they can get away with (right now there is no provision that end users should be allowed to disable Secure Boot on ARM devices, for example). The "security" functionality is an unworkable side-effect that provides a convenient fiction to accomplish that goal.

[1] https://www.reddit.com/r/netsec/comments/4wybax/writeup_of_s... [2] https://www.youtube.com/watch?v=V2aq5M3Q76U


Yep!


Completely unrelated, but am I the only one with the impression that MS has switched Windows to a rolling-release OS (like Gentoo or Arch) with infinite updates of Windows 10? This would be a genius move to solve the issue of users remaining on an old unmaintained release, like it was with XP, and like it is now with 7.



Yes, full lockdown and a subscription for anything useful is the new model. Hope you like it.


Yes. Windows 10 would be the last OS from MS. I think they confirmed it.



Just like OS X (10). It's like everybody is afraid to go to 11.

To be fair, software is a recent human endeavor, and except for Emacs, I'm not familiar with software versions over 10.


In operating systems, FreeBSD, HP/UX, and Solaris are all on version 11. iOS 11 is in beta now.

In databases, Oracle and Informix are both on version 12.

I think the lack of high version numbers is not necessarily paranoia, but simply that there isn't much software that's old enough yet.


I'll bite. Chrome, Firefox, plenty of Adobe products.


Chrome is in its 50s :D


Plenty of Spinal Tap reference opportunities in going to 11 though. The iOS 11 presentation did just that :)


I always thought protecting users from malicious code they willingly download and run themselves is futile and a waste of developers' resources.

Do I miss something and this is actually a viable security approach?


It's not going to do much for targeted attacks, but there are definitely ways to limit the damage from large-scale ransomware attacks. As it is right now, ransomware doesn't even need to bother with privilege escalation, because files valuable to users are most likely owned by them. Not to say that all ransomware sticks to just user privileges, but that's usually enough to get the job done.

Having a sort of firewall for file systems that's enforced by the system means that in addition to getting code to run with user privileges, the malware authors need to trick the victims into giving the software root (which might be impossible on enterprise networks), or use a privilege escalation vulnerability to do that.

Of course, people could still click through prompts, allow access to all apps due to warning fatigue, etc., but it's an improvement - if done correctly.


It's one of the few non-futile uses of developer resources, when it comes to security.

It's a virtual certainty that users will download malicious code, so as a security person you're left trying to mitigate the impact when they do.


Given that malicious code is hidden in applications that appear safe or appealing to users, I don't think they are usually willingly downloading malicious code.


> If an app attempts to make a change to these files, and the app is blacklisted by the feature, you’ll get a notification about the attempt

So it's allow-by-default? That sounds useless.

We need a deny-by-default thing. Like Little Snitch but for disk. Every time an app accesses a directory it hasn't accessed before, ask. (Skip asking when files are opened using the system "Open file" dialog, for a bit less annoyance.)
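
A sketch of that policy model, deny-by-default with a remembered per-(app, directory) decision (illustrative only; real enforcement would have to sit below the apps, in the OS):

    import os

    decisions = {}  # (app, directory) -> allowed?

    def may_access(app, path):
        key = (app, os.path.dirname(path))
        if key not in decisions:  # first touch of this directory by this app
            answer = input("%s wants access to %s. Allow? [y/N] " % key)
            decisions[key] = (answer == "y")
        return decisions[key]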


I think that the most recent attack in Ukraine already overcame this obstacle. They were able to use an in-place update system by a trusted software vendor to install their malicious code on the victim's computer. That software would almost certainly have had permissions even under this list, so it's not that effective.


How about using ML to detect profiles of access and disallowing uncommon access patterns? If I only use VS Code to access my source, prevent win-malwr.sys from accessing that folder.
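
Even without real ML, the basic shape is a learned allowlist of which executables normally touch which folders. A toy sketch, reusing the names from the example above:

    from collections import defaultdict

    profile = defaultdict(set)  # folder -> executables seen during a training window

    def train(folder, exe):
        profile[folder].add(exe)

    def is_common(folder, exe):
        return exe in profile[folder]  # False => uncommon pattern: deny or alert

    train(r"C:\src", "Code.exe")
    assert is_common(r"C:\src", "Code.exe")
    assert not is_common(r"C:\src", "win-malwr.sys")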


And then one day you want to zip up the project to send to a friend, run an external linter on it, or make backups. ML depends on an adequate training set, and real life uses change quickly enough to break it.


The OS would confirm that it was an end user taking the action, not malware. It is about the automatic creation of security rules based on observed behavior. The other option would be to create everything manually, which doesn't happen.


Browsers have already taught us how useless this is, users will always click through.


Having an OS that arbitrarily denies applications access to files would drive me mad very quickly. I'm guessing that seemingly unpredictable behaviour would annoy the average user as well.


I'm surprised Google hasn't run a Chromebook advertising campaign which just says "use a Chromebook and never care about ransomware again"


Because files on Google Drive cannot be encrypted?


Google has old versions of all the files, and would immediately revert them when they detected a virus going around.


What if the malware just waits a few months to spring the news that your files are encrypted? Offsite backups don't save you from encryption based attacks.


Yeah and if you use Playskool hammers, you'll never break a thumb again either. That doesn't make Playskool hammers better than regular ones.



On a tangential note, I'm pretty sure the comic is inaccurate; I'm no microbiologist, but I'm about 61% sure that shooting a petri dish full of cancer cells will just splatter the cancer cells all over the place.

Now, if you were to light the dish on fire, on the other hand...


I mean, impact with something as fast moving and hot as a bullet? The ones that directly contacted the bullet at least would likely be dead, from the heat, shock, or both. The ones on the rest of the dish would likely be fine.


This sounds like a feature that will be painful to work with for regular apps, but that malware will easily work around.

I mean, I am no security expert at all, but you kind of need administrative privilege to install malware, so why not keep using it to access all the folders you need?


You don't need administrator privilege. You just need to double click an exe.


This seems like a good idea, and I'm pretty excited to see this step. Though I suspect if certain apps are whitelisted to edit in those folders, ransomware will simply turn to finding exploits in those apps. And most of your document and photo editing apps out there may not have been designed with security in mind, as they never expected to be gatekeepers of file access.

This will also probably be a UAC-level nightmare for getting old software to work on newer PCs, as today's software generally just assumes it can have file access to document folders.


Many of the ideas seem good for a corporate/enterprise setup where you lock the system down to run a few business/tech apps, but not so pain-free for desktop users. I mean, nearly every app on my system needs access to the usual folders. Unless MS bundles a good whitelist of approved apps, granting permissions is going to get really annoying.


Akin to Windows SmartScreen and stuff, I expect Microsoft to offer the whitelist as a service. Obviously, they wouldn't want to cause extra headaches in getting Microsoft Office and the like to have access to your documents.


How about we just have "copy-on-write" filesystems by default?

Something which then tries to "encrypt" your hard drive merely winds up creating another layer on top which you wipe out to get back the original files. You only have to flip a "hardware switch" when your disk fills up or you get a catastrophe.

I cry every time I see something that IBM or DEC got right 40 years ago that we STILL haven't adopted.
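
The rollback story is almost trivially simple. A minimal copy-on-write layer sketch in Python:

    class CowLayer:
        # The base image is never touched; all writes land in an overlay.
        def __init__(self, base):
            self.base = base        # read-only mapping: lba -> data
            self.overlay = {}

        def read(self, lba):
            return self.overlay.get(lba, self.base[lba])

        def write(self, lba, data):
            self.overlay[lba] = data

        def rollback(self):
            self.overlay.clear()    # "encryption" of the disk evaporates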


Why was this not implemented widely? I mean not in source control systems like git or TFS, but built into OSes.


What are "end-to-end security features"? They mention it once but then never again.

As far as I know, the term end to end is about communications: an exchange between two or more parties, or endpoints, which can be encrypted "end to end". I'm afraid they just dropped it as another term nobody knows the meaning of, so we'll have to find a new term to describe why Signal and Wire are better than (non-PGP) email.


"end to end" has been a term in common use in language since the 1800s meaning complete coverage. look it up on the oxford english dictionary for more details.


Oh, right, objects can lie end to end and have nothing to do with encryption. I had never heard it in security context without meaning e2e encryption.


I'm skeptical. The cost of managing these permissions might outweigh the benefit. But hey, why not try it. As long as I can disable it when it ends up getting in my way...


Linux has had the same issue for the longest time: You need root or a capability to set the time, but any program you run can wipe your entire home directory.



Perhaps the place to implement countermeasures is in the disk drive (SSD these days)?

e.g. arrange for the drive to never delete anything unless some key exchange has recently been done, that depends on user input (bio parameters, or password).

From a user perspective you'd see this as:

All deletes (and file version changes) go to a recycle bin. Emptying the bin can only be done upon presentation of the secret.


Do you trust any of the SSD makers to implement proper and updated (obviously necessary) counter-measures against ransomware?

They can't even get encryption right.

https://motherboard.vice.com/en_us/article/mgbmma/some-popul...

https://www.theregister.co.uk/2015/10/20/western_digital_bad...


.. and overwrites? This screws up savegames, autosaves, swap, browser caches, and so on.

(note that the drive is the wrong place, it doesn't know what a "file" is)


I wonder if MS has given any thought to 'sealing' executable regions so no new instructions can leak into memory. IOW, once executed, a process can only reference instructions present in the binary itself. Basically, make running JIT-ed code, self-modifying code, etc. a special process privilege, which can then have a limited process context for I/O.
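
The primitive already exists at page granularity: once a region becomes executable, take away write permission and never give it back. A Linux-only ctypes sketch of that "seal" step (illustrative; real enforcement would be an OS policy, not app cooperation):

    import ctypes, mmap

    libc = ctypes.CDLL(None, use_errno=True)
    PROT_READ, PROT_EXEC = 0x1, 0x4  # Linux values

    page = mmap.mmap(-1, mmap.PAGESIZE, prot=mmap.PROT_READ | mmap.PROT_WRITE)
    page[:1] = b"\xc3"  # pretend this is freshly JIT-ed code (x86 'ret')
    addr = ctypes.addressof(ctypes.c_char.from_buffer(page))

    # Seal: readable + executable, writable never again.
    if libc.mprotect(ctypes.c_void_p(addr), mmap.PAGESIZE, PROT_READ | PROT_EXEC):
        raise OSError(ctypes.get_errno())
    # Any later attempt to write into `page` now faults instead of injecting code.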


Isn't that a subset of W^X / DEP https://support.microsoft.com/en-sg/help/875352/a-detailed-d... ?

Can be defeated by "return-oriented programming", which uses only the existing instructions in the binary and a modified stack.


Yeah, DEP + ASLR already addresses some of this. Perhaps the stack regions could partially be set to read-only to protect return addresses.


Control Flow Integrity is one technique for addressing these sorts of attacks:

https://www.microsoft.com/en-us/research/wp-content/uploads/...

IIRC there are even some experimental efforts to add hardware support for CFI techniques to new processors (Intel I think?) but there's work going on to add support for it to modern compilers, which would allow you to compile libc and other system libraries with it turned on.

EDIT: It appears Clang actually ships with some CFI support already: https://clang.llvm.org/docs/ControlFlowIntegrity.html


A lot of code in the Windows ecosystem is packed with UPX; the code extracts itself before executing the actual application. This is common for certain installers.

Windows does a pretty good job of enforcing Data Execution Prevention for code which opts in.


This seems like another strange workaround. We need to change the way the operating system behaves going forward. The problem is default-allow for untrusted code execution. Everyone recognises this as the problem, but no one wants to step forward and implement the change.

We do it for mobile, mostly, the desktop needs the same shift.


That basically means forcing everyone to sign their code and offer it through the App Store. You'll see developers complaining about that upthread.

Windows 10 does make code signing mandatory for new drivers, and the drivers must pass a suite of acceptance tests.


You're right, but people complaining shouldn't dictate life. People also complain about being crushed by ransomware. Not to say they don't have a valid point, but the paradigm needs to change.

We used trusted stores for certificates and mobile applications, it's time for the desktop to do the same beyond drivers.

Not to say things won't creep through, but default allow needs to go for this to be truly solved, not a new feature or vendor product.


"If an app attempts to make a change to these files, and the app is blacklisted by the feature, you’ll get a notification about the attempt,” Microsoft explains."

I don't understand. If they have a blacklist, why ask the user? Or is "blacklisted" used loosely here to include code flagged by heuristics?


Perhaps "brownlist" would be more appropriate


The filesystem itself is a risk: per-user default permissions mean any application launched by a user can trash all of that user's files, which is scary. Even applications being able to access other installed applications is dangerous. I hope the industry finds a way between all-closed (a la Apple) and all-open.


Or "Windows Will Protect Vulnerable Client Software With More Client Software".

Wouldn't it be much easier and more effective to offer a one-click, low-cost encrypted cloud backup service? They could bundle this with Update or Defender to offer point-in-time recovery.


macOS already does this.

System Integrity Protection.

https://support.apple.com/en-gb/HT204899

[edit] apologies, indeed, SIP only protects system files, which is not what this article is about.


This is about protecting user files and areas, not the system files. A user level ransomware can indeed encrypt all /home/$user contents on macOS just as easily as it can C:\Users\$user on Windows.


This seems like a rushed reaction to recent events - I think there will be problems as a result of the rushed implementation. I could only begin to imagine the embarrassment if this was the cause of the next zero day attack.


What?


The UI is not really explained. I hope this is not going to train more generations of Windows user to click "yes yes yes" in response to annoying dialogs.


.. what about the existing file versioning and backup tools?


Also, some crypto trojans delete or even encrypt your snapshots if you use the existing restore software from Windows.


Most consumers don't use them.


How often are browsers affected by 0-day exploits these days?

If they are not, wouldn't using web applications and keeping your system up to date solve the whole issue?


Countdown to malware using this feature to prevent removal


To me it seems like part of the definition of a zero-day exploit makes it impossible to stop.


Part of the definition of a zero-day exploit is that software providers constantly fix issues; otherwise a thousand-day exploit would be enough to compromise a system, the zero-day concept would never have been necessary, and they would just be called exploits.


At least, it requires someone to be able to act in mitigation. That might also be the user of the software (if they can patch it, find a workaround, have some other software validate inputs or detect attacks, etc.).



