> a purple opium-fueled Victorian horror novel that uses global recursive locks and SEH [Structured Exception Handling] for flow control.
Although after the post blew up, the developer walked their statements back a little, saying
> First, I want to clarify that much of what I wrote is tongue-in-cheek and over the top --- NTFS does use SEH internally, but the filesystem is very solid and well tested. The people who maintain it are some of the most talented and experienced I know. (Granted, I think they maintain ugly code, but ugly code can back good, reliable components, and ugliness is inherently subjective.)
Using SEH in kernel mode is pretty common, just like copy_to_user etc. in the Linux kernel. If a pointer comes from userspace and page-faults, you want to handle it and return failure to the caller.
Does that mean you could send someone a link, or take them to a webpage with a link, to a file:// killing string, and if they click it, their system grinds to a halt? Can you DoS a Windows box by triggering an antivirus to scan for that string? Does it impact Server?
The browser's same-origin policy would prevent you from accessing local resources from the internet. But this is still a pretty exploitable DoS.
One could craft a shortcut to "C:\$MFT\non-existing.exe" or a bogus "desktop.ini" inside some folder (on a network share?) and Explorer will crash the system while trying to fetch an icon. I got a lot of freezes and one BSOD somewhere in ntfs.sys on Windows 7/8.1:
Yes, I just confirmed this in a Win7 VM by opening an HTML file with an img src set that way. It seemed to take a moment for the box to crash, so perhaps if you close the window soon enough it might not happen.
> [...] the NTFS driver takes out a lock on the file and never releases it. Every subsequent operation sits around waiting for the lock to be released. Forever. This blocks any and all other attempts to access the file system, and so every program will start to hang, rendering the machine unusable until it is rebooted.
That delay will likely be how long it takes for the deadlocks to crash the system.
To clarify, did you try this with a remotely hosted .html file or one on the local hard drive? Browsers treat this case specially and allow more access to the local filesystem.
Putting an html file with an <img src="file:///..."> in it on a remote server should not trigger the vulnerability, if I understand correctly.
I almost guarantee the damage is done immediately once the browser tries to read that file. The reason it took a moment for your system to crash is that it took a moment for whatever process actually hung to try to read from disk.
Yes, and that's the trick. The bug happened when IE encountered an "input" element with a "type" attribute which was not followed by an equals sign. "<input type=crash>" wouldn't crash, while "<input type crash>" or "<input type abcdef>" would. Technical explanation: http://www.securityfocus.com/archive/1/319488/30/0/threaded
Browsers implement a same-origin policy to prevent JS from accessing the local filesystem. Or did I misunderstand the nature of the Windows bug? It must be trying to read from file://, right?
Resources/frames/XHRs/etc from 'file://' might be blocked, but what about top-level redirects?
At the very least, user-initiated top-level navigations should bypass any policies. If you're out to cause mischief, you could just link to the dodgy path on forums/comments/etc – there'll always be people out there who are careless and/or clueless enough to click on it.
Is this true even for evergreen browsers? Is this true for pages hosted on a non-localhost domain, or dragged and dropped into the browser from the file explorer (the file:// protocol)?
Someone please correct me if I'm wrong, but at least in my testing, IE blocks the file:// request for any non-intranet site. So it won't work with just any webpage.
I often wonder why these special filenames aren't more widely known. I've been using Windows for 25 years now, but first learned about them a couple of years back when I committed a perfectly sensible (or so I thought) directory of auxiliary files from a Debian box and named it "aux/".
Cue arriving back at work on Monday with the rest of my team kicking back waiting for IT to "fix Subversion"...
I knew of NUL, CON, COM#, PRN and LPT# restrictions but never heard of AUX... today I learned two more!
"The following reserved device names cannot be used as the name of a file: CON, PRN, AUX, CLOCK$, NUL, COM1, COM2, COM3, COM4, COM5, COM6, COM7, COM8, COM9, LPT1, LPT2, LPT3, LPT4, LPT5, LPT6, LPT7, LPT8, and LPT9. In addition, any combinations of these with extensions are not allowed."
Restrictions on the File Mask and File Name Properties
I only learned about these when I was trying to scrape information about airports and save each in their own folder named after their IATA code. I learned about NUL, PRN and CON because of Nulato Airport, Pristina International Airport and Concord Municipal Airport.
Such a weird bug; I couldn't figure out why I was getting a FileNotFoundError when trying to create a new file. Everything online tells you the directory you're placing it in doesn't exist (it did, of course), with no hint that you're trying to use a magic name.
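If you want to guard against this in your own tooling, a minimal check is easy to write. A sketch in Python, using the name list quoted above (an illustration, not an exhaustive Windows-compatibility test):

```python
# Legacy DOS device names reserved by Windows; the extension is ignored,
# so "CON.txt" or "PRN.log" are just as problematic as "CON" or "PRN".
RESERVED = {"CON", "PRN", "AUX", "NUL", "CLOCK$",
            *(f"COM{i}" for i in range(1, 10)),
            *(f"LPT{i}" for i in range(1, 10))}

def is_reserved_name(filename: str) -> bool:
    stem = filename.split(".")[0]  # "CON.txt" -> "CON"
    return stem.upper() in RESERVED

print(is_reserved_name("aux"))      # True  (the svn directory above)
print(is_reserved_name("PRN.log"))  # True  (extensions don't help)
print(is_reserved_name("config"))   # False
```

Note that only COM1 through COM9 are reserved, so a name like "COM10" passes the check.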
They did it wrong, then. IFF you have to map stuff to files, they should all be located under /dev/ and not confuse the system when someone happens to use those strings for their own file names in their own places.
Agree but these came from DOS days, and were just always brought along for compatibility. I think it could be argued that there's such a thing as too much attention to compatibility...
Edit: MOST of the no-no file names come from DOS, though not this $MFT one, which is NTFS-related and so must have come later!
$MFT et al. are not special file names. They belong to a set of files (they are well-formed files) that make up the NTFS volume (they live in the volume root); I think the design is a bit poetic: the file system is made up of files contained in itself. While they are relatively regular files, the FS of course needs a way to bootstrap itself. To do that, the $MFT location is stored in the NTFS header. The other NTFS-internal files are found by lookups through the MFT; these files are always in the first few KB of the MFT.
>To do that, the $MFT location is in the NTFS header.
Actually, the "NTFS header" itself (the first 8,192 bytes of the filesystem) is the $Boot file, which is indexed in the $MFT as residing on the first cluster of the filesystem.
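To make the bootstrap concrete, here is a rough Python sketch of how the $MFT location is derived from the boot sector, using the published NTFS boot-sector offsets (bytes-per-sector at 0x0B, sectors-per-cluster at 0x0D, MFT starting cluster at 0x30). The sample boot sector below is synthetic, not read from a real volume:

```python
import struct

def mft_offset(boot_sector: bytes) -> int:
    """Byte offset of $MFT, computed from an NTFS boot sector."""
    bytes_per_sector, = struct.unpack_from("<H", boot_sector, 0x0B)
    sectors_per_cluster = boot_sector[0x0D]
    mft_lcn, = struct.unpack_from("<Q", boot_sector, 0x30)  # logical cluster number
    return mft_lcn * sectors_per_cluster * bytes_per_sector

# Synthetic example: 512-byte sectors, 8 sectors/cluster, $MFT at cluster 4.
bs = bytearray(512)
struct.pack_into("<H", bs, 0x0B, 512)
bs[0x0D] = 8
struct.pack_into("<Q", bs, 0x30, 4)
print(mft_offset(bytes(bs)))  # 16384
```

Everything after that one hard-coded location is found by reading the MFT itself, which is what makes the "filesystem made of files" design work.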
It's poetic, yes, but why? As you say, you already need lots of special handling to be able to bootstrap. On the other hand, treating the files as actual files causes bugs like this one. So on what occasion would $MFT being a file (and not some special-purpose data structure) actually be useful?
If I had to guess (and this guess may just be one of a list of reasons), it might just be to ensure the blocks that make up the MFT are correctly marked/treated as 'used' and that the system knows which blocks the MFT is using.
I don't know tons about NTFS, but most/all file-systems have a way of marking which blocks in the system are currently in use, and which are free. This is usually some type of bitmap, and a quick google suggests that NTFS uses this approach as well. In the Ext file-systems, the bitmaps and metadata are a constant size and allocated when you create the file-system, so when you use the file-system you already know which blocks are being used by these structures, and thus utilities like a fsck can check that those blocks are correctly marked as used because it already knows which blocks should be marked.
In contrast, it doesn't appear that the MFT is a constant size, meaning that when you add files you may need to increase the size of the MFT. But obviously, since it isn't preallocated beforehand, you can't guarantee there is always contiguous space to add to the MFT. This means you have to track which non-contiguous blocks are being used by the MFT - and that's what file entries do in the first place, track blocks being used by some entity. So you make a file entry representing the MFT, and then that file entry tells you which blocks make up the MFT, ensuring that any fsck or similar utility knows which blocks are in use by the MFT. If you didn't do it this way, you'd basically have to make a 'fake' file for the MFT in NTFS's header/superblock, and all utilities would have to parse that separately to get an accurate list of currently-in-use blocks - which would basically just be a duplication of the file-system structure already there. But of course, you would also avoid this $MFT bug, so it is what it is.
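The bitmap idea above fits in a few lines. This is a toy illustration of one-bit-per-cluster tracking, not NTFS's actual $Bitmap on-disk format:

```python
# Toy free-space bitmap of the kind most filesystems use (NTFS keeps its
# version in the $Bitmap metadata file): one bit per cluster, set = in use.
class ClusterBitmap:
    def __init__(self, n_clusters: int):
        self.bits = bytearray((n_clusters + 7) // 8)

    def mark_used(self, cluster: int) -> None:
        self.bits[cluster // 8] |= 1 << (cluster % 8)

    def is_used(self, cluster: int) -> bool:
        return bool(self.bits[cluster // 8] & (1 << (cluster % 8)))

bm = ClusterBitmap(64)
for c in (0, 1, 2, 5):       # e.g. non-contiguous extents holding the MFT
    bm.mark_used(c)
print(bm.is_used(2), bm.is_used(3))  # True False
```

The point of giving the MFT a file entry is that its extents get marked in this bitmap through exactly the same code path as every other file's.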
Perhaps what would have made sense was two "roots" to the file-system - the standard root, and a "NTFS hidden root". The $MFT and other various special files would be placed inside the hidden root (And some way of accessing that hidden root would be provided, in theory this shouldn't be extremely complicated and programs just reading the disk image could just parse both roots the same way, requiring basically no extra code), and the regular files go in the standard root. That likely wouldn't require much actual changing to the underlying structure (Obviously it would break stuff now, but when NTFS was created it likely wouldn't have been that big of a change) and would have (in theory) prevented these types of bugs while still retaining the advantages they got from making those things represented by files.
Yes, I managed the same thing when I wrote a code generator that produced Java class files based on database contents and one of them ended up being named Con.java.
For Insider builds it is a Green Screen of Death, to make it obvious it was an Insider build and not a production build.
Certain kinds of boot errors get a Red Screen of Death.
Since Windows 8, I think the Blue Screen of Death is starting to get just as much momentum behind it being called the Frowny Face of Death. I overheard one person even say, "My machine frowny faces a lot these days."
It was back in 1995. I'd just loaded Windows NT 3.51 on my computer. Not long after, I got my first BSoD. What to do?
So of course I gathered as much info as I could, and I promptly reported it to Microsoft. (That just shows you how naive I was at the time.) I think someone even contacted me; not that they ever seriously followed up.
True story.
But there's a larger point, which is that if, back in 1995, Microsoft actually made a serious effort to debug and fix these problems, we'd all be better off today.
Sorta like the quote, from memory: "If Bill Gates had a nickel for every time Windows crashed ... wait, he does!".
Windows 7 and 8.1 users were provided the opportunity to upgrade to Windows 10. Per the article, it does not have the bug. Many people who are running the old versions of Windows are doing so by choice.
I'm intentionally still on 8.1 due to the telemetry/tracking/ads/etc in 10. Slowly converting our stack at work off all MS so that my next OS doesn't have to be Windows.
As I said for many people it is a deliberate decision not to use Windows 10. Like any technical decision, it involves tradeoffs. The same was true when Windows 8 came out. And the popular media was filled with stories about how bad it was and how great Windows 7 was.
That's what I was prepared for when I installed Windows 8, and when I started using it I thought there was no way I would be productive. Then, after about the first two hours, I decided to try the tutorials, and within three hours of finishing the install over Windows 7 I was fully productive. All the fuss about the missing Start menu turned out to be about not knowing you could press the Start key on the keyboard.
With Windows 10, I use |Settings| to set what I want and don't have much trouble and when I do, I just turn it off. Compared to my smartphone, it's a relatively high level of privacy. Though none of it is really private since someone resolves my DNS requests and my ISP routes packets before they get out of the building...not dissimilar to my wireless provider's access. And when I take my smartphone out and about, all kinds of beacons can ping it and ID it and most phones default to automatically connecting to whatever network there is.
I did not mean to imply it was a foolish choice. Rather, I meant that it is often a deliberate choice that includes technical tradeoffs and this is one of them. The fact that 7 and 8 are still supported makes Windows more stable than Android or iOS. My intuition is that Windows 10 collects less telemetry than either of those and that what telemetry it collects that cannot be turned off is shared with fewer parties than a mobile device.
I find it odd to think a web browser displaying a page from $some_remote_url would happily try to load an image from the local machine. Never mind the NTFS bug; this is one of those cases where the browser is out of bounds, IMHO. The only time it should have access to the local file system is when the user is explicitly doing something like selecting a file to upload somewhere, or saving a downloaded file. I suppose if you're reading a locally stored .html file, it should be able to grab other things like images. The ability to exploit this seems like laziness on the part of browsers: they needed local file access for legitimate reasons and just opened it up.
The whole cross-origin model in browsers, like it or not, allows something like this. It's hard to fix. Chrome already aggressively restricted permissions for file:// in a way that broke existing apps because they wanted to limit the risk of attacks against the local filesystem.
IIRC there have been file://-related vulnerabilities in webapps like pdf.js, too.
I don't know if you were around when the web started, but I was. The web was purely a viewing experience, and it gave me pause the first time I was asked to select a local file to "upload". I thought hmmm, when did they poke this hole? Of course for all I know it was a feature from the start but hadn't been used until then, but the concern is still valid. Had the original browser not allowed cross-site resource loading, perhaps other solutions would have been found to common problems (mostly related to advertising).
How do you define local precisely? Is it 127.0.0.1, 127.0.0.2, localhost, 192.168.0.2 or an IPv6 address in ::1/128? Browsers allow a Web page to load images and code from other domains for obvious reasons. Making exceptions to that rule for the local machine would break some legitimate software and be difficult to implement correctly.
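To illustrate how slippery "local" is, here is a naive Python sketch using the stdlib ipaddress module. Real browsers do not use a rule this simple, and a bare hostname would need DNS resolution before any such check could run:

```python
import ipaddress

def looks_local(host: str) -> bool:
    """A naive notion of 'local': loopback or private (RFC 1918 / ULA)
    addresses. Illustrative only; not an actual browser policy."""
    if host == "localhost":
        return True
    try:
        ip = ipaddress.ip_address(host)
    except ValueError:
        return False  # a hostname; would need DNS resolution first
    return ip.is_loopback or ip.is_private

for h in ("127.0.0.2", "::1", "192.168.0.2", "8.8.8.8", "example.com"):
    print(h, looks_local(h))
```

Even this toy version shows the problem: "example.com" could resolve to 127.0.0.1, and a private address on someone else's LAN is not "local" to you at all.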
>> Microsoft has been informed, but at the time of publication has not told us when or if the problem will be patched.
Doesn't a bug like this one deserve responsible disclosure and a wait for a patch to be available? The report doesn't state when Microsoft was informed, but given the severity of this issue and the fact that they haven't heard back, I suspect it wasn't too long ago.
It's a minor nuisance. It requires people to click on a local file. If a criminal can get a user to do that, he will not waste that opportunity on crashing the desktop.
>As was the case nearly 20 years ago, webpages that use the bad filename in, for example, an image source will provoke the bug and make the machine stop responding. Depending on what the machine is doing concurrently, it will sometimes blue screen. Either way, you're going to need to reboot it to recover. Some browsers will block attempts to access these local resources, but Internet Explorer, for example, will merrily try to access the bad file.
Can confirm it for my Win7 installation. Open cmd, then cd c:\$MFT, and your system freezes up. Ctrl-Alt-Del doesn't help, but you can still open one (completely useless) Explorer window. I didn't get a bluescreen. It's weird.
Update: A hard reset helped and everything is fine again.
I wonder how that actually works; it would be interesting to find out. The site reports a 'possible' blue screen. Does this mean there's a mechanism that watches for the file system (or whatever) to lock up and, if that happens, reports a stop error? Or does the error rather occur because some critical component locks up and doesn't like that? Or does the blue screen not occur at all for this particular bug, and was it just added to the article?
Based on the error on the blue screen (KERNEL_DATA_INPAGE_ERROR), I'm guessing the blue screen is from a failed paging operation. Which, of course, would have failed due to the file system being deadlocked. Note that the filesystem is still available, so I'm not sure how a monitor would help here. It didn't crash or anything.
EDIT: Specifically, it looks like it's actual kernel memory that fails to load from a page file that causes that specific error.
Speculating here: my system had already loaded everything, so I didn't get a bluescreen. Because everything ground to a halt waiting for the filesystem, it didn't proceed far enough to encounter something really fatal.
Attempts to open the file are normally blocked, but in a move reminiscent of the Windows 9x flaw, if the filename is used as if it were a directory name—for example, trying to open the file c:\$MFT\123—then the NTFS driver takes out a lock on the file and never releases it.
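The failure mode is easy to model in miniature. This toy Python sketch (pure illustration; the real bug lives in ntfs.sys, not user code) shows why every subsequent operation hangs once a lock is taken and never released:

```python
import threading

# One code path acquires the lock and forgets to release it...
fs_lock = threading.Lock()

def buggy_open():
    fs_lock.acquire()        # error path returns without releasing

def any_other_fs_operation(timeout: float) -> bool:
    # ...so every later operation blocks. We use a timeout here only so
    # the demo terminates; the real driver waits forever.
    if fs_lock.acquire(timeout=timeout):
        fs_lock.release()
        return True
    return False

buggy_open()
print(any_other_fs_operation(0.1))  # False: the lock is held forever
```

With no timeout, the second caller would simply never return, which is exactly the "every program starts to hang" behavior described in the article.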
So if I set someone's desktop background, or $path, to the relevant path ...?
Or share a soft link on Dropbox, or include the file in a zip for someone to unzip?
Also, people are saying "this bug doesn't work in the Chrome browser", but surely more interesting is whether it works in Outlook Express, given the install base. Like, can we perma-crash OE by sending an email with a file:///$MFT\crashme.jpg image link?
I mean, the sheer stupidity of such a bug isn't worth a serious comment. It's just ridiculous that hundreds of millions of computers can be disabled by a freaking file path embeddable as an image source in any web page.
Screw the karma, I'm still standing by the parent comment.
http://blog.zorinaq.com/i-contribute-to-the-windows-kernel-w...