Encrypting Windows Hard Drives (schneier.com)
86 points by jron on June 15, 2015 | 94 comments



I think this is worth repeating:

"I asked Microsoft if the company would be able to comply with unlocking a BitLocker disk, given a legitimate legal request to do so. The spokesperson told me they could not answer that question." - https://firstlook.org/theintercept/2015/06/04/microsoft-disk...


It doesn't mean anything, because it's a standard move to reduce legal risk. "No comment" is almost always the safest answer. This applies to individuals too:

https://www.youtube.com/watch?v=6wXkI4t7nuc


It is a move that reduces trust, in a market where trust is the single most important aspect. Schneier's article is all about whom he trusts, why he trusts them, and the conclusions he draws based on that trust.

If you bought a security product and its developer described its strength as "No comment", would you trust it? Personally I am sticking with the abandoned TrueCrypt until a successful fork has been created or LUKS + dm-crypt has been ported to Windows.


If you were right, then the most trustworthy companies would have the most market share while offering us good EULAs. Looking at the top software names, it's clear that trust and success in the marketplace have almost nothing to do with each other. It's actually the opposite, with the dirtiest companies on top in most places. Negative media definitely hurts the bottom line, but the majority of the time it isn't an issue. They have PR people for that.

There have always been companies that share source with customers, use stronger security tech, warranty their code, and so on. They were minority players pre-Snowden, with many (myself included) having left that market because so few cared. Post-Snowden, they're still minority players, with the market mostly going to whoever promises the most on their web sites and in the media. Trust and security have always come 2nd (5th?) to all kinds of other criteria for buyers in the IT market. If you doubt that, look at the number of Facebook or Gmail users vs the number using more private alternatives. People and companies sell themselves out in droves.


It's a pathetic stance.


Personally I don't believe there's a backdoor in the technology, but I think they can (and probably will) comply if you have a backup key stored in the cloud, which Windows 8 consumer versions do by default (https://onedrive.live.com/recoverykey). That would explain the evasive answer.

There were earlier stories from a developer who worked on Bitlocker indicating the FBI did want a backdoor at the time but ultimately settled for this.

You can avoid sending the backup key to the cloud, but I'd advise keeping a backup of this key somewhere: I have had to use a backup key on several occasions after a bad reboot.


Bitlocker (well, "Device Encryption") does upload your hard disk keys to OneDrive by default, and OneDrive is onboarded to PRISM for government requests.

So in the case that you end up provisioning a computer or device with Bitlocker, the key may very well end up in a database for query.

Outside of this it's not really so speculative to think that Bitlocker has backdoors for gov't access. It's unlikely that Microsoft Bitlocker survived the combined forces of state-of-the-art cryptanalysis, legal compulsion, and company infiltration (exposed by Snowden).

A backdoor for disk encryption need not directly attack the cryptography. It could be something as simple as a means to generate a bunch of predictable blocks on the hard drive - that's enough to break XTS. That is, even if there are no software backdoors, no backdoors built into the TPM (Lenovo, for example, has 'key escrow' capabilities to extract Bitlocker keys out of TPMs), and no crypto backdoors in HW PRNGs (e.g. Intel RDRAND), there are software bugs in other places that could reveal the contents of the hard disk.

So it's simply not a threat model you're ever going to find a solution for. In the very worst case, presuming there were some mystical level of hard disk encryption that wasn't trivial to backdoor or break by a sophisticated adversary - intelligence folks can use TEMPEST attacks, break into your computer when you turn it on, and/or get rubber-hose access. An encrypted disk will not stop Mossad.

There is no disk encryption that will unilaterally prevent the USG from accessing your files (you can only make it more expensive).

But as the USG is fond of repeating - you don't need your disk encryption to protect you from the government unless you have something to hide. You only need it to prevent attacks from criminals and for device theft.


Can you explain more carefully the XTS attack you're contemplating here?

The Device Encryption recovery key feature was discussed at length here: https://news.ycombinator.com/item?id=8546524

Certainly, people who are concerned about security should disable/avoid it.


Sure. XTS reuses the sector number as an IV, and so at a per-block level encryption is deterministic. The flaw here would be a scenario where there are deliberate repeated modifications to blocks of the hard drive (somewhere like within hibernation/wake code). Something carefully designed could, in theory, lead to compromise at a block level that would allow tweaking of some contents on the disk (say, contents of the registry, or of some boot switches, etc) that would in turn enable the machine to be booted and the disk to be decrypted.
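
To see the determinism concretely, here's a minimal sketch using the pyca/cryptography library's AES-XTS mode (my own illustration of the property, not Bitlocker's code):

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(64)                   # AES-256-XTS takes two 256-bit keys
    tweak = (42).to_bytes(16, "little")    # the sector number acts as the tweak

    def encrypt_sector(plaintext: bytes) -> bytes:
        enc = Cipher(algorithms.AES(key), modes.XTS(tweak)).encryptor()
        return enc.update(plaintext) + enc.finalize()

    sector = b"A" * 512
    # Identical data written to the same sector encrypts identically, so an
    # observer imaging the disk over time can tell when contents repeat.
    assert encrypt_sector(sector) == encrypt_sector(sector)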

It's also true that the TPM protector for Bitlocker uses a PIN with at most ~20 bits of entropy to shield the Bitlocker key from exfiltration. The TPM is supposed to lock out repeated requests to extract the key, but given the heavy involvement of the NSA in designing the TPM spec, its similarity to the Clipper Chip in function (so too with Apple's "Secure Enclave"), and its difficulty to audit (as if mom-and-pop consumers really need protection from adversaries who are going to reverse TPM chips to get computer data), one can't help but acknowledge that a backdoor could easily exist there.
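
Rough numbers on that ~20-bit figure (my own arithmetic, with an assumed guess rate): the lockout is the only thing standing between that keyspace and a trivial brute force.

    # ~20 bits of PIN entropy is about a million possibilities. Without the
    # TPM's anti-hammering lockout, exhaustion is a matter of hours; the 10
    # guesses/second rate here is an illustrative assumption, not a spec value.
    pins = 2**20                        # 1,048,576 possible PINs
    rate = 10                           # guesses per second (assumed)
    print(pins / rate / 3600, "hours")  # ~29 hours to try them all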

All of this is hypothetical of course. I don't claim to know that this sort of attack is there or that it is placed deliberately. I'm merely trying to make the point that a backdoor need not be in the harddisk encryption code itself.


I'm still not clear. XTS doesn't have an IV; it has a tweak key. It's deterministic (which also means it can leak data, ECB-style, across successive encryptions of the same sector). The tweak is simply the encrypted sector number multiplied by a polynomial; I don't understand what tricks you can play with it.
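
For reference, a sketch of the XTS tweak schedule as I read IEEE 1619 (an illustration, not any product's code):

    # tweak_0 = AES_encrypt(key2, sector_number), then for each block j in
    # the sector:
    #   C_j         = AES_encrypt(key1, P_j XOR tweak_j) XOR tweak_j
    #   tweak_{j+1} = mul_alpha(tweak_j)
    def mul_alpha(tweak: bytes) -> bytes:
        """Multiply a 128-bit tweak by alpha (x) in GF(2^128), little-endian."""
        t = int.from_bytes(tweak, "little")
        carry = t >> 127
        t = (t << 1) & ((1 << 128) - 1)
        if carry:
            t ^= 0x87                   # reduce by the GF(2^128) polynomial
        return t.to_bytes(16, "little")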

The fundamental attack you seem to be describing --- that an attacker can rewrite the contents of the disk --- applies to practically every sector-level cryptosystem; none of them are authenticated. But XTS can't productively use "patterns" of disk blocks, unless there's a clever attack I don't know about (hence me asking).


No, we're on the same page. I don't have an attack on XTS you are not aware of.

Code (again, pretend it's in hibernate/wake behavior) that deliberately leaks information about important boot blocks on the disk could lead to a full compromise. The attack and backdoor, of course, would be fairly sophisticated in this instance.

You'll concede that the possibility exists, but that it's hard to evaluate. What I'm trying to contribute is not that such a backdoor is currently used (I have no idea) - but that an RDRAND/TPM/XTS-leak or other non-Bitlocker backdoor could be used to thwart Bitlocker if we imagine that it hasn't been directly backdoored.


I'm honestly still not following the XTS leak you're talking about. The deterministic leak I'm talking about happens because XTS isn't randomized: as you write and rewrite the same sector, you're doing so against the same set of "tweaks". I think that's a meaningful problem --- particularly if you're using XTS to encrypt something other than a disk, particularly in a cloud setting --- but as a backdoor, it's a pretty crappy one, because it requires your computer not only to be on and "unlocked", but also for you to continuously update the targeted blocks.

It's an especially dumb backdoor for Microsoft, who could literally backdoor the Comic Sans TTFs to greater effect. Why would they tamper with the most sensitive code on the entire system, to get a fifth-rate backdoor, when they could get a first-rate remote backdoor out of virtually any code on the whole system?

Subtextually: I just generally dislike FDE, as anything more than a "my computer got stolen out of the back of my car" mitigation.

If you are worried about attackers with continuous high-touch access to your computer, no FDE system helps you. Reliance on FDE more or less made the DOJ's case against Ross Ulbricht, for instance.


> as you write and rewrite the same sector, you're doing so against the same set of "tweaks".

> but as a backdoor, it's a pretty crappy one, because it requires your computer not only to be on and "unlocked", but also for you to continuously update the targeted blocks.

The hypothetical backdoor in this example would be in boot/hibernate/wake/suspend/whatever code - i.e. not "on and 'unlocked'". That's the entire point of this backdoor. If the computer needs to access encrypted parts of the disk during boot - even if encryption is done by an eDrive rather than in software - code that causes repeated writes (or even a single write) to an important sector could be designed to subvert FDE.

But it doesn't really matter about this particular hypothetical. We both agree that there is plenty of room to backdoor Bitlocker without having to rely specifically on bitlocker code.


> Reliance on FDE more or less made the DOJ's case against Ross Ulbricht, for instance.

I was under the impression that the FDE used by Ulbricht complicated things a lot and took special measures to get around, that they were purely lucky to even be able to do so, and that he compromised sensible operational security in letting them do so.

The story goes that they got him in a public place and distracted him momentarily, while someone else swiped the notebook off his desk, making sure not to close the lid, and kept it from auto-locking by continuously remaining active on it.


It's a little disturbing to see Schneier recommending a disk encryption package that offers to encrypt drives using CAST, GOST, and Blowfish.


He's currently recommending, for Windows users, either Bitlocker (256-bit AES) or BestCrypt (256-bit AES, RC6, Serpent, or Twofish). Not whatever link in the article you found those in. Unless I overlooked them in Bitlocker or BestCrypt's spec pages...

About those, though: CAST-128 isn't trustworthy (chosen-plaintext attack), GOST is probably there for the Russian market, and Blowfish is fine given all the beatings it survived (a good sign of security). I still use Blowfish and even IDEA in my polymorphic ciphers that semi-randomize a combination of strong ciphers along with counters.


Blowfish and IDEA are not fine. They're block ciphers with 8-byte blocks. They are both materially less secure than AES. Recommending them is borderline malpractice.


You have an attack on them? One that works with a Top 500 supercomputer? With an IBM Blue Gene? With an FPGA? With Intel hardware most black hats have?

IDEA particularly was a thorn in the NSA's side via the most famous app using it. Academics gave it plenty of beatdowns. It's still secure in practice. So is Blowfish, if you work around the weak keys and rotate frequently. My motto in high security is "tried and true beats novel and new." A new thing shows up with new methods, new proofs, and a few years later, new attacks. I'll take the thing that works despite tons of effort to make it not work.

That said, remember that a cascade is my use case. AES finalists received enormous peer review. At least one of them is always used. The weaker ciphers are in the middle. Then there are stream ciphers such as Salsa20. Ciphers that are the same can't be used in a row (meet-in-the-middle). No pro looking at the construction has identified a realistic risk in years, as far as the crypto goes. Protocol implementation, RNGs, parsers, etc were the usual worry areas.
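
To show the shape of the idea, here's a bare two-layer sketch with independent keys (NOT my actual polymorphic construction, which semi-randomizes the cipher selection):

    # Two-layer cascade: AES-256-CTR, then ChaCha20, with independent keys
    # and nonces. Breaking either layer alone leaves the other intact.
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    k1, n1 = os.urandom(32), os.urandom(16)     # layer 1: AES-256-CTR
    k2, n2 = os.urandom(32), os.urandom(16)     # layer 2: ChaCha20

    def cascade_encrypt(plaintext: bytes) -> bytes:
        inner = Cipher(algorithms.AES(k1), modes.CTR(n1)).encryptor()
        middle = inner.update(plaintext) + inner.finalize()
        outer = Cipher(algorithms.ChaCha20(k2, n2), mode=None).encryptor()
        return outer.update(middle)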

So, I call FUD on this stuff. It's one of the things cryptographers always bring up without evidence of any practical risk. I'm confident that, accounting for weaknesses, certain algorithms are strong after still being secure for 10+ years of significant use and cryptanalysis. That's a good track record without a break in INFOSEC, right?


Yes, there are real attacks on constructions built from 8-byte blocks. And no, that's not how real-world crypto attacks work; we don't use supercomputers to mount brute-force searches.†

A 2015 recommendation for Blowfish is cryptographic malpractice.

† unless you're crazy enough to prefer DH to ECDH.


I believe it's possible. It's why I only use it in cascades. Yet where's the practical attack where IDEA and Blowfish traffic are decrypted from ciphertext? If it's malpractice, then they're either fully decryptable or will be soon, based on existing attacks on specific numbers of rounds or implementations.

I would appreciate you citing evidence of people decrypting files protected by Blowfish or IDEA by breaking the cipher.


"Pizza? Now that's that I call a taco!"

https://vimeo.com/90127834


Exactly. Nada again.


What is it about a smaller block size that makes it less secure? Wikipedia mentions that it's recommended not to encrypt files larger than 4GB because of the block size but it never specifies why.


A few that jump immediately to mind:

* Constructions like CTR need to divide the block, in CTR's case to hold both a counter and a nonce. The convention is to use half the block for the nonce and half for the counter, which, in an 8-byte block, means you're working with a 32-bit nonce (much too small) and a 32-bit counter (rough numbers sketched after this list). We actually broke an embedded system that, because of protocol headroom issues, used small counters for CTR mode.

* (Related) That issue more or less rules out all the modern AEAD modes, all of which are specified on 16+-byte blocks.

* The smaller block size makes malleability attacks easier. For instance, when you're bitflipping CBC, the edits scramble a block and alter the next. With an 8 byte block, you have much more control over what gets scrambled and what gets edited.
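
Some back-of-the-envelope numbers behind the first point, and behind Wikipedia's 4GB warning (my arithmetic, with the assumptions stated in the comments):

    # With a 64-bit block, a conventional CTR split gives a 32-bit nonce and a
    # 32-bit counter, and the birthday bound on 64-bit blocks makes collisions
    # non-negligible after only a few GB under one key.
    import math

    BLOCK_BITS = 64
    n = (4 * 2**30) // 8                # a 4 GiB file is 2**29 8-byte blocks

    def collision_probability(blocks: int) -> float:
        """Approximate birthday bound: P(some two blocks collide)."""
        return 1 - math.exp(-blocks * (blocks - 1) / 2**(BLOCK_BITS + 1))

    print(f"{collision_probability(n):.2%}")   # ~0.78%, and it grows fast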


Is this a warrant canary? /s

What do you recommend for Windows users?


Bitlocker, and then authenticated encrypted archives (for instance, PGP'd ZIP files) for anything sensitive, including your mail spool.


Except everybody's trying to run him out of town on a rail for even suggesting Microsoft (!!!!!) Bitlocker


Since the death of TrueCrypt, I've been using VeraCrypt (https://veracrypt.codeplex.com/).

It's cross-platform FOSS instead of the "Hey, buy now!" BestCrypt that this article is pushing.


VeraCrypt looks good, but incompatibility with TrueCrypt volumes makes me uncomfortable with switching. I've also looked at CipherShed and DiskCryptor, but the fragmentation gives me no assurance that I'll be able to access my encrypted volumes several years from now.

So I'm still stuck with TrueCrypt 7.1a. After all, it's the only disk encryption software for Windows that has been independently audited. None of the purported replacements and proprietary alternatives can lay claim to that distinction, no matter how much Bruce Schneier might personally trust the developers.


VeraCrypt now has a TrueCrypt mode, which can read TrueCrypt volumes. I believe it can also convert them to VeraCrypt ones by changing the algo and then unchecking TrueCrypt mode.


I found out about DiskCryptor through the ReactOS driver signing page: https://reactos.org/wiki/Driver_Signing They say that they review the code they offer signatures for. Is or isn't that enough for an "independently audited" label?


Review != security audit.


BestCrypt sounds like it's cross-platform, at least.


VeraCrypt https://veracrypt.codeplex.com/wikipage?title=Downloads seems to have Linux and OSX as well as Windows - or is there something else going on?


"BIOS mode only, UEFI/GPT not supported"

It's getting increasingly difficult to buy a Windows computer that's not UEFI/GPT, so no OS level encryption.


Their store looks like they only have volume encryption on Windows, and container encryption for Mac, Linux and Windows, unless I'm mistaken.

https://www.jetico.com/online-shop/shop/index/all-products


A-ha, I missed that distinction.


One problem I have with BitLocker is that it's only supported on Ultimate/Enterprise (on 7) and Professional and up (on 8)

I guess one could argue about not having those editions in a business setting, but the vast majority of pre-installed Windows on the market is Home Premium, and I can't think of enough justifications (especially in small businesses) for the higher editions. Besides, many people in a home setting would want this extra protection for their computers (after all, they do banking, taxes, etc). It seems like non-Pro 8.1 does BitLocker for system drives, but that also comes with a bunch of "only ifs" (InstantGo, SSD, non-removable RAM, TPM, etc).

As someone else mentioned here, it seems like the choices are starting to narrow, since only fairly limited solutions can support UEFI/GPT, too...


Windows 8.1 and above now have a type of "poor man's Bitlocker" simply called Device Encryption. It works on non-Pro/non-Enterprise systems.

It requires a TPM, UEFI, and a Microsoft Account. But once you meet the requirements, it gives you a "basic" level of encryption which is hard for a petty criminal to break. Most Surface Pro 3s will have this enabled already.

http://www.howtogeek.com/173592/windows-8.1-will-start-encry...

Legit Bitlocker is superior in many ways (in particular not having to store a backup key in a Microsoft Account, and having more choices about how to decrypt). But for consumers it is a very welcome addition.


> It requires a TPM, UEFI, and a Microsoft Account. But once you meet the requirements, it gives you a "basic" level of encryption which is hard for a petty criminal to break. Most Surface Pro 3s will have this enabled already.

Are you sure this is accurate? I'm using a Surface Pro (comes with Windows Professional), and when I go to the BitLocker settings (in Control Panel) it's shown as enabled. I haven't changed from whatever the default settings are.


I think you missed reading part of the comment that implied it worked on Professional:

> It works on non-Pro/non-Enterprise systems.


Right, I'm wondering about the assertion that the SP3 came with a "poor man's BitLocker called Drive Encryption" enabled. The BitLocker preferences tell me I have BitLocker enabled.


If it ever turns out that Microsoft is willing to include a backdoor in a major feature of Windows, then we have much bigger problems than the choice of disk encryption software anyway.

That might be so, but proper encryption is still valuable. Say you have a disk full of sensitive information, and your computer was turned off when the adversary got hold of it. If you have a proper encryption program, no OS backdoor will be able to decrypt it retroactively (that is, when the backdoor is activated after the bust). Broken encryption makes you vulnerable even when you're offline or the PC is turned off.


Way to miss the point.

If you run Windows, Microsoft has complete control of your computer. Unless you never turn it on, MS can log all the keys you press, all the data on the disk, all the network traffic, or really anything else they want at will.

If you trust them not to do the above, why wouldn't you trust them to encrypt your disk too? (Unless you don't trust their competence. But then, you are trusting them to secure your computer while it's on, but not when it's off?)


> Microsoft has complete control of your computer.

So does any Linux distro if you don't compile from source yourself. Who knows, maybe the Debian OpenSSL fiasco was an inside job.

> why wouldn't you trust them to encrypt your disk too?

I trust that they wouldn't actively spy on their customers, because that's a good way to kill your company. I don't trust that they tried their utmost to secure our computers (we'd be running Singularity/Midori/Verve otherwise), because they are likely being coerced to comply with various orders and regulations.


> So does any Linux distro if you don't compile from source yourself.

Yes, that's a good start. You just didn't go far enough.

Endpoint security is a messy problem, where everything is expressed in shades of gray. Yet, spying on your computer when it's powered, and backdooring your disk encryption for when it's off do not differ in luminosity enough to be called different colors.

By the way, yes, Microsoft was already caught spying on their users. More than once.


> You just didn't go far enough.

99.9999999% don't go far enough. Evidently everyone using Debian for those couple of years didn't go far enough. Not even two whole bytes of entropy is insane.

> By the way, yes, Microsoft was already caught spying on their users.

Allegedly, surely. I've heard a few incidents attributed to them (Flame, etc), but I've not seen it confirmed that intentional action was taken. Care to provide the links?


> I trust that they wouldn't actively spy on their customers, because that's a good way to kill your company.

Many, many companies actively spy on their customers, including some of the most successful companies in the world.


Like Canonical....


Since he knows Niels Ferguson and understands cryptography, why doesn't Bruce get some proper analysis or statement regarding the damage of removing the diffuser? Seems like that's one obviously big elephant in the room here.


What analysis are you looking for? The purpose of the "diffuser" is well-understood, as are the security implications of losing it. This comes up on HN about once every other month, on threads you've been a part of. What part of the explanation you've gotten here seemed inconclusive?


Bruce cites removal of the diffuser as a specific reason he no longer thinks Bitlocker is OK. If that's really what he thinks, why doesn't he say exactly why? There are practical attacks on Bitlocker without the diffuser, but Ferguson stated he didn't believe there would be with the diffuser (in the original Bitlocker paper, I think).

Your explanation, that the diffuser doesn't make it Hard, is understandable and it's not something I'd worry about. And the way you phrase it, it seems unarguable and straightforward that it isn't a big deal, that no one should ever rely on FDE for integrity. But Bruce understands a lot more than me, and he's citing this as a big deal. And MS/Ferguson originally made a big deal out of it. So was it always just a useless add-on? Wouldn't someone have published a paper showing how Elephant doesn't really help? Certainly a practical attack on it would be notable?

I hope you can appreciate my confusion.


We don't have to read tea leaves about what Schneier does or doesn't think, because the engineering issue here is straightforward. Ferguson wrote a paper about the CBC+diffuser construction and what its goals were. On a previous thread, 'pbsd even pointed out that the performance issues made sense: the CBC+diffuser construction couldn't easily benefit from hardware AES instructions, which harmed performance.


I'm only trying to be rigorous in my evaluation of the evidence, since I'm in no position to judge the actual strength of the diffuser. Yes, this is probably just academic, as I wouldn't trust a disk with suspected tampering. (Or if that was required, like storing VHDs on a hosted block device, I'd store a hash of the disk or executables.)

Anyways, my reasoning:

1. The diffuser didn't really help for serious threat scenarios, and hurt perf, so no one should be upset by its removal. This seems to be true.

2. But MS previously called the diffuser critical and has made no statements as to the security impact of the removal. (Indeed, there are easy attacks without the diffuser, but no known attacks with it.)

3. A well-known cryptographer (Bruce's actual cryptography credentials are undoubted, correct?) cites the removal as a reason to be suspicious.

If 1 is true, which I think it is, then 2 and 3 are at odds with how reality should be. No matter how confident I am in 1, points 2 and 3 require a shift in the probability of 1. Or, I am misunderstanding 2 and 3.

How else should I evaluate these things? Sorry for being dense or wasting your time.


Bruce Schneier has said other dubious things about crypto in the past; for instance, about not using elliptic curve crypto, and preferring instead conventional DH.

Again though: if the lack of a "diffuser" is a reason to be "suspicious" of Bitlocker, it should be possible to explain why that would be. I tried to address that concern head-on. It bugs me that we keep having to flee back to "but Schneier is concerned".


Especially egregious was the episode where he misunderstood the "xkcd password scheme" and lashed out against it in his blog, and then kept on defending his flawed opinion against all explanations.

That did a lot of damage to all attempts to teach password security to laymen.

OTOH he was one of the first "famous" security people to publicly recommend writing down passwords.

So all in all he is certainly a net win.


Cryptography Engineering is a good book (with some flaws).

Schneier is much less of a factor in real-world crypto than generalist developers think he is.

I feel bad blaming him for it; it's not his fault that people who don't do serious work in cryptography have made him a cryptographic folk hero. On the other hand: there are still people using Blowfish because his name is on it, so maybe I don't feel bad blaming him.


Oh, certainly (I was just talking about passwords)!

Cryptography Engineering is something I hold very dear and have lent out to colleagues. Very well written, so it's an enjoyable read. And a good length, not a thousand pages tome.

My main problem with Schneier at this point is that he pivoted (ha! this is HN, after all...) from cryptography and technical analyses to political commentary. And I haven't found him very convincing in that regard. Even annoying at times.


OK, reading the rest of this subthread, I believe I understand it now. He's probably just misguided or simply incorrect. And I admit that most of my crypto knowledge comes from reading his and Ferguson's book, Practical Cryptography (and the 2nd edition, Cryptography Engineering - I'm not sure there is a better "popsci" crypto book).

To make a simple comparison: if someone smarter than me says 2+2=5, my probability of 2+2=4 will go down from (1 - epsilon) to something a bit lower.


This is the worst kind of controversy. Schneier is not wrong: removing the diffuser did, technically, reduce the security of Bitlocker.

Unfortunately, it reduced the security of Bitlocker in a way that is only marginally relevant to Bitlocker's goals, and in a way that is very, very difficult to explain to people who don't routinely work with cryptography.

So it's hard to crisply refute Schneier on this point, even as we have to watch the alarming spectacle of him recommending a rando disk encryption program that offers Blowfish, CAST, and GOST encryption.


Bruce's article linked to this piece (https://firstlook.org/theintercept/2015/06/04/microsoft-disk...), which does have a statement about the Elephant diffuser, including why MS removed it and its overall impact on Bitlocker.

I think it's best summarized as:

"Removing the Elephant diffuser doesn’t entirely break BitLocker. If someone steals your laptop, they still won’t be able to unlock your disk and access your files. But they might be able to modify your encrypted disk and give it back to you in order to hack you the next time you boot up"


That is true of practically every full disk encryption package, because of the nature of encrypting disk sectors rather than files: with no format flexibility or straightforward place to store metadata, and no awareness of message boundaries, it is difficult to meaningfully authenticate data.

Vanilla CBC makes it easier to mount attacks, and XTS makes it harder, and the diffuser may have even made it incrementally harder. But no notion of difficulty here deserves the uppercased "Hard" we're looking for with cryptography: at best, you concede attackers "only" the ability to randomize targeted ranges of stored bytes, which --- especially for the mountains of C code we call "operating systems" --- is a devastating vulnerability.
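
For anyone unfamiliar, here's what "randomize a block, edit the next" looks like against plain CBC (a toy demo of the malleability, not an attack on any shipping FDE product):

    # CBC malleability: flipping one ciphertext bit garbles that block's
    # plaintext and flips the matching bit in the NEXT block's plaintext.
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key, iv = os.urandom(32), os.urandom(16)
    plaintext = b"block-one-filler" + b"run_evil=0;pad.."  # two 16-byte blocks

    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    ct = bytearray(enc.update(plaintext) + enc.finalize())

    ct[9] ^= ord("0") ^ ord("1")   # byte 9 of block 1 edits byte 9 of block 2

    dec = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
    pt = dec.update(bytes(ct)) + dec.finalize()
    print(pt[16:])   # b"run_evil=1;pad.." - the targeted edit landed
    print(pt[:16])   # the price: block one is now 16 bytes of garbage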

If you are seriously worried about Highly Capable Attackers, and you lose custody of your laptop, you should consider writing it off.


There are some motherboards that can store the encryption key so that you don't need to type the password again when booting. BitLocker supports it. What a great technology. It saves your life!


Sounds like it completely defeats the point of encryption.


You could store the key (or password to the key) in the TPM in order to prevent BIOS, bootloader or option ROM tampering.

An issue is that too many things can affect the TPM PCR values (for example, plugging in a USB stick even though you don't boot from it - I think this is a mistake in the spec), so users get used to typing in the backup password and may not notice that a rootkit was installed.


In my experience of using Bitlocker for years, it's very rare that I have to enter the backup password. Even when I changed the default PCRs used.


It works great until $MALICIOUS_ENTITY unexpectedly takes your laptop, turns it on, and gathers unencrypted data.


I think the point of this feature (TPM? at least as I've seen it implemented on laptops) is that you can have a signed bootloader and an encrypted drive such that you don't need a passphrase to boot the OS (the TPM has it, or equivalent), and your TPM verifies you are booting a trusted OS (meaning one that will actually authenticate users in the usual way and check their authorization before granting them access to data)

... and nobody who just picks up your laptop can either read your files without your password, or remove the drive and gain access to your files without the TPM.

Whether that chain all works as I imagine, I can't say; I haven't investigated enough to be able to say. I don't know if all it takes to gain access to the TPM in reality is physical access to the machine, a USB stick with EFI booting, and any signed OS that will verify you as a root user authorized by that USB stick - something anyone can get on their own for $99, or for free by downloading Ubuntu or Red Hat.

That would be a pretty big let-down if that was really all you needed, though.


The threat is that they could do a cold boot attack and read the key out of RAM. That does require a bit more physical access than just turning it on.

The other big threat is that TPMs are probably cheaply built, and anyone with decent hardware skills can probably pop the key out of the TPM. I haven't seen the TPMs in laptops boasting anything like FIPS 140-2 Level 4 validation or any real tamper-proofing.

But for common users, a TPM+Bitlocker transparently deals with the issue of "left laptop in taxi". Microsoft, AFAIK, allows TPM+Bitlocker in lieu of a smartcard for remote access.


The TPM doesn't verify that you're booting a trusted OS as such. Each component of the boot process is hashed, and that hash is extended into the TPM's PCRs; the secret is encrypted in a blob that includes the expected hash values. If the hash values differ, the TPM will refuse to decrypt the secret. So booting a different signed OS won't give you the secret - even though you're a trusted OS, the TPM hashes will be different and the secret will remain encrypted.
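
A toy model of that measure-and-extend behavior (illustration only - real TPMs do this in hardware, and a 2015-era TPM 1.2 uses SHA-1 rather than the SHA-256 shown here):

    import hashlib

    def extend(pcr: bytes, component: bytes) -> bytes:
        """PCRs can only be extended, never set: new = H(old || H(component))."""
        return hashlib.sha256(pcr + hashlib.sha256(component).digest()).digest()

    pcr = b"\x00" * 32                  # PCRs reset to zeros at power-on
    for stage in (b"firmware", b"bootloader", b"kernel"):
        pcr = extend(pcr, stage)

    # The secret is sealed against this exact final value. Boot any other
    # code - even a validly signed OS - and the chain of extends produces a
    # different PCR, so the TPM refuses to unseal.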


That makes good sense. Thanks for explaining.


> BitLocker is Microsoft's native file encryption program. Yes, it's from a big company. But it was designed by my colleague and friend Niels Ferguson, whom I trust.

Nullius in verba ("take nobody's word for it")


Certainly there is 0% chance that the author is asking us to read between the lines here... #FaceValue


There may not be any deliberate backdoor in BitLocker; however, I think it's safe to assume that the NSA has access to the source code and has probably found some angle to exploit.


What's the disk encryption offering you think NSA doesn't have source code to?


That's a good point. Any product of significance that runs in software on untrusted users' computers can be compromised by determined attackers, NSA or not. Even hardware-protected secrets have been bypassed. Hence, the security argument must rest on something that works regardless of whether enemies know the mechanism.


I think that's Kerckhoffs's principle [0].

Regarding state actors who have the resources to attack any system, I think it's important to make it as hard as possible, even if it's "known" they will find a way. Why?

Because it will drive the costs very high, requiring years of R&D, with the result that they'll only use new attack techniques on high-level targets, and that means the risk of revealing attacks goes up (assuming high-level targets are more sophisticated and spill the beans, as in the Kaspersky case [1]).

[0] - https://en.wikipedia.org/wiki/Kerckhoffs%27_principle

[1] - https://securelist.com/blog/research/70504/the-mystery-of-du...


I agree. I've been holding off High Strength Attackers, as I call them, for a while. The formula that works on the software attack side is a combo of (a) the strongest security engineering we have, (b) obfuscation at every level, and (c) diversity of hardware & software components with predictable interfaces. One simple example that worked well for years was using a hardened Linux/BSD box on PPC behind a guard. The guard narrowed communications to the app level while wiping out covert channels and modifying patterns to resemble another OS. All evidence from fingerprinting tools would suggest they were connecting to an x86 box, maybe even Windows. All their attacks, some I still don't get, failed to work on the box.

Most deployments use more tricks than that. I was just amazed at how long that one went on without compromises. Used same trick for desktops with custom client-server apps. These days, I'm working on CPU's that protect pointers & code while tools randomize (i.e. diversify) the application automatically. Orange Book era solution to secure networking, email, & databases still work with minor tweaks. I use hardened client-server schemes instead of web apps to avoid... more than I can count. TEMPEST safe + 100yds of space where applicable. High assurance security = build on what we know works & eliminate anything risky where possible. Obfuscations & diversity I add as you said to just slow down our shadowy friends with deep pockets & large staff. Works well so far.


Plus encryption routines aren't that big; for a major organization there's no real need for source code at all.


Do we have good container encryption that mounts containers as drives on Windows? I never understood the point of the whole-disk encryption stuff.


> when Microsoft released Windows 8

Um, no.


I don't think Microsoft is ever going to risk putting a Backdoor™ in Windows after _NSAKEY, or at the very least after the Snowden document leaks + OPM hack (both of which prove the US government's incompetence at storing classified information securely, which means the company's backdoor could be exposed at any time).

But that doesn't mean Microsoft isn't going to make it easy for the NSA to bypass its security. We've seen several reports of that from the Snowden documents, and it affects OneDrive, Outlook, Skype and probably even Bitlocker.

All Microsoft needs to do is not fix a vulnerability it finds on its own (not one a third party reports to the company, as they would have no choice but to fix that). And it doesn't even need to do that indefinitely. It could fix it when a new vulnerability appears, rotating them every 6 months or so.

Then it can either directly give that vulnerability to the NSA through all the "cyber sharing programs" where Microsoft has been a "volunteer" for years (way before Apple), or it can let the NSA "discover" it on its own, which can be done as easily as Microsoft's security researchers discussing a new vulnerability internally through channels that don't have strong end-to-end security.


> Windows after _NSAKEY

_NSAKEY wasn't a backdoor. People need to let that go. Not a single line of code was ever discovered that indicated the NSA was utilising it as a backdoor into cryptography, so the entire basis for the conspiracy theory is that the variable which holds the "backup key" happened to be named that (and that includes the NT 4.0/2000 source code leaks).

https://en.wikipedia.org/wiki/NSAKEY

Also, Microsoft shares the Windows code with many institutions [0]. Yet none of them, nobody at Microsoft, and not even the Snowden leaks indicated a backdoor in Windows.

Microsoft MIGHT have made it easier for the US Government to tap Skype calls (and I believe that they did, based on available evidence). Aside from that, for all the mudslinging, almost none of it ever sticks.

[0] https://www.microsoft.com/en-us/sharedsource/


Please stop spreading FUD about "NSAKEY". It's an Alex Jones Infowars-level conspiracy theory that has been comprehensively debunked.


I've never read the debunking, but the NSA's reliance on codenames (even before Microsoft) for everything leads me to believe they wouldn't ever be so obvious.


Actually, I looked into things as promised: the Wikipedia links and whatever popped up in my Google searches. Turns out the only citation you had, an HN commenter, was just repeating the claims of an MS spokesperson with some added opinions. I found a more interesting source which makes similar claims:

http://cryptome.org/nsakey-ms-dc.htm

On the surface, it seems believable. However, like Duncan, I'm seeing glaring problems with these responses. It boils down to this: the Microsoft rep claimed they came up with all the details (including the key & its name) on their own without NSA intervention, generated/control both keys themselves, submitted the design for review, and got NSA approval. Yet, without NSA's involvement, a developer named their key NSAKEY, and when decrypted its email address was "postmaster@nsa.gov." Also, as Duncan noted (& my HSM guy as well), it's standard for HSMs to let you export keys for backing them up or for multi-site usage.

So their claims are highly unusual, suspect, and weak. The evasive behavior that led up to this isn't typical even of Microsoft: they usually come up with some CYA BS rather quickly. Also, the odds of a Microsoft-controlled key they made entirely on their own being called NSAKEY, with NSA's email on it, without NSA participation, are low, albeit possible (dev joke).

Far from Alex Jones stuff, there's actual substance to worries over what NSAKEY did or does. I agree with critiques that it's a lower concern given all the attack angles on Windows from an outside or inside perspective. Yet dozens of reports didn't show anything along the lines of "comprehensively debunked." Instead, we had a series of weak stories from Microsoft that raised more questions than answers, with plenty of evasion on their part. And they're known to work with the NSA. So people should still not trust that key, and should use the steps that one person published to replace it with their own.

Solves the problem with minimal fuss, eh?


This is a very long source from 2000 that says in 6500 words what 'geofft said in just ~300: that this is a code signing key for crypto libraries. If Microsoft wanted to backdoor your Windows machine, they already have complete control over Windows code signing. They do not need a special key literally labeled "NSA_KEY" to do that.

It's not "lower concern". It's not a concern at all. Microsoft's code is comprehensively reverse engineered. People have reversed the most boring, tedious libraries on the system looking for memory corruption bugs. You posit that maybe they just forgot to look into this "NSA_KEY" business.

For a software security professional, you really come loaded for bear with a lot of weird advice:

* Avoid elliptic curve and use conventional Diffie Hellman and RSA because the NSA controls the ECC patents (?!).

* Use Blowfish because it has a long security track record (?!).

* Watch out for Google because they don't understand endpoint security (?!).

* NSAKEY isn't a high-priority issue but it's something that people should be concerned about (?!).

* Here's an unencrypted HTTP website that you can cut-and-paste GPG commands from instead of reading the manual (?!).

* Firewalls are just some made-up crap that don't actually provide any security (?!).

* Use MatrixSSL and PolarSSL instead of OpenSSL because OpenSSL's code quality is crap (?!).

I get the feeling that we do very different kinds of security. You talk a bit about formal verification and EAL levels. I had the misfortune of losing a couple months of my life in the early 2000s to Common Criteria work. If you're coming from a CCTL background, some of the very weird perspectives you have on this stuff start to make sense to me.


They forgot to look into the subversion, BIOS, peripheral firmware, and covert channel risks despite me repeating those on forums all over going back years. Then the shit ended up in TAO's catalog (including my "amplifying cable" concept, aka RAGEMASTER) & most stuff was weak to it. The same argument could be applied to widespread open source software that people find "previously unknown" bugs in despite that code existing for years and being "widely reviewed." So I don't have to posit anything, given the horrid state of INFOSEC and app review going back years: they should instead prove they did better than usual in the security analysis and show what they found for independent review. This is, coincidentally, part of the scientific method as well.

You instead could've settled this argument by linking to a group that reverse engineered all of that, found that it did exactly what they said, and had no conflict of interest with the U.S. government. You say you know these exist but still haven't produced them. You instead expected people to take your word for it or comb the Internet looking for the proof you allege. Both a tall order.

Those of us in INFOSEC up against highly subversive opponents don't deal in pure faith [esp. in similarly evil organizations]. So where is this evidence that it was totally reverse engineered and proven to be functionally equivalent to the claims? I'd like to read it, determine its credibility, pass it on to peer review, and share it widely if it passes enough of that. As promised, I'll even add it to the Wikipedia article that tops searches to this day.

Waiting on you and your evidence.


No part of this comment addresses anything that I said. This is a pattern with our interactions: I bring up something specific, and you change the subject, usually to a flurry of Snowden jargon (this time it's the spy-mall catalog).

We were talking about the transparency of the Windows kernel.

It's a little funny that you feel like you need proof that the code has been reverse engineered, as if there were like 4 people in the world who could do it, one of whom is in a mental institution, 2 of whom are in Russia, and the last is hiding in a monastery in Tibet. You know, as opposed to something you could literally learn from a book that was on the shelf at Borders, back when they still sold books retail.


To address your edit which added all the bullet points:

1. Avoid ECC for commercial activity because the NSA & a private company were asserting 130+ patents existed on it. Be ready with lawyers otherwise. Anyone following all the patent battles would be concerned. If not concerned, or doing FOSS, I always recommend using Bernstein's NaCl, or a double signature scheme with a post-quantum algorithm as the secondary for those worried about such things. Lots of good work on Merkle trees recently, for instance.

2. Use Blowfish in cascading ciphers with other strong ones such as AES candidates. Cryptographers make barely substantiated claims about things' risk all the time. Worse, their proofs on "secure" constructions sometimes don't even apply to the real protocol, only an abstraction of it lacking key details (eg padding). Always include things the NSA and other strong attackers have failed to beat for 10+ years. If it was so bad, they'd be dominating it across the board. They're not. So it's either not so bad, or good obfuscation to add to stronger stuff.

3. Google's most clever work, which I praised, was NaCl (Native Client). It ended up being one of the weakest CFI schemes in practice because they sacrificed too much security for performance, and maybe other reasons. They continue to build on it despite better stuff being available. Much of their other work was COTS implementation quality with no specialist security engineering that I could tell. They're weak on endpoints, like the majority, by relying on insecure tools and endpoints. My advice is to think of them like any other vendor rather than on another level. Depending on architecturally weak TCBs such as Linux doesn't add confidence to any notion you have of strong endpoint security. There are no strong endpoints in the mainstream, lol.

4. NSAKEY is possible evidence of subversion from a company with a whole history of screwing customers for profit, on their own and with government. By itself, not the biggest worry. It's just one more circumstance among many that should get people away from Microsoft tech. Seriously, how many times do BSD or Linux users get into huge debates about something like this, where the NSA's actual name is dropped in there along with secret functionality and evasiveness by developers? If it happens at all, you could count the occasions on one hand. Getting away from such companies is a positive.

5. A risk indeed, which I took on a PC that was already compromised by my main opponent as far as I knew, in a sandbox. I cross-referenced the command against the documentation, as I indicated in the conversation. Yet most people who use software do similar things without cross-referencing. People were likely to Google signify and OpenBSD issues as well, along with following steps they saw online. That risk is pervasive. Nonetheless, unlike you, I decided to change my position back to being more careful after a good critique by a commenter. You've only increased your number of attacks with opposition to... anything.

6. Firewalls, as industry uses them, are a weaker form of security than the high assurance guards that predated them, the strong endpoint security that predated them, or even the firewalls on dedicated PCI cards with more secure TCBs & I/O offloading that existed in the 90's. The firewalls were easy to port, cheap to make, and often pricey to sell, though! They're regularly bypassed on endpoints and in many organizations' networks. They include almost no assurance activities, which are critical for security. A number of attacks springboarded off of them for this reason and leveraged their privileged position in the network. My position is that firewalls are a filter for hackers without talent, along with providers of other security or non-security functionality buyers find useful. Industry needs to switch back to guards with real assurance. And not that Linux- or Solaris-based crap vendors are pushing recently, either.

7. Avoid OpenSSL because every code review of it showed it to be utter crap with all kinds of issues, about the worst coders one could find, and reviewers who could barely follow it or justify a number of things in it. Consider alternatives like MatrixSSL, PolarSSL, Cryptlib, Botan, and whatever else hasn't been shown to be utter garbage yet. They might be better. Shockingly, there were still people recommending OpenSSL even as the LibreSSL team dug up one problem after another in the worst horror story of bad coding and security of that time. Malpractice, indeed.

"I get the feeling that we do very different kinds of security."

We do. You trust Windows, think such software isn't a black box because people can pay millions to fully reverse engineer it, abandon stuff that works in a use case because someone said to without evidence, trust subversive companies because someone else said to, and so on. Lots of faith-based activities and misrepresentations of others' comments by stripping context in a totally unrelated discussion.

My form of security says you (a) have clear requirements proven in practice to work; (b) a design that meets them with strong argument; (c) a security model that makes sense; (d) evidence you use it; (e) implementation designed for review with good layering, modularity, and interface protections; (f) strongest software and hardware protections you can use; (g) extensive testing of successful and failure states; (h) covert storage and timing channel analysis; (i) robust compilation to object code; (j) secure SCM setup; (k) independent evaluation and pentesting of these; (l) delivery of product with source & build tools with hash that matches independent evaluation; (m) preferably mutually distrusting evaluators. All this applied as much as possible from processor up or using diverse hardware with careful interfaces to counter hardware risks a bit.

Yes, a lot of this was borrowed from the higher EALs and papers on high assurance processes/products. Many things like this survived NSA pentesting for years. The other approach, which you & the INFOSEC industry in general recommend, has produced things with endless vulnerabilities at high severities, and subversions. China, Russia, and TAO are having a field day with it all. Among others. That you all continue to promote such methods despite them having no results for decades hasn't started to make sense to me outside studies on psychology and networking effects. That industry doesn't adopt even a fraction of the stronger methods is amazing, despite empirical evidence and strong anecdotes showing they produce more robust systems. The industry continues instead to focus on churning out low-quality offerings and deceiving buyers on their security for the high profits involved.

Keep it up, though, as black hats need the job security. Mainstream INFOSEC has always helped them with that. The oft-ignored high-assurance community will continue doing what we can and helping anyone willing to learn understand strong security. Although, in this thread, only the firewall point even applied to my expertise, as a firewall is a watered-down guard in terms of assurance. The others are a random assortment of comments with no supporting context given. Nonetheless, you seemed confused about what strong security engineering takes, and I was happy to break it down for you. Have a good night.


> Keep it up, though, as black hats need the job security.

This is not the kind of comment that I come to HN for.

Cries for tptacek to produce evidence, for example of the reversibility of Windows code, would be more pleasantly satisfied by some deeper investigation, such as discovering ReactOS. It would be surprising to me for anyone to conclude that Windows code has not been totally, and publicly, reverse engineered.

It is not a big leap from there for you to investigate from that code the true nature of NSAKEY. My guess is that you would find that there is no there, there.

Your statement "My form of security says you (a) have clear requirements proven in practice to work" reminds me of my youth, when I discovered the writings of Dijkstra and others talking about proving programs correct. While this is a wonderful idea, and a small handful of folks have actually done this, modern business doesn't seem to want to stand still for Category 5 maturity.

While you denigrate OpenSSL and its code quality, keep in mind that several of the high-severity bugs, such as BEAST and CRIME, were not code quality bugs but essentially protocol bugs. The perfectly-specified requirements (implement compression in HTTPS) could have been backed up by code proved correct and still exhibited the flaw. Similarly for BEAST.

Also, one wonders if you have tried to break real-world crypto. If you haven't, I suggest that you give some of the online exercises a try. It is rather eye-opening.

Finally, many statements in your comments mischaracterize the points arguments make. This has become more than annoying. Please stop.


"It would be surprising to me for anyone to conclude that Windows code has not been totally, and publicly, reverse engineered."

The subject we were talking about was massively debated by IT people. Nobody showed up with the code to say, "Here's the reverse engineered code. We know exactly what it does." If he was right, that should've happened. Instead, all references I found to the subject by INFOSEC professionals are trying to guess what it did. So, I think it's fair to ask for a reference to slam-dunk evidence supported by reverse engineering if none is to be found with Google & nobody involved acted like it exists. Only source so far is tptacek's word.

"While this is a wonderful idea, and a small handful of folks have actually done this, modern business doesn't seem to want to stand still for Category 5 maturity."

I agree. It's why I don't work in high assurance security any more, outside consulting, R&D, and free advice on the Internet. I'll add to your comment that this is not just for heavy-weight processes. Cleanroom & Fagan inspections, which I used, drove the defect rate close to zero while often reducing labor due to less time debugging, with little impact on time-to-market. The Cleanroom stuff, when apps were similar, even had statistically certifiable bug rates we used to issue warranties on code. Despite no extra time or cost, virtually no company we presented those processes to was interested in using them. Demand in the IT and INFOSEC space is so against quality that you can't even sell it to the majority, even if it makes money rather than costs it. It's that bad.

"he perfectly-specified requirements (implement compression in HTTPS) could have been backed up by code proved correct, and still exhibited the flaw. Similarly for BEAST."

Totally true. It's why I mention getting requirements and specs right. Even CompCert, an amazing piece of engineering, eventually had flaws turn up during a test because the specs on the very front and back ends were wrong. Other side of the coin: the implementation of all the specs was flawless, with the middle end having zero defects. Every other compiler, from proprietary to FOSS, had bugs throughout. So the existence of protocol errors doesn't argue against verified software processes in any way. It just means your protocol had better be as good.

And if it wasn't clear, my gripes about OpenSSL were based on the commit logs of the LibreSSL team commenting on each piece of code as they went through it. I'm not talking about the protocol at all: just terrible coders producing garbage that got widespread adoption with hardly any review. Unsurprising, given my above comments on how the IT market works.


> Only source so far is tptacek's word

This is not only false, but falsified by this exact thread.


If they rotate vulnerabilities in Bitlocker, that could be discovered via reverse engineering. And a big vulnerability (like incorrectly calculating the IV for a sector) would require rewriting the disk.

I'd also wonder what vulnerabilities would exist in such software. The encryption part is well described, and one would expect it'd be done right (and if it's wrong, that requires rewriting the disk to fix). Other than that, what are we talking about? Bugs in the TPM/BIOS PCR checking? Accidentally writing the keys somewhere? I'm probably being very unimaginative here.

Edit: Let me say I'm assuming you have a password in addition to the TPM. Obviously if you can boot the machine up and have Bitlocker decrypt, and you have access to the machine, you can somehow extract the key if you have the resources.



