"We believe that the intention of secure boot is to protect against
malicious use or modification of pre-boot code, before the
ExitBootServices UEFI service is invoked. Currently, this call is
performed by the boot loader, before the kernel is executed.
Therefore, we will only be requiring authentication of boot loader
binaries. Ubuntu will not require signed kernel images or kernel
modules."
That's completely different from what Fedora is doing (signing all kernels and modules). I hope, for their sake, that Microsoft agrees with their interpretation and doesn't revoke their signed binaries. I'm not sure what advantage a signed boot loader gets them if you can run any arbitrary kernel from within the loader.
I was wondering about this too. From the user's perspective, this is great, as it basically means that everything downstream from the custom UEFI bootloader can be unsigned, user-defined code.
But at the same time, it pretty clearly defeats the purpose of the UEFI signature chain. A plausible malware vector would thus be to install the Ubuntu loader, have it load your malware payload, and then chain to Windows, compromising the "secure" boot.
Basically, it undoes secure boot entirely. Which is a good thing. I hope Microsoft is willing to look the other way on this, but I fear that they are not.
A technical user, sure. But that's an awfully big step down in security guarantees: from "hardened, secure boot with guaranteed validity and authentication at each step" to "wait, did we install Ubuntu on this box?"
It could also have the effect of associating the Ubuntu splash screen with malware in users' minds. Given that many users already associate a black Linux command-line prompt with malicious hackers, this might cement in their minds that Linux is bad.
They should have a splash screen saying: "Now booting Ubuntu Linux. If one of the next screens says 'Welcome to Microsoft Windows', your system is probably infested with malware."
Who says "the user" will see that splash screen? Firstly, I think typical users will switch on their PC, then look elsewhere for a minute or so. Secondly, there is the case of having a nefarious sysadmin (Internet cafe, hotel, etc)
A splash screen that required acknowledgment would work for case #1, but would be annoying, too.
I don't see how. Flame managed to create signed malware with an MD5 prefix attack... but MD5 had known problems for over 10 years.
And Flame is widely thought to have been produced by a government intelligence service -- it still takes massive talent and CPU time to do something like that.
I'm not aware of any MS private key ever being leaked or cracked.
There will be weaknesses in specific UEFI implementations, but I don't think they'll be able to produce anything general purpose.
I think the point was that Flame was signed with a Microsoft key.
It's true that that key shouldn't have been trusted for what it was used for, and that the MD5 attack essentially elevated the rights of the key, but the parent's point isn't 100% wrong (nor is it 100% right...)
Flame used a prefix collision attack that had not been seen before. The concept was demonstrated a couple of years ago but the attack itself was novel.
While that's true, what enabled Flame to use that to sign code was a chain-of-trust mistake, as nl pointed out above -- and there's no guarantee that such chain-of-trust mistakes won't happen in the future.
Not every piece of malware, but does, for instance, the NSA run Microsoft-signed binaries, or can it sign its own? If it holds valid signing keys, how much can you trust that it (and other agencies) will always use those keys for your own good, and that you'll agree it's for your own good when they use them?
I think you need to be more careful with that argument. While I agree in principle that this sort of flaw is inevitable in the future, and that it puts a hard cap on the value of measures like secure boot (I'd go even further and argue that it makes the costs of secure boot higher than the benefit), it's not correct that the signature process is inherently compromised. Public key encryption works, and it works very well. There have been a handful of goofs, and there will be more in the future. But the key regimes that attackers would want to compromise (consider even banal stuff like the signing keys for console games, which remain secure after many years) vastly (vastly!) outnumber the few exploits.
It is a fallacy to assume that because private keys have been leaked in the past, private keys will necessarily be leaked in the future.
Remember, the DRM-can-never-work argument doesn't apply here. DRM can never work because the user must be supplied the decryption key along with the encrypted content. That does not apply to signing: you must be supplied the public key, but the private key can be kept private.
Many private keys have remained private. (So far as we know... and note here I'm talking about private as in asymmetric public/private such as can be used for signing, not "keys that were meant to be private but got leaked".)
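To make that asymmetry concrete, here's a minimal sketch in Python (the 'cryptography' package and Ed25519 are my illustrative choices; real secure boot uses X.509 certificates and Authenticode, but the trust asymmetry is the same):

    # Minimal signing sketch: verification needs only the public key,
    # so the private key never has to leave the signer.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    private_key = Ed25519PrivateKey.generate()  # held by the signer, never shipped
    public_key = private_key.public_key()       # distributed with every machine

    bootloader = b"bootloader image bytes"
    signature = private_key.sign(bootloader)    # done once, at signing time

    try:
        # The verifier holds only the public key; nothing it has lets an
        # attacker mint a new valid signature.
        public_key.verify(signature, bootloader)
        print("signature valid: boot")
    except InvalidSignature:
        print("signature invalid: refuse to boot")

Unlike DRM, where the "secret" must be handed to every user, nothing the verifier is given here helps an attacker forge a signature.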
In fact, I'd observe the Microsoft private key wasn't even leaked. A new key was forged: flaws in MD5 allowed someone with vast, vast resources to construct another certificate that would be accepted. One can equally read this as proof that the system is pretty strong, if it took government-level resources to attack a known-weak hash that I imagine won't be in the next signing standard.
We cannot assume that private keys will leak. We cannot even assemble an argument that the probability is high, because it isn't.
This year. Next year it will cost half as much; in 10 years, a thousandth. Are we willing to expire boot signing keys every couple of years? Are we really comfortable with only governments having such power, because governments can do no wrong?
In the encryption wars it goes the other way: encrypters get to make decrypters exert exponentially more effort for only polynomially more effort themselves, so the systems get stronger over time, not weaker. We've long since passed the point where handheld devices like cell phones can use encryption that would take resources in excess of the entire universe, computing at the maximum theoretical rate for the rest of time, to brute force. We don't always use that strength, and there may be (and probably are) weaknesses that cut it down, but that's the direction this goes over time, and I can't think of anything with any chance of changing that dynamic. Even a proof of P = NP wouldn't do it (that only potentially nails certain forms of encryption used today; there are others that would still not be vulnerable), and if that's not enough...
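A quick back-of-the-envelope illustrates that asymmetry (the guess rate below is a made-up, generous figure for a large cluster, not anyone's measurement):

    # Brute-force cost doubles with every key bit added.
    GUESSES_PER_SECOND = 10**12  # hypothetical attacker throughput
    SECONDS_PER_YEAR = 3.156e7

    for bits in (64, 128, 256):
        years = 2**bits / GUESSES_PER_SECOND / SECONDS_PER_YEAR
        print(f"{bits}-bit key: ~{years:.2e} years to exhaust")
    # 64-bit:  ~0.6 years    -> breakable
    # 128-bit: ~1.1e19 years -> roughly a billion times the age of the universe
    # 256-bit: ~3.7e57 years -> Moore's law doublings barely dent this

Doubling the attacker's speed every year or two buys back one key bit per doubling, while the defender adds bits essentially for free.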
I know all that, but you have to agree UEFI makes everybody put a lot of trust in a series of black boxes we cannot inspect. Even if we assume obtaining a set of signing keys requires more computing power than is physically available, we cannot rely on the keys not being obtainable through less compute-intensive means.
Actually, it only demonstrates that it's possible for them to be leaked, which is a rather obvious conclusion.
However, if the signing keys remain valid forever and signed binaries don't have to be re-signed when keys expire, you have essentially an infinite amount of time to leak (or crack) the signing keys, and the likelihood of a leak approaches 1.
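That "approaches 1" is just the complement rule. As a sketch (the per-year leak probability p is a number I made up purely for illustration, not an estimate of anything):

    # With a fixed, independent chance p of a leak in any given year and a
    # key that never expires: P(leaked within n years) = 1 - (1 - p)**n.
    p = 0.01  # hypothetical 1% per-year leak probability

    for years in (1, 10, 50, 100, 500):
        print(f"{years:4d} years: P(leak) = {1 - (1 - p) ** years:.3f}")
    # -> 0.010, 0.096, 0.395, 0.634, 0.993: given forever, it converges on 1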
I am much more concerned by the increase in computing power than by leaks. The value of a valid signing key in a UEFI secure-boot world is high enough to ensure that someone, somewhere, will spend inordinate amounts of money and/or computing resources to obtain one. And how much does leaking a key cost?
Do you leave your door unlocked because locks have been demonstrated to be easily broken?
The first rule of security is that security is all about layers.
Also, I sent a copy of your comment to Phil @ Apple; I think he's going to drop all the DRM restrictions on iOS binaries and release Apple's private keys used for signing iOS apps after reading it.
Okay, where are the private keys for Apple's iOS signing, the Motorola bootloader on the Droid phone, the Xbox 360 bootloader, etc.?
Are they imminently going to be released?
While there are definitely flaws in implementations and leaks, assuming them to be foregone conclusions is a mistake.
"We believe that the intention of secure boot is to protect against malicious use or modification of pre-boot code, before the ExitBootServices UEFI service is invoked. Currently, this call is performed by the boot loader, before the kernel is executed.
Therefore, we will only be requiring authentication of boot loader binaries. Ubuntu will not require signed kernel images or kernel modules."
That's completely different from what Fedora is doing (signing all kernels and modules). I hope for them Microsoft agrees with their interpretation and won't revoke their signed binaries. I'm not sure what advantage they would get from a signed boot loader, if you can run any arbitrary kernel from within the loader.