Very much recommended, even if you skip or only skim the mathematical details in chapter 5, as I did.
What I don't understand is why the patent application was not objected to during the secrecy-order phase. Or was it, perhaps partly and in secret, objected to in the US? The actual US patent apparently only covers the recommendation to pick your own P and Q, and not the actual mechanism itself. It is also not clear to me what the last few sub-chapters are meant to show.
I think that this is overall a good paper, but I have one major issue with it. They quote from a presentation given by Richard George from the NSA twice in this paper. I had seen that presentation before, and in it he speaks to why the algorithm was put in the NIST standard[1] and directly answers a question regarding how the P and Q in the standard were generated[2].
His claim was that the NSA had issues in the past getting encryption devices certified for use in unclassified environments, because the NSA controlled the certification process for classified encryption algorithms (which were themselves generally classified), but NIST controlled the certification for government use in unclassified settings (and these algorithms could not be classified). The NSA wanted to be able to use an existing algorithm in an unclassified setting. As the article does point out, Mr. George stated that the algorithm was intended primarily for their own internal use and that users should be able to use any P and Q, though they needed their own P and Q certified for their own use.
Later on in the presentation he states that the P and Q were both non-deterministically generated random numbers. He states that the NSA has an office devoted specifically to producing random numbers for their own use, and they just got the P and Q from there.
Whether or not you believe this claim is an entirely different and reasonable question (there's no real way to verify it), but why bother quoting him if you're going to selectively quote like this? If the P and Q are in fact purely random as Mr. George claims, Dual_EC goes from being a grand conspiracy to just a slow, crappy PRNG. I think selectively quoting to make this guy sound more sinister makes it look like the author wants it to be a grand conspiracy and just undermines the argument.
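To make concrete why the provenance of P and Q is the whole ballgame, here is a toy sketch in Python of the trapdoor. Everything in it is made up for illustration: a tiny curve mod 10007 instead of P-256, an invented trapdoor exponent e = 1337, an invented seed, and no truncation of outputs (the real generator drops 16 bits per block, which the attacker must guess). If someone knows the discrete log e relating P and Q, one observed output is enough to recover the internal state and predict everything that follows:

    # Toy Dual_EC-style PRNG over a tiny made-up curve y^2 = x^3 + 2x + 1 mod 10007.
    # Nothing here resembles the real P-256 parameters; it only shows the algebra.
    p, a, b = 10007, 2, 1          # p % 4 == 3, so modular square roots are easy

    def add(P1, P2):
        """Point addition on the short Weierstrass curve (None = point at infinity)."""
        if P1 is None: return P2
        if P2 is None: return P1
        (x1, y1), (x2, y2) = P1, P2
        if x1 == x2 and (y1 + y2) % p == 0:
            return None                                   # P + (-P) = infinity
        if P1 == P2:
            lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope
        else:
            lam = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
        x3 = (lam * lam - x1 - x2) % p
        return (x3, (lam * (x1 - x3) - y1) % p)

    def mul(k, P):
        """Double-and-add scalar multiplication k*P."""
        R = None
        while k:
            if k & 1:
                R = add(R, P)
            P = add(P, P)
            k >>= 1
        return R

    def x(pt): return pt[0]

    Q = (1, 2)       # base point: 2^2 = 1^3 + 2*1 + 1, so it is on the curve
    e = 1337         # the trapdoor: only whoever generated P knows e
    P = mul(e, Q)    # P = e*Q -- the suspect relationship between the two points

    # The generator: the state update uses P, the visible output uses Q.
    s1 = x(mul(4242, P))   # state after one step (made-up seed 4242)
    r1 = x(mul(s1, Q))     # output block the attacker observes on the wire
    s2 = x(mul(s1, P))     # next internal state
    r2 = x(mul(s2, Q))     # next "random" output

    # Attacker: lift r1 back to a curve point R = (r1, y) ...
    y = pow((r1**3 + a * r1 + b) % p, (p + 1) // 4, p)
    R = (r1, y)            # either sign of y yields the same x-coordinate below
    # ... then e*R = e*(s1*Q) = s1*(e*Q) = s1*P, whose x-coordinate IS the next state.
    assert x(mul(e, R)) == s2
    print("predicted next output:", x(mul(x(mul(e, R)), Q)), "== actual:", r2)

If, as Mr. George claims, P and Q were independently generated random points, then no such e is known to anyone, the shortcut above does not exist, and you are left with exactly the slow, crappy PRNG described above.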
Btw., the entire presentation is worth watching if you have time - he goes into the history of DES, espionage between the US and Soviets, getting cryptographic equipment working in tanks, etc...
There is now strong circumstantial evidence about Dual_EC: Certicom had contemporaneous patent filings for key escrow using PKRNGs. It would be weird enough if Certicom had a patent for key escrow using backdoored RNGs, and weirder still if they had a patent for escrow using backdoored PKRNGs. But a contemporaneous patent specifically for elliptic curve PKRNG key escrow?
Alternately, it could just be that Certicom saw a means to profit off of an emerging standard by modifying it to support key escrow. It's not like key escrow in general was something that nobody had thought of before 2005. If this was an NSA backdoor, why would the NSA reveal it publicly by having Certicom patent it? Not to mention that this was a horribly unpopular algorithm long before any suspicion of it being a backdoor. If it was an NSA campaign to break public cryptography, it was a miserable failure long before Snowden ever hit the scene.
Dual_EC was the default PRNG for the RSA BSAFE library. The only actual numbers I've seen regarding how popular the library was are this researcher's findings[1], in which he scanned 21.8 million IP addresses and managed to find 720 servers using it (to be fair, that's a lower bound, as there were two implementations of BSAFE and only one was detectable remotely). It was generally easier to use a PRNG provided by the OS or to use an open source library for free. As a testament to how unpopular Dual_EC was, there was a bug in OpenSSL for years that prevented it from working at all when Dual_EC was enabled, and it wasn't discovered until after Snowden.
In 10 years of doing software security assessments I saw a total of zero (0) systems use BSAFE. It may have been popular with government contractors.
Later: There was also a period during which the Red Hat Secure Server that shipped with Red Hat Professional linked against BSAFE. Its release timing doesn't square with Dual_EC, though.
So how long until this "Extended random" thing is purged from all TLS implementations and standards? And with such a blatant (in hindsight) trap in the TLS standards, how can the rest of it be trusted?
It was never standardized and is in no TLS implementation currently shipping, nor was it ever enabled in any mainstream implementation. The extension was proposed specifically as a compatibility measure for DoD networks, and was never recommended for normal use.
> The extension was proposed specifically as a compatibility measure for DoD networks
"Compatibility"? The excuse for the extension was detailed in TFA:
(Quoting the proposal:) "The rationale for this as stated by DoD is that the public randomness for each side should be at least twice as long as the security level for cryptographic parity, which makes the 224 bits of randomness provided by the current TLS random values insufficient."

(The article authors comment:) ""Cryptographic parity" is not a common phrase among cryptographers. It is not defined in the document, and its intended meaning is highly unclear." But "Extended Random reduces the Dual EC attack cost from 2^31 to 2^15, since the attacker no longer needs to guess 16 extra missing bits."
The excuse actually sounds like "the bigger the number of bits you use, the more we need you to disclose publicly." Which was obviously true from the point of view of the attack.
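For what it's worth, the arithmetic in that last quote is easy to check: the 16 missing bits the attacker would otherwise have to brute-force contribute a factor of 2^16 to the attack cost, and 2^31 / 2^16 = 2^15, which matches the stated reduction exactly.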
There's not much new in this paper; it's basically a survey. Read the original IETF mailing list posts for extended-random and OpaquePRF. Both proposals were discussed, and the presumptive reason for both of them to be standardized was to make TLS acceptable for deployments on USG networks that had weird crypto requirements.
What you're quoting from Bernstein's paper is the rationale DoD had for requiring a change to TLS for its deployment on their networks. This too was discussed in detail on the mailing list: their interpretation of the requirements of Suite B was that the randomness in a crypto transport had to be of arbitrary length, contributed by both client and server, and tied to keying.
Nobody at the time claimed that these were requirements for Internet cryptography. Just that DoD had these requirements internally, and it was better to define a TLS extension to meet them than to have DoD running some horrible custom protocol that the private sector would eventually have to implement too.
The "compatibility" I'm referring to is with DOD policy, not a technical measure.
I am not arguing that NSA was pure of heart in proposing OpaquePRF/extended-random. Perhaps they had a key escrow plan in place for their own networks? What I am saying is that there was no serious discussion that I can find for the deployment of OpaquePRF/extended-random on civilian networks.
I quoted the authors saying ""Cryptographic parity" is not a common phrase among cryptographers. It is not defined in the document, and its intended meaning is highly unclear." What was written by the NSA that was clearer, then? Link or quote? If it's "the randomness in a crypto transport had to be of arbitrary length, contributed by both client and server, and tied to keying," then that's just a rephrasing of "the public randomness for each side should be at least twice as long as the security level," which I (and the article authors before me) also quoted as a DoD excuse.
I don't care what NSA's rationale for extended-random is. You're still discussing this as if extended-random was an attempt to change the version of TLS that you and I use. It was not. It was so much not that, that the primary objection to it in the TLS-WG is that the extension worked in such a way that it made an end-run around the TLS standards process (in allocating extension numbers) in order to avoid being standardized for normal TLS. Because its authors didn't expect us to use it.
Bernstein cavalierly mentions Rescorla in this paper, but never provides the additional color that Rescorla himself said on the mailing list that he had no idea why the extension needed to exist, and did not understand its parameters.
Rescorla is an extremely sharp person. Do you think he thought, "I don't understand this extension, but NSA says it's important, so we'll all use it?" Of course he didn't.
And I likewise don't care about Rescorla; I've never mentioned him in this discussion, and I don't know why you bring him up in this thread. I see that Bernstein mentions him, but I really haven't.
It's obvious that the DoD guys managed to add to the standards texts a mode in which enough output of the RNG becomes part of the initial communication, and most people reading would never know it should be ignored: the text sits there among all the other important text. Step 2, "now just implement it, here's the link," maybe didn't occur this time, but it was almost a kind of backdoor entry into the standardization process. I can imagine that even if it wasn't really successful by your criteria, the DoD guy who managed to push it even that far could have been promoted. Or, if somebody outside was involved with such a goal, they could have received nice fees ("see, it's there, it looks like the text of the standard, it quacks like the text of the standard...").

Exactly like this discussion we're having now: I can easily quote the text of that proposal (it's easy to Google) and "it looks legit" (heh), but I honestly have no idea how I'd even find what was written on the relevant mailing lists at the time. Or, more honestly, I know I'd probably find it if I spent more time than I'd like to, so I prefer just communicating, like most of the potential "targets." For completeness, even if you've already written what you consider important, I'd certainly welcome links to the relevant posts you mention.
Are we sure of that? It was also thought that nobody would enable Dual_EC by default, and yet somebody did. Even without an allocated extension number, somebody could have used an unassigned one for "testing".
Dual_EC was actively recommended, on the premise that deployed cryptosystems already depended on number-theoretic security guarantees, and that it made sense to minimize the number of things one's cryptosystem depends on.
Extended-random was proposed as a way to allow DoD to deploy TLS on its networks, rather than using custom crypto protocols. OpaquePRF in particular was explicitly about getting TLS to play well with the Suite B policies. It was never recommended as a deployment default.
You have probably never used a crypto tool that used either Dual_EC or extended-random, for what it's worth.
> You have probably never used a crypto tool that used either Dual_EC or extended-random, for what it's worth.
Dual_EC was the default PRNG in BSAFE-SSL-C for almost a decade. Shodan matches 1700 banners from that era that are online now. The chance that a person interacted with an https service using it in that time is quite high, I would think.
Thanks for following this up.
I think you are right. RH SWS was EOL before the Dual_EC default switch was made. If the only public banners for this product are from before that time, I will concede that Dual_EC likely never saw use on the public Internet.
"Never" is a bit ambitious. There are hundreds of implementations out there. I wouldn't be surprised if some of them were NSA-sponsored Trojans as well. Oh wait...