To summarise: the authors speculate that the NSA invested a lot of resources in breaking Diffie-Hellman key exchange for certain 1024-bit primes. Just a few hardcoded 1024-bit primes were used in the majority of VPN handshakes and a significant minority of HTTPS and SSH handshakes worldwide in 2015. This would give the NSA the ability to recover the shared symmetric key used in these encrypted communications, and therefore decrypt them.
Since then several protocols have shifted towards preferring Elliptic Curve Diffie-Hellman key exchange, which doesn't suffer from the same attack, or at least towards using larger primes for plain DHKE (1536-bit and above). However, I don't know the extent of this - a lot of VPNs, at least, are still using the old, weak DH groups.
Bingo. Garbage primes, either pushed to vendors by NIST or paid to be included as defaults. The absolute arrogance of the authors of the leaked Snowden documents was probably the most powerful driver of things like DH parameters that roll every few days, and Ed25519, which flat out rejected the premise that NSA-blessed primes were at all trustworthy.
Fast forward to today, and devs/cryptographers absolutely threw the NSA and CIA out on their asses when their argument for SPECK in the kernel hinged on the seemingly ironclad credibility of "it's classified."
Corporate players will still sneak backdoors; it's what they do. But the real blow to the intelligence community is the loss of default trust and the active denial by the open source community.
My understanding is that it wasn't that the primes were bad, but that they were hard-coded, never changed, and the bit size was small enough to make attacks based on that practical.
I'm on the fence about speck. It's so simple that there isn't much room in there for a back door, meaning that if there is one it implies that the NSA knows something we don't about ARX ciphers.
If the NSA knows something we don't about ARX, then could they also know something about ChaCha?
Your understanding is correct. Before these leaks, when DH was widespread, you could generate your own dhparams file for DH connections, both for sshd and for TLS. It was advised to do so in corners that understood what that meant. It's just that many people didn't bother, so the NSA took advantage.
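For anyone wondering what "generate your own dhparams" looks like in code, here's a minimal sketch using the pyca/cryptography package (the classic CLI equivalent being openssl dhparam); the 2048-bit size is just illustrative:

    from cryptography.hazmat.primitives.asymmetric import dh
    from cryptography.hazmat.primitives import serialization

    # Generate a fresh 2048-bit group instead of shipping a hard-coded default.
    # This is slow (can take minutes) - which is exactly why people didn't bother.
    params = dh.generate_parameters(generator=2, key_size=2048)

    pem = params.parameter_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.ParameterFormat.PKCS3,
    )
    print(pem.decode())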
The parent comment to yours gets a bit excitable about Ed25519 being a prime-rejecting miracle cure. Indeed, 2^255 - 19 is itself prime. Elliptic curve cryptography still relies on large prime-order subgroups for its security, and that's not going away unless and until quantum computers get good enough to break it. The main advantage of Ed25519 is that it is fairly hard to implement badly. No nonce (k) randomness issues as with ECDSA (you can also use deterministic ECDSA to get this property). Constant-time curve constructions. Etc etc.
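To illustrate the "no nonce randomness" point, a tiny demo (a sketch, assuming the pyca/cryptography package): Ed25519 signatures are deterministic per RFC 8032, so a broken RNG at signing time can't leak the key the way a repeated k does in classic ECDSA.

    from cryptography.hazmat.primitives.asymmetric import ed25519

    key = ed25519.Ed25519PrivateKey.generate()

    # The nonce is derived from the key and message, not from an RNG,
    # so signing the same message twice gives the exact same signature.
    assert key.sign(b"same message") == key.sign(b"same message")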
The same team who designed SIMON and SPECK designed SHA-2, so that's worth thinking about.
My personal opinion is that there's nothing wrong with SIMON and SPECK. However, there's also not a lot wrong with AES, and so there is little reason to downgrade to something else in almost all use cases. The reason Germany and Israel kicked them out of ISO was likely entirely politically motivated. I mean, if I were them and I'd maybe been stung by some of the Snowden leaks, e.g. by being pushed towards weaker crypto, or had lost trust in the NSA, I'd probably do the same.
The argument for lightweight crypto is not at all clear. I've heard of people implementing AES in less than 1000 gate equivalents, with the target in the NIST lightweight competition being 2000 GE. There is somewhat more justification for lightweight hashing, because of, say, the large internal state of Keccak not being particularly embedded-friendly.
But we're talking about super-constrained devices here: RFID tags, or encrypted flash storage to go with a Cortex-M.
It's entirely justified to remove SIMON/SPECK from the Linux kernel. To run vanilla Linux you need an MMU, and the Cortex-M series of processors that power most embedded stuff don't have one. Even then, they're actually powerful enough to do full TLS anyway if coded properly (BearSSL, for example).
The reason you'd want a super-constrained cheap ARX cipher in Linux isn't because Linux runs on the tiny devices but because Linux talks to the tiny devices.
But the politics meant it doesn't matter in the end. People convinced themselves the NSA backdoored XOR, or that the number five is secretly composite, or whatever, and you can't use logic to fix something like that.
True true, it does. I'm not sure to what extent it is needed in kernel but I'm sure there are some use cases.
I was aiming, if perhaps badly, at the crypto really, as I'd seen some arguments that we needed it in Linux to run on constrained devices. I don't really have a problem with including it in the kernel to talk to other devices, but I would like to discourage the idea that "lightweight" means "better" when we have perfectly fine AES for your desktop needs (someone somewhere will encrypt their disk with it via LUKS for no logical reason).
Yes indeed. The arguments against it are entirely political.
Sure, but I do not think that "48-bit blocks are a bad idea for encrypting gigabytes of data", "70% of Simon+Speck have been broken", and "the algorithm designers refused to explain the reasoning behind some choices regarding the algorithm and attacked us instead" are political attacks.
We probably have different ideas of what "political" means here. I simply mean that the motivations for dismissing SIMON and SPECK likely have more to do with it being the NSA that proposed them and the Snowden leaks at the time than any particular backdoor in SIMON or SPECK. I am not saying that is a bad thing - it is a line of reasoning and a perfectly valid one. I just think we should be clear that is the reason.
I'm not sure "attack" is helpful in this context. I'm not accusing anyone of attacking anyone else.
As an example, my view that lightweight crypto is largely useless is somewhat political, based on practical concerns. We have working AES accelerators and AES instruction sets in today's chips that we can exploit, and for most of my company's customers that's absolutely enough. I have a view; others disagree and have interesting counterarguments.
On the arguments quoted: there are multiple modes of the algorithms, and a constrained device is unlikely to transmit multiple gigabytes of data using the 48-bit mode, mostly because if it can only cope with the 48-bit mode I'd be skeptical it can transmit multiple gigabytes of data at all. ARM pointer authentication can use as little as 3 bits of QARMA, although according to Qualcomm the Linux kernel uses 24-bit PACs. This is entirely fine, because the point is to require a low-latency check that is reasonably hard to forge over the short window during which exploitation is possible, not to protect gigabytes of data. This is an example of an actually valid use case of lightweight crypto.
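For context on why small blocks matter at volume, a back-of-envelope birthday bound (rough figures, assuming the usual 2^(n/2) collision estimate):

    # With 48-bit blocks, CBC/CTR-style modes start leaking structure once
    # you've encrypted around 2^(48/2) blocks under one key (birthday bound).
    block_bits = 48
    blocks_until_trouble = 2 ** (block_bits // 2)               # ~16.7M blocks
    bytes_until_trouble = blocks_until_trouble * (block_bits // 8)
    print(bytes_until_trouble / 2**20)                          # ~96 MiB, nowhere near "gigabytes"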
Second argument: AES is 100% broken. Yep. Key recovery over full-round AES. If I presented just that fact I could make all kinds of arguments for not using AES, but of course I'd be neglecting to mention that it only improves things by a factor of about 4, and that 2^126 computations are still required - impossible for 100 NSAs all joined together. The time complexity of the attack on 70% of Speck48/72's rounds is 2^71.8, so... likewise only a marginal improvement, and that isn't even full-round Speck like the AES attacks.
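Putting numbers on "a factor of about 4" and "marginal", using only the complexity figures quoted in this comment:

    # Full-round AES-128: key recovery ~2^126 vs 2^128 brute force.
    # Speck48/72, 70% of rounds: 2^71.8 vs 2^72 brute force.
    print(2**128 / 2**126)     # 4.0x speedup - still utterly infeasible
    print(2**72 / 2**71.8)     # ~1.15x - likewise marginal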
I wasn't at the standardization meetings, so I can't really speak to that, but if the NSA behaved badly then they probably ought to have known better given the fallout from Snowden. I'm not American and I don't owe them any special deference - I also don't trust them, and I'm not advocating you should, either. I'm simply saying that, to the best of my knowledge, the algorithms seem fine. We've made this tradeoff to trust SHA-2 before, and to circle back around to the original purpose of this article, I would be quite surprised to learn there is a backdoor in SHA-2. This is mostly based on the fact that there is a huge motivation, in the form of embarrassing the NSA, to find either weaknesses or a backdoor in any publicly acknowledged NSA algorithm; consequently, putting "made in Ft. Meade" on an algorithm is a surefire way to ensure it gets a lot of cryptanalysis.
Anyway, let's leave it there; we're diverging somewhat from the original topic.
Rotating DH groups periodically is difficult. SSHv2, for example, has support for this. It takes a lot of computation to generate new groups, but then how do the client and server agree on a group? Well, the server tells the client, and the client has to like it -- i.e., the client has to trust the server's group. This isn't better than just having a bigger nothing-up-my-sleeves group to begin with.
That, ultimately, has been the solution the community landed at: nothing-up-my-sleeve curves for ECDH and EdDSA.
The nothing-up-my-sleeve part is all about setting and agreeing to obvious, rigid guidelines for generating and selecting curves before doing the selection; then you can see that a curve was generated and selected without any hidden agenda. We have several of these now that are generally thought to be secure, though, of course, it's hard to say for sure. It's a pretty good outcome.
Rotating the group was never the point or the problem. The problem was that Linux distributions typically shipped a 1024-bit hard-coded DH parameters file and most people didn't bother to change it.
Groups with low security remain the default for things like IPsec configurations in firewalls and so on, even now.
It's perfectly fine to keep a fixed DH parameter choice. In some sense that's exactly what using ed25519 is. The point is that the strength of the resulting group is sufficient that even the NSA with their budget can't break it, even targeting that one curve. It's just that elliptic curves are much more efficient than picking a single 2048-bit or 4096-bit DH group, so why bother with that when we can pick a 256-bit EC group instead?
The nothing-up-my-sleeve part isn't particularly relevant to this. The choice of elliptic curve is quite involved, so it doesn't make sense to simply spit out a random field and random curve coefficients. Moreover, given the server communicates a prime and a generator for the DH group, the client is quite free to do some primality testing and reject the DH group if it wanted to.
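As a sketch of what such a defensive client could do (a hypothetical helper of my own; the primality check uses sympy's isprime, and the 2048-bit floor is just one reasonable policy):

    from sympy import isprime

    def acceptable_dh_group(p: int, g: int) -> bool:
        """Reject obviously weak server-supplied DH parameters."""
        if p.bit_length() < 2048:      # small groups are within a big agency's reach
            return False
        if not isprime(p):             # p must actually be prime
            return False
        # Prefer safe primes: (p-1)/2 prime means no small subgroups to hide in
        return isprime((p - 1) // 2) and 2 <= g <= p - 2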
Also, there's a nice body of work showing NUMS can be manipulated. You might be interested in https://bada55.cr.yp.to/
I think you're muddling the concepts a bit here. Nothing up my sleeve isn't the main and only argument that justifies the move to ed25519; the original group wasn't backdoored in the first place, just potentially within range of the NSA.
> If the client doesn't trust the server, the crypto protocol is a little irrelevant.
What? No. The protocol is the only thing that is relevant. Peers don't generally trust each other a priori at all. They trust the protocol. If they can authenticate each other within the bounds of the protocol, then they trust each other. If one party has reason to distrust a certain protocol, then it should not be used as a basis for establishing trust. If the two peers can't agree on a protocol: stalemate. If I compromise your server and only serve weak protocols, a responsible client won't authenticate me, whereas a vulnerable client would take my word that my protocol is secure.
If you don't trust a server, then any strong protocol that results in a secure shared secret or session key is trivially sidestepped by the server intentionally leaking those secrets or keys.
It's subtle, but basically the NSA supported primes while saying nothing about hard-coded primes. They knew that if we primarily used prime-field DH we would probably use hard-coded groups for the majority of traffic, which they could then decrypt. But of course they never said that directly. They just pushed in that direction.
That's absolutely not true; there are tons of deeply flawed crypto systems that have been developed without any input from the NSA. If you are not an expert, you are virtually guaranteed to create such a flawed system.
Conversely, the NSA has pushed AES very hard, to the point that there are "Suite B" implementations approved for safeguarding classified information. Nobody has found a backdoor in any of those.
A novel (to me) thought: if we count on the government not to break our crypto (edit: originally said "break the law"; I don't want to comment on the legality of NSA actions here, it strikes me as off topic), we are already screwed. A million other crypto discussions have surfaced this: if someone wants your keys, they can break your knees (rhymes, too). If the VA stops paying benefits, unless the organization that funds and controls them steps in, a lot of people are very screwed. While it's admirable to ensure crypto at every level (especially to address threats from other states or private individuals), ultimately, as long as all three branches let the NSA eat our proverbial lunches and sneak under my mattress (gag orders, huge budget, sealing court records, etc.), secure crypto is going to be a Sisyphean task at best. Obviously there are good reasons to push the stone upwards, but the fitting solution to problems like infiltration and gag orders seems like it will be a political one, not a technical one.
1800: The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no warrants shall issue, but upon probable cause...
2000: lol tech lets us read it all
2025: Oh no, crypto is good now? They want us to get a warrant and use non-dragnet surveillance to pursue leads? THE HORROR!
> There was once a far away land called Ruritania, and in Ruritania
there was a strange phenomenon -- all the trees that grew in
Ruritania were transparent. Now, in the days when people had lived in
mud huts, this had not been a problem, but now high-tech wood
technology had been developed, and in the new age of wood, everyone in
Ruritania found that their homes were all 100% see through...
> One day, a smart man invented paint -- and if you painted your house,
suddenly the police couldn't watch all your actions at will...
> Indignant, the state decided to try to require that all homes have
video cameras installed in every nook and cranny. "After all", they
said, "with this new development crime could run rampant. Installing
video cameras doesn't mean that the police get any new capability --
they are just keeping the old one." [...]
This story didn’t make enough sense to really nail the issue. I’m surprised this parable is popular.
Why would you build houses out of wood if the wood is transparent?
It would make more sense if it were discovered years later that the police had special polarizing binoculars that let them see through the wood, which no one knew they had until it was too late and all the old mud huts had been phased out.
Also, describing someone as a “paint technologist” when they are anti-paint is a confusing decision for a parable: a story intended to make a complex issue easier to understand. I initially assumed they were going to have a role analogous to a cryptographer, not an anti-privacy provocateur.
Perhaps having the police attempt to foil insidious flower thieves or dastardly rebel plots to undermine the government-controlled potato cartel would also be more in the spirit of a parable. The use of awful real-life crimes (against children, even) makes this a difficult one to share.
Because it's cheap. Because transparency may at times be useful to the owners. Because construction costs are lower. Because it's the fad or fashion. Because building regulations require it. Because of convenience. Because the transparency was an unanticipated side effect. Because it gives houses capabilities previously unattainable. Because the wood-house vendors have driven (by competition, monopolistic methods, patent and IP restrictions, superior lobbying, advertising) all the mud-house vendors out of business. Because ....
A parable is, by definition, metaphoric language. "Mud" and "wood" and "paint" and "transparency" are metaphors for analogue physical vs. digital electronic data, crypto, and surveillance.
Why do people choose, or find themselves with no alternative than digital data systems? There are numerous reasons. Many resemble, in some fashion, the list I've given above.
Reading parables overly literally is a category error. All metaphors melt if pushed hard enough.
The big thing that changed was in regard to long range messages.
The biggest problem in the past was secure coordination. For example, the Confederate battle plans before the Battle of Antietam were discovered wrapped around cigars.
Thus, the calculus was that people could keep their stuff private, but if they were trying to coordinate with others, communications were able to be cracked.
However, now people can do secure, encrypted communication easily with dozens of people worldwide. This changes the calculus of what they can accomplish. A person with a locked briefcase at home is pretty limited in what kind of stuff they can coordinate. A person securely communicating with dozens of people worldwide can accomplish quite a lot without discovery.
Because the timeline is a simplification and because I didn't see (2015) in the title and thought "it's probably reasonable to speculate this mess will be cleaned up in the next 5 years." Joke's on me -- it probably did get cleaned up in the last 5 years, the resulting cutoff of illegal dragnet surveillance probably caused the panicked rhetoric about "going dark" we've seen from the intelligence community, and I probably would have been better off putting 2020.
Calling 2000 the "year of dragnet surveillance" has similar issues, because you could argue that dragnet surveillance really got started in the telegraph era and scaled during -- was it WWI? If I think too hard about these dates I'm going to wake up tomorrow drooling on the keyboard from a Wikipedia binge.
I think it was on a post about FBI surveillance that someone pointed out that the big advantage of all of these surveillance methods is not that they catch cases that would otherwise be missed, but that they are much cheaper than lawful interventions. I would venture to say that if we spent more on pursuing issues that actually matter (not most drug crimes), we would be able to do better while ensuring a greater general level of privacy than we have right now, while also solving more problems.
edit: I realize I wasn't clear what the more serious problems I'm referencing are. They are human trafficking, murder, domestic abuse and so on.
I'm saying that law enforcement probably doesn't need such intrusive surveillance, and that better results, without violations of civil liberties, could be achieved if we spent more money on it (or spent our current outlays better).
c.1850: The feds camp out at AT&T telegraph offices and transcribe every single message with no warrants. Or something like that. It's in The Puzzle Palace.
I like the analogy but I don't think it works. Basically the bad guys get a huge advantage (secure, in fact unbreakable communications over long distances) while the good guys have to live by 1800s rules?
I think every new impactful tech requires a reevaluation of laws to keep the balance from tipping in the wrong direction.
Yes, but so far it has been an unchecked power grab. The balance has tipped in the wrong direction, because both extremes are the wrong direction. That's how balance works. If we push back against the power grab and come up with a compromise, then we can call it a reevaluation and throw around the word "balance."
I also object to your wording around "bad guys" and "good guys." When Joe LEO wants to use the state surveillance apparatus to spy on his ex, is he a "good guy"? When the FBI mailed MLK Jr a suicide letter, was MLK the "bad guy"?
I think it's still worthwhile to make it as expensive for them as possible. Sure, they have supercomputers that can break any crypto a normal citizen is using. But can they break every piece of encrypted data that everyone is using? Maybe not. And if so, they're going to spend a lot of money on hardware and electricity to do so (especially as Moore's Law putters out). Now, that doesn't apply to cases when they can get their hands on a centralized key through back-room court orders. But we do also have the ability to avoid using at least some of those products in the first place.
Consider that we have openssl and libressl (a fork) for open source ssl libraries (if I'm missing any please correct me). With gag orders and infiltration, is it unrealistic to assume that the NSA at least could backdoor both of them? Also consider things like adding bogus/weak crypto to NIST, intel ME and other hardware backdoors. Yes, with enough collective effort there are ways around all of it (option a), but ultimately with that much collective time and money it might be more effective to vote, advocate, and advertise (option b). Furthermore, if you go option a, anyone who poses a threat to whoever holds the reins can still be successfully abducted. (edit: and free speech/safety in political dissent is one of the major prizes everyone has their eyes on here)
Presumably, there's at least one US citizen who is on the LibreSSL "core team." With a gag order, get them to write a subtle backdoor (see [0]), and add it to the codebase however they normally do. The developer is presumably trusted, so their code probably gets minimal code review, at least compared to some unknown person from the NSA trying to add it themselves. Alternatively, the US and Canadian governments are close enough that the NSA could conceivably get Canada to do that instead.
Wouldn't that require an NSA operative in that role? I'm probably mistaken, but what powers does the NSA have to force programmers to write code? National Security Letters, the ones that come with gag orders, are for requiring the handing over of surveillance data aren't they?
Once you write significant software, it's common for the NSA or police to offer you collaboration (e.g. how CloudFlare started with Project Honeypot, how Facebook got investment from the CIA, or Crypto AG, which got investment, etc.).
The intelligence services don't have to coerce you through a gag order; they can just make a generous offer.
Maybe one of the LibreSSL authors has financial problems. In that case, what is better for him? Accept a juicy 5M USD consulting contract, be a nice boy, and solve all the problems and beyond; or be the guy who discovers what it means to be sued into oblivion on frivolous charges (the NSA will never charge you, but they can share information).
If US companies comply with the NSA's orders, it's because they don't want to end up sacrificed.
If the DoJ, for example, needs help from Facebook to capture documents from Airbus and hand them to Boeing, do you think Zuck will give away all his shares, stock, money, and family, or just collaborate?
The CEOs will fight the decision as long as they can (because of the PR risk), but at some point, if the pressure is strong enough, they won't be able to refuse (and I respect their choice to protect their families and, by extension, the families of their employees by protecting their jobs).
If someone knows your weaknesses, they have unlimited power over you.
It doesn't matter if it is a legal or illegal organization.
> Maybe one of the LibreSSL authors has financial problems. In that case, what is better for him? Accept a juicy 5M USD consulting contract, be a nice boy, and solve all the problems and beyond; or be the guy who discovers what it means to be sued into oblivion on frivolous charges (the NSA will never charge you, but they can share information).
Yeah, that's my first thought: why would they use a gag order when they can just bribe someone? It doesn't even have to be $5MM; a few $100k here or there could change someone's life.
And then there are actual government contracts, e.g. how the US can dangle GovCloud or other lucrative contracts over Facebook/Google/Red Hat/Microsoft/whoever, with some implied or explicitly demanded "add this to the stack..." requirements.
> if someone wants your keys, they can break your knees
This doesn't seem relevant here. The government can make all the secret attempts it wants to read your communications remotely. If they want to take the strategy of torturing you until you hand the information over yourself, at a bare minimum they need to be willing to admit to doing it. It's not a threat to worry about from the US government.
I think that if a government agency thinks it can get away with wiretapping on the PRISM scale, it or another government agency will also think it can get away with abductions, torture, and killings.
The US government is known to kidnap people [1], torture people [2], and kill people [3]. It wouldn't be that surprising to learn that some people were victims of all three.
> It's not a threat to worry about, from the US government.
You are, first of all, assuming the worries are coming from US citizens. People outside the US can obviously worry about the US government or its allies breaking their knees; it is a widely documented fact of life in many US war zones. We know that the US has officially tortured people it suspected of terrorism for information (Guantanamo, CIA prisons in Eastern Europe).
Second of all, we also know from the past that the FBI has been involved in campaigns of infiltration, blackmail, threats, incitement to violence, and almost certainly assassination against civil rights organizations and leaders (COINTELPRO). This was investigated at the time by Congress and new laws were put in place to prevent this type of behavior, but we have no guarantee that it hasn't resurfaced in some form, especially in the current political climate (not least given the "war on terrorism" started by Bush, escalated by Obama, and enthusiastically supported by Trump).
Overall, I think it's perfectly reasonable to fear the US government may physically force you to give up secrets, obviously so if you are not a citizen, and quite likely even if you are one.
Realistically speaking, if your scenario is that the US government might capture and torture you for encryption keys, then your number 1 priority should be physical security of your communication devices and the people who operate them. Your number 2 priority should be preventing other side channel attacks, e.g. the operating systems you're running on your endpoints and things like the Management Engine on Intel chips and the equivalent on AMD chips, as well as other possible backdoors in the hardware and your supply chain. Remember, the NSA intercepts mail-order hardware and modifies it and the CIA runs hardware companies.
Once you have taken care of these priorities, you can start worrying about the soundness of your encryption. Inventing safe encryption if you're not overly concerned about performance is really not hard, even experienced laymen can do that by using existing cryptographic primitives. You can even make it quantum safe. (It should be, in the described scenario.) If you think that is not within your capabilities, then you're probably right, but then you've already failed at task 1 and 2 anyway.
For the remaining 99.9999% of the population this is not a realistic threat scenario, and it's best to use a well-established cryptographic library with the recommended defaults.
This kind of thinking is what inspired my (root) parent comment. If we don't like our government putting people in the "capture and torture" risk-management scenario, we need to act politically to prevent that from happening, because there is very little we can do technically to prevent such a thing.
The way magic numbers and curves are selected is not a naive process.
The obvious backdoor in Dual EC-DRBG is a good example, and you have companies helping the NSA in very extensive ways (e.g. Crypto AG).
It's just logic: the goal of the NSA is to protect the interests of the US. If that involves pushing rigged or weak algorithms for the greater "good" (at least from a US perspective), they'll do it.
The real question I have is: where is this money going? I've looked at positions at the NSA, and I know at least one person who works there. They aren't making heaps of money.
How are they recruiting so many qualified individuals?
From what I understand: tech, mission, and tickets.
Say what you want, but the NSA probably has some of the most interesting tech problems you'll encounter. Part of that is due to the unique job they do -- by definition, they're doing things that nobody else in the US is allowed to do, so they need people to solve problems that don't exist elsewhere. Part of that is due to a huge amassed body of knowledge particular to their area of work, which allows them to work on things that other people and organizations simply don't even know about.
Also: the NSA has supercomputers that make your eyes bleed. A buddy of mine who worked there says that even their power bills are classified. Back in the day, they developed a lot of highly specialized, one-of-a-kind hardware. Cray processors had a popcount instruction that was supposedly put in place at the NSA's behest. They can afford to get specialized treatment and demand specialized features from folks who build their computers. If speculation regarding their signal collection capabilities is true, then they also have to have serious signal processing and communications tech, too.
The NSA has interesting challenges and cool tech to work with. That can attract a LOT of people.
Another issue is mission. Look, you may not agree with the NSA's techniques and methods. But at least the original goal of the NSA -- breaking codes used by foreign adversaries to gain an intelligence and military advantage -- is pretty standard military fare, and not inherently evil. SIGINT played a major role in shortening WWII, and who knows what may have happened since then (since it's all classified). The folks I know who work there are proud of their work and believe they're doing the right thing. I'll note that these are GOOD people whom I respect, which leads me to believe that the NSA isn't just moustache-twirling and evil cackling. Some people work there because they believe they are genuinely serving their country.
Finally: tickets. Several of the folks I know have gone to the NSA, worked there a few years, then jumped ship to take industry jobs with clearances. Government contractors pay a premium for cleared workers, and if you show up with a TOP SECRET clearance in hand, your job interview chances just got better. It's a good way to set yourself up for decent job security and comfortable pay.
At a guess, it was likely a tradeoff between security and computational cost.
I used to do freelancing for an ICT magazine, and in the early-to-mid '00s I ran some numbers as part of an article I did on the limitations of applied cryptography. You can't do 1024-bit math on 32-bit or even 64-bit integers. Not without bignum libraries, which in turn make use of applied number theory. Under the hood, the schoolbook multiplication of two large numbers is an O(n^2) operation.
Performing an RSA or DH calculation on a decent desktop computer in ~2003 took about 20ms with a 1024-bit number, and 80ms with a 2048-bit number. IIRC the first was expected to be theoretically breakable in a decade or so, assuming Moore's Law held true. The second was thought to be computationally infeasible for at least 100 years. These days the professionally paranoid use 4096-bit keys. (Or, more recently, the nice 25519 curve, which is more compact.)
So raising the security margin from "reasonable" to "paranoid", back in the day, would have required 4x the hardware investment to serve the same amount of traffic, and it would have imposed severe latency issues for everyone. We also have to consider the fact that most commercial secrets have a shelf life of years to a couple of decades. So before "E2E for everyone" became a thing, there simply wasn't widespread interest for crypto that could protect personal secrets for a lifetime.
With regards to using cryptography for personal privacy, cypherpunks were well ahead of their time. It's only become a mainstream issue in the last 5 years or so.
Multiplying 1024 bits by 1024 bits the naive way on a 32-bit processor involves 1024 multiply operations. Even in 2003, they were typically pipelined with a throughput of one multiply per cycle[1], so you should be able to do that in ~1040 clock cycles. Which is 750 nanoseconds on a 1999-era Pentium 3.
Since DH key exchange isn't done on all the data, only to exchange keys, and no machine then or now was setting up 1 million encrypted connections per second, I don't see this as a performance bottleneck at all.
> Multiplying 1024 bits by 1024 bits the naive way on a 32-bit processor involves 1024 multiply operations.
That's 1024 32-to-64-bit wide multiply operations. On x86, where the regular mul is wide (it sets both edx and eax), that's the same as a regular operation. Architectures with separate MUL and MULH instructions would make it two instructions. If you have neither, it would require 3 multiply instructions plus a few shifts.
> Even in 2003, they were typically pipelined with a throughput of one multiply per cycle[1], so you should be able to do that in ~1040 clock cycles.
The inner loop is much more than a wide multiply. You have to load the two words you're multiplying (although one of them should already be loaded in the outer loop). Then you need two adds-with-carry and to store the results back. Next you need to handle the extra carry bit, which in the most naïve case is another load, add-with-carry, and store. And finally, you need the regular loop-management instructions.
That's 4 loads, 3 stores, 3 adds, and 1 multiply, repeated 1024 times. I don't know the exact port availability and timings of Pentium 3 processors, but if there's only one load/store port, and it's a 4-cycle-latency operation, you're looking at probably 25-ish cycles per inner-loop iteration, or about 25,000 clock cycles.
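For the curious, here's roughly the inner loop being counted, written out in Python with 32-bit limbs standing in for machine words (a sketch only; real bignum libraries are hand-tuned assembly):

    MASK = (1 << 32) - 1

    def schoolbook_mul(a, b):
        """Multiply two little-endian lists of 32-bit limbs."""
        out = [0] * (len(a) + len(b))
        for i, ai in enumerate(a):
            carry = 0
            for j, bj in enumerate(b):
                # one wide multiply plus adds-with-carry, loads, and a store
                t = out[i + j] + ai * bj + carry
                out[i + j] = t & MASK
                carry = t >> 32
            out[i + len(b)] += carry   # propagate the final carry
        return out

    # 1024 bits = 32 limbs, so the inner body runs 32 * 32 = 1024 times.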
I think RSA and DH call for exponentiation (modulo a prime), not just multiplication. So multiply that by ~1024 exponent bits, add some significant efficiency derating as per the sibling comment, and you get close to 20ms.
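This is easy to sanity-check with Python's built-in modular exponentiation (a rough benchmark, not the constant-time code a real library would use; the "prime" here is just a random odd number, which is fine for timing):

    import secrets, time

    p = secrets.randbits(1024) | (1 << 1023) | 1   # random odd 1024-bit modulus
    g, x = 2, secrets.randbits(1024)

    start = time.perf_counter()
    pow(g, x, p)   # square-and-multiply: ~1024 squarings plus ~512 multiplies
    print(f"{(time.perf_counter() - start) * 1000:.2f} ms")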
Harvey-van der Hoeven. And yes, it is. But this is an asymptotic cost, meaning you have to have absolutely enormous integers before the cost will be lower than with the old algorithms. In fact, the crossover is so large it is quite possibly out of the range of anything we'll ever practically compute.
In particular, crypto typically works with integers zillions of times smaller, so you certainly don't want the Harvey-van der Hoeven algorithm for that.
I am almost certain that no crypto invented today could withstand 100 years of technological progress (i.e. computing power, possibly even mathematics).
It is notable that the only historic cryptography that has lasted more than 100 years has relied on security-by-obscurity. There are still a lot of archeological secret messages nobody has been able to decode, sometimes despite pretty big efforts.
That assumes that those messages are decodable, which is not a given. For example it's not at all sure if the Voynich manuscript even can be decoded (it might be a hoax, glossolalia or similar).
Wouldn't it be easiest to just backdoor all clients, everywhere? And ideally you'd even do some basic threat analysis at the device to save you some compute resources.
Silent updates are extremely dangerous. This is why you can see some Israelis, for example, refusing over-the-air updates on their phone while traveling.
Does that mean the crypto used to authenticate the updates is compromised, or that the providers (app stores?) are? In either case, why is it safer in Israel?
Can't speak to the post, but "random Israeli citizens" are likely also "ex-military Israeli citizens," given the mandatory 2-year term of service for most of those who grow up there. So any OTA-update concerns the military has are likely common social practice as well.
How would that be easier? It's a lot easier to passively observe communication than to actually change the communication infrastructure of parties you have no control over. That's more or less the premise on which the NSA is built.
Any backdoor method I can think of has downsides that could be compensated for by a cryptographic backdoor, but I'm sure they have invested a lot in backdoors at every level.
I think in the long term we'll move away from relying on one algorithm only, like AES or DHKE, and combine them instead. So for key exchange we would use DH plus ECDH plus whatever comes out of the post-quantum NIST competition, and use all three as inputs to a KDF on both sides to produce the derived key. Attacks against any one of the algorithms won't turn into feasible attacks on the combination.
Combining two algorithms is very hard, and when you do, the result can become insecure (new attacks can be found). Even if you can combine them and the result is secure, it often becomes infeasible to use because the length of keys, IVs, etc. increases.
I didn't mean deep combination that changes how an algorithm works internally. Of course that would be a stupid idea. The algorithms would still work the same, would use separate random numbers, etc. You'd just combine them at the integration level, treating each algorithm like a black box.
Think of two servers first doing DH and then doing ECDH. You'd get two shared keys. This is inefficient because it has more roundtrips than running the algorithms' steps in parallel, but it helps with understanding.
These two secrets can then serve as inputs to your KDF, which derives a separate secret for each of the data ciphers you use: e.g. first AES with the first derived secret, then encrypt that with ChaCha20 using the second. As long as your KDF is preimage resistant, you can't get from one algorithm's key to another's.
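A minimal sketch of that black-box combination, assuming the pyca/cryptography package. For brevity both exchanges here are X25519; in the scheme described, one would be classic DH and another a post-quantum KEM. The point is that both shared secrets feed jointly into one preimage-resistant KDF call:

    from cryptography.hazmat.primitives.asymmetric import x25519
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF
    from cryptography.hazmat.primitives import hashes

    # Two independent key exchanges (stand-ins for "DH plus ECDH plus PQ KEM")
    a1, b1 = x25519.X25519PrivateKey.generate(), x25519.X25519PrivateKey.generate()
    a2, b2 = x25519.X25519PrivateKey.generate(), x25519.X25519PrivateKey.generate()
    secret1 = a1.exchange(b1.public_key())
    secret2 = a2.exchange(b2.public_key())

    # Combine at the integration level: breaking one exchange alone
    # doesn't recover the derived key.
    key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
               info=b"hybrid-handshake").derive(secret1 + secret2)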
And yes, length of keys and computation do increase. But I argue that this will be a cost worth paying at least in some scenarios. Especially in a time we make our CPUs slower by double digit percentages for all computation in the name of security.
Elliptic curves produce smaller keys which meant that addresses could take up less space and be more portable, so perhaps that was part of the motivation.