
Key point:

>According to WikiTribune’s source, experts in the delegations have clashed over recent weeks and the NSA has not provided the technical detail on the algorithms that is usual for these processes. The U.S. delegation’s refusal to provide a “convincing design rationale is a main concern for many countries,” the source said.

So it's not just "We don't trust anything the NSA puts out." It's "The NSA is refusing to explain their algorithms beyond saying 'Trust us,' and we don't."




It's worth bearing in mind that the documentation issues here are basically process concerns more than they are substantive concerns. Both Simon and Speck are straightforward designs. Cryptographers are capable of evaluating a deliberately-simple lightweight ARX cipher!

But in real standards competitions, academic cryptographers bundle their designs with rationale essays and point-by-point explanations of how the designer mitigated attacks, like differential and linear trails. Standards groups didn't get that from NSA, and when academic cryptographers poked at the ciphers and asked questions about linear trails, the NSA designers got standoffish.

I think there's a subtext to all of this where the NSA is dismissive of, well, basically all academic cryptanalytic work. The converse — academics being dismissive of the NSA — didn't (I think?) use to be true, but might gradually be taking shape, so that the two groups are just mutually dismissive of each other.

So, where in the past the NSA got some deference that enabled them to submit standards proposals that didn't follow process, now the opposite is true, and academic cryptographers expect deference.

It's no tragedy. NSA brought this on themselves, and really, what we're "losing" here is kind of a marginal design anyways, right?

(I write this in the hopes that someone better connected to these issues will correct me on lots of it!)


> I think there's a subtext to all of this where the NSA is dismissive of, well, basically all academic cryptanalytic work. The converse of that, of academics and the NSA, didn't (I think?) used to be true, but might gradually be taking this shape, so that the two groups are just mutually dismissive of each other.

This isn't new. My paper about exploiting shared caches in Intel Hyperthreading as a side channel to steal an RSA key was rejected by the Cryptology ePrint archive "because it wasn't about cryptology", while some people in the computer security community dismissed it as "just a theoretical cryptography thing".


[My paper about exploiting shared caches in Intel Hyperthreading to steal an RSA key was rejected by the Cryptology ePrint archive "because it wasn't about cryptology"]

Why shouldn’t it have been rejected?

I can’t see how it was about cryptography either as they seem to define it based on the center of gravity of their papers: https://eprint.iacr.org/2004

Separately, if what you said about the security community downplaying your results as too theoretical was not just the occasional opinion of a maverick, then clearly that was incorrect and unfortunate in multiple ways.

Finally regardless of any of that, great work on your contributions. Nice insights and efforts, coming so early on in the lifespan of an important problem.


> I can’t see how it was about cryptography either as they seem to define it based on the center of gravity of their papers: https://eprint.iacr.org/2004

A quick search shows eight papers which have "side channel" in their titles, so I think it's a bit of a stretch to say that they don't consider side channel attacks to be cryptography...


I was trying to guess that 2004 was the most proximate archive year of their papers prior to when yours was rejected, hence that list: https://eprint.iacr.org/2004

Are you saying any of these papers make a significant argument about side channel attacks? Or are you saying there are eight papers that make some reference to it? If it's the latter, that's quite a big difference, and it's easy to see the logic of rejecting your paper based on its central theme.

I didn't notice any of the papers making a significant argument about side channel attacks. Maybe 2004 was not the most proximate year prior to take as sample data? Or maybe I'm just overlooking the eight you're referring to?

Btw I wouldn't begrudge you any wtf thinking if you had any. It would definitely suck to do good work and not get proper and timely recognition for it, especially when it could have sped up a solution or helped mitigate a real-life problem.

It’s just that to whatever degree this suckness happened, I can’t see how it was due to irrational or biased reasoning on the part of the Cryptology ePrint archive.


I'm saying there are eight papers on that list which have "side channel" in their titles. I assume, based on those titles, that the papers have something to do with side channels...


Are you claiming to have prior knowledge and academic precedent for Meltdown?


Basically, yes :-)

https://www.daemonology.net/hyperthreading-considered-harmfu...

http://www.daemonology.net/blog/2018-01-17-some-thoughts-on-...

The 2005 paper he presented is linked, in which he demonstrated such an attack and worked with the usual people to implement fixes.

In fairness to cperciva he clearly distinguished his work from Meltdown/Spectre - "These new attacks use the same basic mechanism, but exploit an entirely new angle."

I think that since the world was surprised by how bad it really was in practice, it's fair to say cperciva (as well as others) predicted the explosion, but not necessarily the timing or the blast radius.

There are, I am sure, many other papers in corners of the net that explain the next one to come bite us.

PS cperciva was the Security Officer for FreeBSD and tends to know more about this stuff than the average bear.

(Again HN shows its ability to have someone with truly detailed knowledge just one comment away.)

NB: I may have some details wrong; please correct if needed.


A couple of people do, if I recall correctly.


I definitely remember rumblings about "Branch prediction runs code outside normal execution? There's got to be a security hole there somewhere." That sentiment was common enough that it's certainly not hard to imagine someone sketching the shape of an actual attack with it before the detailed proof came down the line.


Speck and Simon have the benefit of simplicity. Like RC4, Speck could even be memorized. It's like 30 years of block cipher design has been condensed into the smallest possible algorithm.
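For a sense of just how condensed it is: all of Speck128/128 fits in a handful of lines. A minimal Python sketch (the rotation amounts, round count, and test vector are the published ones from the Speck paper; treat this as an illustration, not a vetted implementation):

```python
MASK = (1 << 64) - 1  # Speck128/128 works on 64-bit words

def ror(x, r):
    return ((x >> r) | (x << (64 - r))) & MASK

def rol(x, r):
    return ((x << r) | (x >> (64 - r))) & MASK

def speck_round(x, y, k):
    # The entire round: rotate, add, xor on x; rotate, xor on y.
    x = ((ror(x, 8) + y) & MASK) ^ k
    y = rol(y, 3) ^ x
    return x, y

def speck128_encrypt(x, y, l, k):
    # The key schedule reuses the round function itself,
    # with the round index standing in as the "key".
    for i in range(32):
        x, y = speck_round(x, y, k)
        l, k = speck_round(l, k, i)
    return x, y

# Published test vector for Speck128/128:
assert speck128_encrypt(0x6c61766975716520, 0x7469206564616d20,
                        0x0f0e0d0c0b0a0908, 0x0706050403020100) \
       == (0xa65d985179783265, 0x7860fedf5c570d18)
```

That the key schedule is just the round function again is a big part of why the design is so easy to hold in your head.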

Simplicity is useful. I've seen, on multiple occasions, bugs in more complex algorithms like ChaCha20. Test vectors don't help as much--or at all--when you're creating a bespoke CSPRNG that repurposes the core round functions, as in the OpenBSD and Linux kernels.

Moreover, if we're talking about backdoors, then code complexity--even just sheer number of lines of code--is the spy's friend. For more complex algorithms it would be more practical to trojan COTS and FOSS software to, e.g., substitute an operation so you'd still get the same logical output but lose side-channel resistance.
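As a concrete example of the kind of substitution described above: swapping a constant-time byte comparison for an early-exit one preserves the logical output while reintroducing a timing side channel. A minimal Python sketch (function names are my own):

```python
def leaky_eq(a: bytes, b: bytes) -> bool:
    """Early-exit comparison: logically correct, but its running time
    leaks how many leading bytes matched."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False  # bails at the first mismatch
    return True

def ct_eq(a: bytes, b: bytes) -> bool:
    """Constant-time comparison: accumulates all differences so the
    running time is independent of where the bytes differ."""
    if len(a) != len(b):
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y
    return diff == 0
```

Both functions agree on every input, so a trojaned build that quietly replaced `ct_eq` with `leaky_eq` would pass every functional test while leaking, say, MAC-comparison timing.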

I'm not an EE, but assuming all the standard reviews happen, I'd much prefer that hardware vendors use something like Simon. Hardware acceleration is the very definition of a black box. Hardware developers can copy+paste+munge as well as any software programmer, but there's rarely any subsequent external review. The value of simplicity can't be overstated here. Because hardware products lack the extra layers of transparent, open review, we really want to minimize the potential for accidental screwups. The simpler the algorithm, the fewer degrees of freedom they have to be creative.

The smaller block size and smaller key size profiles were dubious, but that's a judgment call. The NSA probably sees so much bad crypto out there that the Simon & Speck designers could have earnestly considered them a step up. Note that the debate about these weaker profiles was never about choosing them over some stronger algorithm. Rather, the alternative argument was that if a hardware design was so low-power and so low-bandwidth that those profiles were useful, it would be better to not have any crypto at all so nobody would have a false sense of security. From an engineering perspective I think most would agree with the latter; but as a practical matter commercial vendors no doubt will sell half-baked crypto in such tiny devices, and without a known quantity we'll probably be worse off. In any event, those weaker profiles have already been ceded.

Assuming the algorithms continue to hold up to review, I think it would be a net loss to lose Simon & Speck. And, frankly, I'm more suspicious of the motivations of, e.g., Chinese and Russian security services.

As for why the NSA designers haven't been cooperating as much as the community has desired, it's anybody's guess. AFAIU, these designs were something of a 20% project for these engineers and they're probably not getting much support from management for pushing these designs. I don't even think they work in one of the departments these designs normally come from; IIRC they've claimed they tossed it over the fence to one of those secret departments and got a thumbs up. But who knows. And it shouldn't matter, especially for something so simple. All evidence suggests that the NSA no longer possesses extraordinary skill when it comes to cipher and hash design, so provenance shouldn't color anyone's judgment. Academic and private industry designers can and have worked for security services, too.


Low-powered hardware? You can give it more power. Why should security be sacrificed at all by IoT devices just to make them cheaper? Make them secure in the first place, period.

With today's speed, I'd go with a traditional Feistel cipher with moderately high security margins any time.

It's annoying that NIST approves standards with very low security margins - AES is an example of an algorithm with unnecessarily low security margins, for instance. Speck and Simon are even worse in that respect.
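For context, the Feistel structure mentioned above has the nice property that decryption is just encryption with the round keys reversed, so the round function never has to be invertible. A toy Python sketch (the round function `f` below is a made-up placeholder, not a real cipher):

```python
def feistel_encrypt(l, r, round_keys, f):
    # Each round: swap halves, mix the old right half into the left via f.
    for k in round_keys:
        l, r = r, l ^ f(r, k)
    return l, r

def feistel_decrypt(l, r, round_keys, f):
    # Inversion walks the rounds backward with the same (non-invertible) f.
    for k in reversed(round_keys):
        l, r = r ^ f(l, k), l
    return l, r

# Placeholder round function on 16-bit halves; a real cipher
# would use something far stronger and far more rounds.
f = lambda x, k: (x * 31 + k) & 0xFFFF
keys = [0x1A2B, 0x3C4D, 0x5E6F, 0x7081]

ct = feistel_encrypt(0x1234, 0x5678, keys, f)
assert feistel_decrypt(*ct, keys, f) == (0x1234, 0x5678)
```

Adding security margin here is as simple as extending `round_keys`, which is part of the appeal of the structure.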


> Low powered hardware? You can give it more power.

That would mean a shorter battery life. Not all of IoT is mains-powered.


Simplicity also reduces defense in depth...


That they think this has even a tiny chance of happening after Snowden is really surprising. I guess we really need to make sure people don't forget about that going forward.


They brought this on themselves.

The root problem is they have two conflicting missions: one is to help USA secure itself and the other is to read all comms worldwide.


The way it's supposed to work is that they try to have the best codebreaking, and then they want everyone else to use the second-best codes so no one else can crack them. If they aren't the best, the plan can't work, and you get junk like the Clipper chip (or whatever it was called).


Anyone who falls for this kind of reasoning is incredibly naive.

If you ever hear that somebody has a plan to have the best offense and make sure that everyone else has the second-best defense so they're the only ones who can do something bad, the first thing that should come to your mind is that they are planning to do something bad. It's not a "will they" situation. It's a "can they" situation. If they have the capability, they will do it.

That is a terrible plan. And don't tell me that they are on our side so it's okay. Secretive, crypto-state actors are on their own side. Ask yourself: if this power eventually falls under the unilateral control of the executive branch, will that always be a good thing under any conceivable future administration? Believing the justifications is extremely dangerous.


This seems like a very easy problem to fix. We can split the NSA in two, so the defensive people are not tainted by the people trying to break into everything. The defensive people can then focus on trying to secure everything, everywhere, against everyone, including their former coworkers. Is this too naive?


I think the practical barrier in the way of this is that one of the main ways the NSA “invents” new Suite A ciphers is by the cryptanalysis of cryptosystems of foreign militaries. The “defending the US” job is a consequence of the “attacking others” job; the crypto experts got to be crypto experts by breaking rival crypto.

This is also, in large part, why these cipher suites are classified—not because declassifying them would make them easier to cryptanalyze; but because it would tip off foreign powers that the US knows how their crypto works!

(And this is, of course, just as true of other weapons-systems development projects as it is of state-run cryptosystem developments.)


I don't think it's naive and would likely be better than the current system where the poachers are way more heavily supported than the gamekeepers.

I mean it's not like division of intelligence agencies is anything new (FBI for counter-intelligence, CIA for foreign-intelligence etc).

Of course, to make it work you'd need a watchdog with actual teeth, and historically the NSA has regarded oversight as, well, something they don't need or want.


It is a little more complex than that - they also need to figure out what codes the US government uses for classified secrets that we don't want other countries to read. It is not clear how they resolve the inherent conflict in these goals.


For these ciphers, it seems less likely that the NSA has a backdoor that no one else could find. Notably, in the case of Dual EC there were recommended constants (the points P and Q) chosen by the NSA; that was easy to backdoor for anyone who knew how those points were generated.


More importantly, pretty much the whole point of a PKRNG is to make the random state recoverable. It's not as if competing RNGs have designs that enable the kind of backdoor Dual EC does. That's what was so weird about it, and why there was some doubt about what it was --- not doubt that people should use Dual EC (of course they shouldn't, and it's been amazing to see companies like RSA and Juniper actually adopt it; the cryptographic incompetence behind those decisions was shocking), but that NSA could really be using such blatantly awful tradecraft.
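To make the trapdoor concrete: Dual EC publishes two constants P and Q, and anyone who knows the secret scalar relating them can recover the internal state from one raw output. Here is a toy analogue in Python, substituting modular exponentiation for elliptic-curve point multiplication (all constants are invented for illustration; the real Dual EC works over a NIST curve and truncates its outputs):

```python
# Toy analogue of the Dual EC trapdoor. NOT the real construction:
# exponentiation mod p stands in for scalar multiplication on a curve.
p = 2**61 - 1              # a Mersenne prime, chosen arbitrarily
Q = 3                      # first public constant
d = 123456789              # the designer's secret trapdoor
P = pow(Q, d, p)           # second public constant, published alongside Q

def step(s):
    """One generator step: new internal state, one public output."""
    s_new = pow(P, s, p)            # state update
    return s_new, pow(Q, s_new, p)  # (state, output handed to applications)

s = 42                     # secret seed
s, r1 = step(s)            # attacker observes r1...
s, r2 = step(s)            # ...and wants to predict r2

# With the trapdoor: r1^d = Q^(d*s1) = P^s1, which IS the next state.
s2_recovered = pow(r1, d, p)
r2_predicted = pow(Q, s2_recovered, p)
assert s2_recovered == s and r2_predicted == r2
```

The point of the sketch is that the outputs look fine to everyone else; only the holder of `d` can walk forward from a single output to every future one, which is exactly the property that made Dual EC's adoption so alarming.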


I don’t know anything about how the NSA interacted with ISO, but it is worth mentioning that the NSA has material explaining these ciphers:

* https://csrc.nist.gov/csrc/media/events/lightweight-cryptogr...

* https://eprint.iacr.org/2017/560.pdf

* https://www.nist.gov/sites/default/files/documents/2016/10/1...

Those include statements about how far cryptanalysis has weakened the ciphers, which the NSA claims is roughly what they had expected during design.

If the NSA published its own cryptanalysis, would you believe it, or would you assume they had told less than the whole story? What if they paid an academic to publish cryptanalysis (“of course he would say that, he was paid $X by the NSA!”)? The NSA appears to be in a catch-22 here.


I think when someone like the NSA provides you an algo you either decide you can’t trust them or need to ask some heavy hitting questions to make sure it’s not broken somewhere along the line for their benefit.

I’d opt for not trusting them, but even if they did provide some details elsewhere I’d imagine ISO had some questions the NSA didn’t feel like answering...


But we can also assume that everyone else working in security has their own other bias: Chinese and Russian services must be at work too.


The nice thing about mathematics is that truth and falsity don't derive from trust or authority.


Though, the NSA did lose the keys to their van full of fun toys to the Shadow Brokers just a couple years ago. In my mind that adds 'incompetent' right there next to 'evil/criminal' on the list of reasons they are untrustworthy.


I think that speaks more to the vastly more difficult task of playing defense than offense.

There's a great dad joke that relates: if you have a boy, you only have to worry about one little prick, but if you have a girl, you have to worry about all the little pricks out there.


One could reverse genders in that joke, but many would call that version sexist ;) And both versions are, in truth.


It's like a burglar offering to change your locks. And promising he doesn't have a copy of the key. But you aren't allowed to check. Just trust him.


On the other hand, an honest burglar is probably the most reliable source for all the known exploits and can judge a good lock when they see it.


Yes, but the NSA is hardly honest.


However, Speck is out in the open; the specification is public. It's surprisingly simple to implement.

Too simple, as some cryptographers would say...



