Open-source crypto is no better than closed-source crypto (kudelskisecurity.com)
83 points by throwawaymath on Oct 4, 2018 | 58 comments



The titular thesis of this article is that Linus' Law doesn't really apply to cryptographic security vulnerabilities.

To lend some authority to that (probably controversial) claim: the author of this article is JP Aumasson, a well-known cryptography researcher and engineer. There aren't many cryptographers who work in both theory and application the way Aumasson does. He recently wrote Serious Cryptography and has done professional cryptanalysis since 2012. He is also a co-author or primary author of several cryptographic algorithms, including BLAKE/BLAKE2, NORX, SipHash, and Gravity-SPHINCS.

In the course of establishing his argument he groups cryptographic vulnerabilities into four main categories. The basic idea is that vulnerabilities in cryptography require a different set of skills to find than "regular" security vulnerabilities do. They also have different requirements for exploitation. Aumasson's overall point is that the higher difficulty of finding bugs in cryptography for each category means there won't be a meaningful difference in security between open and closed source code.

And for what it's worth he's only talking about open versus closed source implementation here. He's not talking about Schneier's Law or closed source designs.


Did you mean Kerckhoffs's principle ("a cryptosystem should be secure even if everything about the system, except the key, is public knowledge")?

Instead of Schneier's Law ("any person can invent a security system so clever that she or he can't think of how to break it").


I did, that's right! Thank you for the correction, I should've double checked that :)


The conclusion seems sensible with regard to implementations of a given, secure algorithm.

But the title suggests a much wider claim: that closed source crypto in general, hiding everything from system design and primitive selection to implementation is just as secure as open source crypto, where the author expects and receives public scrutiny and must not rely on any obscurity. This wider claim still seems false to me.


The wider claim that "Open-source crypto is no better than closed-source crypto" points to a problem with the philosophy behind the open source development methodology, which makes promises it can't always deliver, such as "open-source software has fewer bugs than closed-source software because more people have access to OSS and to its code" (as the article author put it). And there are significant problems with the logic of the article.

This also points to why free software and open source aren't the same thing.

Free software makes no promise of fewer bugs or other development methodology claims. Free software argues that the inequity of software non-freedom is unethical; software non-freedom is no way to treat computer users. Software non-freedom works against users' spirit of cooperation and community. It is precisely this focus on ethics that the open source development methodology was founded to run away from.

There are still show-stopper problems with the article's ridiculous claim: by definition nobody knows all that happens in so-called "closed source" software, better known as proprietary software. Any claims as to proprietary software security are unverifiable even if one is deeply familiar with a particular popular implementation of some proprietary software. So the title ("Open-source crypto is no better than closed-source crypto") immediately makes a claim it can't defend.

The article also claims, without evidence, that "Paid security audits help but won’t find all the bugs, as they tend to be broader than they are deep." That's quite a claim made on behalf of "paid security audits" -- that they're "broader than they are deep". I'm not entirely sure what that means, but there's nothing in the article to indicate where one can find the survey data needed to back up that claim.

Ultimately the article poses no challenge to a free software, ethics-based understanding of proprietary software: no matter how difficult cryptography software is to fully understand, no matter how insecure current FLOSS cryptographic implementations are, that's no reason for equating software freedom with software non-freedom. Cryptographic software is no exception to the need for respecting computer users' software freedom. Software freedom lets us do as we wish with our copies of the published free software and thus control our computers and treat each other ethically. Software non-freedom invites dependency and apparently incentivizes malware (see https://www.gnu.org/proprietary/ for around 400 examples of proprietary malware). As that page says, the "initial injustice of proprietary software often leads to further injustices: malicious functionalities".

Why? https://www.gnu.org/proprietary/ explains:

"Power corrupts; the proprietary program's developer is tempted to design the program to mistreat its users. (Software whose functioning mistreats the user is called malware.) Of course, the developer usually does not do this out of malice, but rather to profit more at the users' expense. That does not make it any less nasty or more legitimate.

Yielding to that temptation has become ever more frequent; nowadays it is standard practice. Modern proprietary software is typically a way to be had."


> The article also claims, without evidence, that "Paid security audits help but won’t find all the bugs, as they tend to be broader than they are deep." That's quite a claim made on behalf of "paid security audits" -- that they're "broader than they are deep". I'm not entirely sure what that means, but there's nothing in the article to indicate where one can find the survey data needed to back up that claim.

The claim is the author's opinion. It’s a blog post, not a study. I believe he is correct: paid security audits are indeed broad and rarely deep.

The reason why is so obvious to anyone who’s engaged in them that what you’re asking for is like demanding a study showing that restaurants are more crowded around lunch time.

Paid security auditors must deliver vulnerability findings. The end. Deliver findings, get paid. Failing to deliver findings means not getting hired by that client again, or, in the case of bug bounties, not getting paid at all. It really cannot be overstated that if you are a professional auditor sitting at some client site you absolutely, positively need to be adding findings to the report. If you develop a pattern of not finding things, then your colleagues will not respect you or want to work with you anymore.

Therefore, it is only rational to look broadly at the target for common classes of bug. There just isn’t time to deeply understand the target, and it’s foolish to try when everyone else just looking for common mistakes gets more findings and is thereby more productive/valuable than you.


You're essentially giving us a tautology -- the claim is true because it's true -- about something non-obvious (based on what you say is an opinion) and not evidence-based. Plus you give us a tangent about what happens to those who don't do the work they were hired to do, which was never the point. You should be more careful about accepting opinion without evidence. There's nothing in "Paid security auditors must deliver vulnerability findings." that means auditors will go no further, or are somehow structurally restricted to doing no more than you believe they do. Hence the need for evidence to back up and clarify the claimed assertion.

People have been known to make serious errors in judgment before examining evidence because they felt so sure of their widely shared opinion. Glenn Greenwald pointed this out in regard to Omar Mateen's killings at the Orlando, Florida nightclub ("Pulse") in https://theintercept.com/2018/03/05/as-the-trial-of-omar-mat...

"Mateen’s alleged motive in choosing Pulse — that he wanted to target and kill LGBTs due to some toxic mix of self-hatred over his own sexual orientation and his fealty to Islam — has been treated as unquestionably true in countless media accounts, statements from public officials, and ultimately in the public mind. But ample evidence now affirmatively casts serious doubt about whether there is any truth to this widely accepted belief about Mateen’s motives in attacking Pulse. While some of this conflicting evidence has been reported in the same media outlets that originally disseminated the narrative that Mateen sought to target the LGBT community, it has been downplayed to the point where few in the public are even aware that the original theories about Mateen’s motives have been undermined."

The point being that people formed their view before the evidence was in. After the evidence was examined, people learned that their view was nowhere near as justified as they had initially believed and told others. They should learn that it's not helpful to guess about why something happened without the evidence. Although what we're talking about in this thread only theoretically involves life-and-death matters (whereas the Pulse murders objectively did), I think there's something to learn in calling for evidence before formulating one's opinion. That's something the blog post doesn't do, and we'd be wise not to join the author in that framing of the issue.


I don't have strong opinions on what motivation that murderer may have had. I just don't have any experience shooting up gay nightclubs that I could draw on to help me offer insight into that one.

I do strongly support the claim that security audits are usually broader than deep because I do that kind of work and am personally familiar with the pressures and motivations described.

In my own experience, and in the experience of all of my friends who do this work, with whom I have discussed this exact topic at great length, the majority of the time we are in fact structurally restricted from going deeper. It doesn't mean this is all the time, but it is the case for most of the billable hours doing this stuff. It pays reliably going broad and not deep.

But you're totally right. That's just, like, my opinion, man.


> The basic idea is that vulnerabilities in cryptography require a different set of skills to find than "regular" security vulnerabilities do

That may be true for traditional low-level hand-optimized implementations. But if we consider that the trend is clearly toward high-level proof-based code[1], which is way easier to audit, then open source is obviously better.

[1] "Simple High-Level Code For Cryptographic Arithmetic - With Proofs, Without Compromises" https://deepspec.org/entry/Paper/Simple+High.2DLevel+Code+Fo...


Most issues (apart from legacy asymmetric primitives) are actually either in the implementation (common in widely deployed software) or the design (even more common in widely deployed software) of higher-level protocols. Many implementation issues in crypto code are just regular code bugs. Meanwhile, design issues are not so readily apparent.

There is no linter that will tell you that using the same AES key for years and keeping a block counter in an untrusted location is a bad idea. And if you make your code obscure enough ("abstract" in all the wrong ways), then people will have a really hard time figuring stuff like this out.
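To make that concrete, here's a toy sketch of exactly that kind of flaw (my own illustration, not from the article; it assumes the Python cryptography package). Reuse a long-lived AES key and let the CTR counter repeat, and the keystream repeats with it:

  # Toy demo: long-lived AES key + counter stored where an attacker
  # (or a bug) can reset it. Every call below is "correct" in isolation.
  import os
  from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

  key = os.urandom(32)    # the "same AES key for years"
  counter = b"\x00" * 16  # counter block kept in an untrusted location...

  def encrypt(plaintext: bytes) -> bytes:
      # ...so it can end up at the same value for two different messages
      cipher = Cipher(algorithms.AES(key), modes.CTR(counter))
      return cipher.encryptor().update(plaintext)

  p1, p2 = b"attack at dawn!!", b"my secret passwd"
  c1, c2 = encrypt(p1), encrypt(p2)

  # Same key + same counter => same keystream, so XORing the two
  # ciphertexts cancels the keystream and leaks the plaintext XOR.
  assert bytes(a ^ b for a, b in zip(c1, c2)) == bytes(a ^ b for a, b in zip(p1, p2))

No linter flags this, because the flaw is in the decision about where the counter lives, not in any single line of code.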

And no one does proofs on protocol implementations in practice. Yes, there are some. Yes, there has been some progress in applying it to e.g. OpenSSL — after everyone has been using it for critical infrastructure for decades — so there's, like, literally one practical implementation that has seen a sliver of formal verification.


There's tooling like Cryptol and SPARK to make this a lot easier than it was in the past. Cryptol can generate software or hardware that implements the high-level spec of the algorithm. As far as practical use goes, it's certainly not just one implementation, even though still extremely rare. Here's some more:

Altran/Praxis Correct-by-Construction for a cert authority

http://www.sis.pitt.edu/jjoshi/Devsec/CorrectnessByConstruct...

Same company ports Skein from C to SPARK, finds trouble

http://www.skein-hash.info/sites/default/files/SPARKSkein.pd...

SPARK Ada is available under GPL and commercial licenses. The book below is quite easy to read compared to most stuff with formal methods. The companies building stuff like this usually also have tools to generate tests (property-based testing) from contracts/specs and can automatically turn anything you can't prove into runtime checks. Shocked it's not used more often in areas like smart contracts or cryptocurrency protocols.

https://www.amazon.com/Building-High-Integrity-Applications-...

Galois built Cryptol for NSA but open sourced it.

https://www.cryptol.net/

Rockwell Collins built SHADE and some other tools for high-assurance crypto. They build stuff for the defense sector mainly funded by NSA. They even have a separation-preserving, verified CPU.

http://www.csl.sri.com/users/shankar/VGC05/Hardin.pdf


There aren't that many people in the world who could build and use a Tamarin model for a crypto protocol, let alone prove it formally. Proof-driven cryptographic engineering is even more specialized than standard cryptographic engineering.


> But if we consider that the trend is clearly toward high-level proof-based code[1], which is way easier to audit, then open source is obviously better.

You disprove your own assertion. Cryptography is very specialized. All specializations in this world require specialized knowledge and niche experience, cryptography included. An assumption about the obviousness of a specialization is almost itself the definition of an error.


Let’s invert the thesis: closed source crypto is no better than open source crypto. So why on earth would you ever choose closed source components?


Security audits, good implementation, and proper research require the work of rare (as in there aren't many) and talented individuals. The funding model for open-source security is bug hunting and having the big corporations who use your open-source product contribute some of their money or their talent.

A funding model where you pay the company to hire auditors, implementors and researchers directly could be argued to be the better funding model.

I'm not saying it is better, since open-source has proven itself many times, but it has also failed miserably many times because of lack of funding or interest.

Open source, especially open source that is produced and maintained by companies with their own agendas (for instance Google) as opposed to a foundation, is prone to having that agenda color the way it works. A commercial product's agenda usually aligns better with its customers' than some random company's does. Some would say that in a security product it is vitally important that the creators' motivations are aligned with the users'.


If I can hire a code auditor to audit closed source, I get one audit. If I hire an audit and someone else hires an audit on the same open source code, I get two audits.


> If I hire an audit and someone else hires an audit on the same open source code, I get two audits.

So there's incentive to save money and wait for someone else to do audits.


That incentive only weighs so heavily against the incentive to run audited code.


I think webkik is saying, Open Source is better because if you DO spend money on security it can be spent by the community in aggregate / across successive efforts to pay experts to AUDIT the open source project.


Hypothetically speaking, because you can get a better support contract with a closed (or semi-closed) source software vendor. Alternatively because closed source software is easier to monetize and likely has better resources to direct to feature development.

But in practice almost all cryptography is deployed via open source software. Large tech companies like Google and Apple can develop reliable and safe closed source cryptography. But most companies don't because that's expensive and usually unnecessary.


>> Hypothetically speaking, because you can get a better support contract with a closed (or semi-closed) source software vendor.

I knew a guy who worked at a medium-size company (a few thousand people). They tried to drag him back into an old product he worked on years ago to help because... they still had one customer paying for support on it, but someone realized they couldn't find the source code. He couldn't help, and fortunately the customer hadn't asked for any changes to the software in several years. So no, paying for software doesn't necessarily mean it's supported in any meaningful way.


I have to be honest, I don't think that one example can tell us anything meaningful about whether or not closed source software is typically better supported.

Note that I used the word "can", not the word (which you used) "necessarily."


Agreed. My point was simply that the level of "support" you get by paying for software can actually be zero. I don't claim to know what's typical.

My estimation is that typical support consists of listening to customer complaints and suggestions and prioritizing development around that and available resources. I doubt many companies do extensive analysis or hire consultants to do security audits or much of anything that isn't directly customer driven.

The company I mentioned previously actually does extensive design, testing and regression testing on most of their products but the smaller side project just didn't get that attention. YMMV even within one company.


The parent obviously didn't mean that ALL TIMES UNDER ALL CIRCUMSTANCES DO YOU GET PERFECT SUPPORT. It's simply a data point in the overall calculus. Support vs internal maintainability vs cost vs ability to find supporting talent vs license provisions vs IP concerns vs regulatory requirements vs a million other reasons.


Because you get to pay money for closed source. And if you're paying, you can justify it for yourself that it's somehow better, because it costs money.


You probably wouldn't! Aumasson isn't arguing in favor of closed-source cryptography. He's just dispelling a particular myth about the security differences between the two types.


I don’t know that this holds for crypto, but for security systems more generally, history has shown that the ability to staff professional security & legal teams has more to do with practical security than the advantage open source provides.

So in a situation where your choice in crypto is between open source and a closed solution with funded legal/security teams, you might pick the closed solution.


That's a logically false inversion, assuming you take 'no better than' to mean '<=' (as I think makes sense in context).

OSS <= CSS does not imply CSS <= OSS...

> why on earth would you ever choose closed source components

There are often no open source alternatives to closed source things that are meaningfully equivalent. For example, the iPhone is a secure and closed source phone. There is no open source equivalent to that package. Android devices have critical closed source components with almost no exception, but they are also not equivalent to an iPhone in many meaningful ways.

I think a better take-away is the nuanced "evaluate things on a case-by-case basis, don't let a security system having closed source components needlessly bias you"


I took “no better than” to mean approximately equal. I feel that otherwise the author would have added “and possibly worse than”. I think your assumption, however, is probably more correct.


I can easily write closed source crypto that will be absolute garbage. I can also write open source crypto that will be total shit. With enough sales skills I could convince some people to use the closed source crypto; I guess with open source crypto I would be called out quicker.


A counterpoint to this is the billion dollar success of cryptocurrencies such as IOTA and Verge that are fundamentally broken from a cryptographic point of view.


But they have been called out many times


But dumb money is dumb. Masses of people are seduced by marketing and internet forums.


Real-world open crypto libraries sometimes have really atrocious code, OpenSSL being notorious.

Open source is the way to go imo, but the automatic assumption that it is good or that someone reviewed it is wrong.


> or that someone reviewed it

Sometimes I wonder if everyone assumes everyone else has reviewed it, therefore nobody has reviewed it


It is an awful lot of hard work and requires a lot of special knowledge that not many have. Plus it's a guaranteed political fight afterwards.

The average programmer doesn't know what to look for and can't do it over a weekend.


I covered this in an essay about the security of various models of source sharing. It came down to the talent and methods of developers and reviewers being what determines the trustworthiness of a specific project/product. Although my stuff is text (old school), roryokane on Lobste.rs kindly made a better-formatted, HTML version here:

https://gist.github.com/roryokane/d02addfa9329c579f15daef5b4...

EDIT: This also answers another commenter's question about when I'd trust closed-source software. Essentially, if I trust the developer and/or reviewer. Then I have to know I'm using the same thing, so they have to publish signed hashes and so on of either the binary or the source.
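For the binary case, the checking side is simple. A minimal sketch (my own illustration, with a hypothetical file name and a placeholder digest; the signature over the digest still has to be verified separately, e.g. with GnuPG):

  # Check a downloaded binary against the SHA-256 digest the developer
  # published in their signed release notes. Standard library only.
  import hashlib

  PUBLISHED_SHA256 = "0123...placeholder..."  # hypothetical published digest

  def sha256_of(path: str) -> str:
      h = hashlib.sha256()
      with open(path, "rb") as f:
          for chunk in iter(lambda: f.read(8192), b""):
              h.update(chunk)
      return h.hexdigest()

  if sha256_of("vendor-release.bin") != PUBLISHED_SHA256:
      raise SystemExit("hash mismatch: this is not the reviewed binary")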


It can be said for most software categories, but especially for crypto, trust matters a lot. Especially today, with governments pushing to force backdoors into crypto.


This is a strange way of thinking:

  For this kind of bug, OSS doesn’t necessarily win, when 82% of projects on Github have fewer
  than 45 stars (a figure I just made up, but you get the idea). 
It is true, but also irrelevant. Most OSS projects are orphaned hobby projects, and that's what he sees here. But when comparing to CSS, the relevant OSS is the small part of it that has users, builds a community, etc. For these projects, Linus' law is a serious win for OSS. This won't help with the last category of hard bugs, but it is a great booster for OSS that's missing from CSS.

He's probably dead right about the hard bugs, BTW

Additionally, OSS has another, less visible bonus: everybody tests their tooling on OSS simply because they can. If you write something like AFL, Coverity, ..., you load tons of code from GitHub and see how your tool behaves on it. Then you write a blog post with your great results. So you either have to file bugs against the OSS projects you scanned, or the community will think you're an asshole.

On the other hand, CSS has a less visible malus: The horrible quality of in-house tooling at governments, banks, etc...

I've seen a case where a government architect decided to encrypt every person's personal data with AES, using the birth date + a few extra digits as the password, and write this in a spec posted on their site. An auditor saw AES and declared the protocol secure enough for transport over the public Internet.

It took most of a year before our team got through to him that this amounts to about 20 bits of key space, i.e. brute-forceable in a few minutes on a phone with some interpreted language, live coded before his eyes in a meeting. (He then decided to pad the key with zeros until he got it to 256 bits. For some reason you're not allowed to whack government officials with their own spec, even when their IQ is below that of an average doorknob. But after long meetings we eventually got to a minimally sane encryption.) And we were the only organization that 'caused so much trouble'; the others either didn't understand the issue or simply didn't care.
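For anyone who wants to check the arithmetic, a back-of-the-envelope sketch (my own numbers, assuming "a few extra digits" means two):

  # Rough key-space estimate for "birth date + a few extra digits".
  import math

  birth_dates = round(100 * 365.25)  # ~a century of plausible birth dates
  extra_digits = 10 ** 2             # assuming "a few extra digits" = 2
  keyspace = birth_dates * extra_digits

  print(f"{keyspace:,} candidates")          # 3,652,500
  print(f"~{math.log2(keyspace):.1f} bits")  # ~21.8 bits
  # At 10,000 key-derivation attempts per second -- easy even in an
  # interpreted language on a phone -- the whole space falls in minutes:
  print(f"~{keyspace / 10_000 / 60:.1f} minutes")  # ~6.1 minutes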

I am really, really, really scared about the security of anything governmental now.


I would say he misses a big point on the hard bugs, though. Saying something is proprietary does not mean it makes enough money to hire such high-caliber professionals; on the contrary.

I have worked with a lot of extremely competent developers on open source, but my experience with proprietary software was quite the opposite: professionals are untrained (to the point where they can't even use versioning properly) and would be incapable of finding such bugs anyway.

Now, assuming that a proprietary cryptocurrency has the resources to hire a respectable audit firm... that is more far-fetched than assuming that such star developers will just work on an open source one for fun.


"Open source" doesnt mean it doesnt have bugs or backdoors. It just means that you can take the code, read it, and maybe find the bug/backdoor.


> Open-source crypto is no better than closed-source crypto

If the weakest link is the hardware (e.g. Intel ME, or Chinese Trojan chip), then this proposition doesn't even matter much.


I think the author's point is that all things held equal, OSS crypto isn't better than CSS crypto in their experience.


The advantage of using open source crypto is that when you get hacked so do a lot of other sites, so you become only one part of a big story.


Open-source crypto wins because it's never just crypto. It's a whole system with crypto as one element.

The crypto may be the only hard part, but it's still just one part. Like in a password manager: Even if the crypto is somehow known-good, if the whole thing's closed-source, I don't know if the non-crypto parts aren't exfiltrating my plaintext passwords so my accounts can be sold to botters. In an open-source project, there's a good chance something like that will be caught without some bored person deciding to run Wireshark at the same time they're running a password manager.


I don't think it's a matter of open vs closed source; it's more a matter of how many maintainers and contributors the project has.

As most bigger projects (>3 maintainers) show, the more contributors, the worse the quality of the code, and logically the higher the rate of bugs.

Closed source projects typically have the best number of maintainers: 1-2. With open source it can easily degrade into a minefield, as with openssl, apache, linux, gnome, python, perl, ruby, node, ...


The title is a bit vague and the article doesn’t really clarify. The point of the article is that open-source implementations of designs involving cryptography are no better than closed source implementations. The purported reason is that cryptography is difficult and therefore reviewers are unlikely to spot issues.

I presume that much open-source software is written by hobbyists, while a closed source software development company may hire a crypto expert for the sensitive code.


At least open-source crypto can be proven good and audited by anyone.

I'd rather use open-source crypto than play security Russian roulette with a closed-source one.


Open Source crypto is no worse than closed source crypto. This is fantastic news and a strong validation of the open source development model for security systems.

;)


Ignoring the biggest problems of proprietary software (e.g. backdoors, or the fact that such software can be deactivated at any time for no reason) while citing pseudo-problems with free software.

"You can also argue that OSS tends to be reused by other OSS, which amplifies the impact of a given bug."

This guy clearly has a horse in that race.


Well I guess the only logical solution is to roll your own, then.


If the company developing the crypto dedicates the same amount of resources, would open or closed source crypto be more secure?

For me the answer clearly is open source crypto.


Open source != good quality code


Doesn't the very fact that he was so readily able to review these blockchain projects negate his thesis?


No. He's comparing it to closed source software he has also audited.

CSS doesn't mean no one can see the code or that there aren't third-party audits.


I understand. But that is a pretty silly comparison for him to make. He's sampling two different universes. In the open source universe, everything is open to him, by definition. He has a complete sample space. In the closed source universe, he may only sample the companies who choose to contract his services. That is an extremely biased sample.

Not to mention the basic fact that he's almost certainly not controlling for other factors, like project age, usage level, etc.


> CSS doesn't mean no one can see the code or that there aren't third-party audits.

It does mean that there are no meaningful third-party audits. An audit (or benchmark, for that matter) that aims to be approved by the vendor is as useful as a rubber stamp.


This is sometimes true, but not always.

There are many auditors who will do a good and fair job.

This is especially true if the audit is not intended to be publicly shared, but rather for the internal consumption of the company in question.

I'd still say an audit where the reviewers are skilled professionals, but biased by the source of income, is better than the average open source project where few people will do such a thorough audit ever; likely none after the original author, excepting special cases like Linux.



