A while back I wrote a "Cryptographic Right Answers" document that recommended people use BoringSSL if they could get away with it, otherwise stick to OpenSSL. Today, I'd write "just use OpenSSL". It's a different, better project than it was just a few years ago, and it gets much more attention than alternate libraries.
If you're already using Rust, this looks neat. I'm not sure OpenSSL API compatibility is that much of a win? But if you're porting C code to Rust, sure.
If you're already using Rust, you should use Rustls directly and jump on the opportunity to avoid having the absolutely fucking insane OpenSSL API in your stack at all.
Ideally such a misdesigned API would not exist at all. Library interfaces should be engineered to prevent mistakes. Here are a handful of the problems I've run into recently when dealing with legacy OpenSSL code (legacy being the reason it uses OpenSSL, not the reason it was bad):
Some error codes are `int`s, others are `long`s (see the sketch after this list). Different error codes need to be passed to different stringification functions, and these have different allocation and string-loading semantics. Why on earth do I have to manually load error strings in the first place? I don't pass a string-table handle into the stringification, so I don't even get context isolation (multiple instances of OpenSSL in one thread have conflicts)!
The entire BIO framework is insanely overcomplicated and could be stripped down to a minimal buffer-based encryption API (cf. the BSD libtls API).
Then you have APIs like `SSL_CTX_set_verify`, which simply ignore irrelevant flags rather than returning an error about them. This is terrible.
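To make the error-code mismatch concrete, here's a minimal C sketch against the classic (pre-1.1.0) OpenSSL API; the helper function name is mine, not OpenSSL's:

```c
#include <openssl/ssl.h>
#include <openssl/err.h>
#include <stdio.h>

/* Illustrative only: what correct error reporting has to juggle. */
static void report_ssl_failure(SSL *ssl, int ret)
{
    /* SSL_get_error() takes the int returned by the failing call
       and hands back an int reason code... */
    int reason = SSL_get_error(ssl, ret);

    /* ...but the error queue speaks unsigned long and needs a
       different stringifier. */
    unsigned long err = ERR_get_error();
    char buf[256];
    ERR_error_string_n(err, buf, sizeof(buf));

    /* And unless SSL_load_error_strings() was called earlier
       (process-wide, not per context), buf holds a numeric code
       rather than a readable message. */
    fprintf(stderr, "TLS failure: reason=%d queue=%lu (%s)\n",
            reason, err, buf);
}
```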
ABI compatibility, or even just source compatibility, with the OpenSSL C API (as bad as that API is) is a huge win, since there is an enormous base of OpenSSL-using applications. Being able to remove the vast majority of OpenSSL's bugs from those apps is a big win. Of course, the quality of MesaLink is important, but there are many bugs it may be immune to by dint of being coded in Rust.
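For a sense of how small that compatibility surface is for a lot of applications, here's a rough C sketch (the wrapper name is mine; error handling and hostname checks are omitted) of the call sequence a drop-in replacement mostly has to honor:

```c
#include <openssl/ssl.h>

/* Sketch of the typical OpenSSL client call sequence an application
   makes; error handling and hostname checks omitted for brevity. */
int tls_client_handshake(int sockfd)
{
    SSL_CTX *ctx = SSL_CTX_new(TLS_client_method());
    SSL_CTX_set_default_verify_paths(ctx);           /* trust store   */
    SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, NULL);  /* verify server */

    SSL *ssl = SSL_new(ctx);
    SSL_set_fd(ssl, sockfd);
    int ok = (SSL_connect(ssl) == 1);
    /* ... SSL_write() / SSL_read() on success ... */

    SSL_shutdown(ssl);
    SSL_free(ssl);
    SSL_CTX_free(ctx);
    return ok;
}
```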
I agree that OpenSSL is likely the most robust library out there, and if you're talking about provisioning your web server / doing some https from a high-level language then it's definitely the sane choice. (fwiw, I had some work looking at a BouncyCastle TLS server and...eek)
The one counter-example is the embedded space, particularly where code addition/modification for hardware integration is required. Here OpenSSL's footprint and APIs are a tad unwieldy compared to some alternatives.
Basically I just wanted to give mbedTLS a shout-out as, IMO, the clear winner in that particular space.
If you're already using Rust, you should probably not use this but rustls directly. As I understand it, this is a wrapper intended to provide the same API as OpenSSL, for easy integration with existing C projects.
I would use OpenSSL before I used LibreSSL at this point, but that's more a statement about how much more serious I perceive OpenSSL to have gotten as a project.
Check if the language/runtime/ecosystem you picked brings a default crypto lib and use that.
Go's C FFI support does exist and there are OpenSSL bindings. But CGo, which would be used in this case, kinda sucks, and you'll have a lot of pain getting the two sides, OpenSSL and the Go HTTP lib, to talk properly to each other.
I think my comment wasn't clear. I am saying, given that I can choose any lang/platform for my new project, and crypto lib quality is the most important, which lang/platform + crypto lib should I choose? Ignoring other factors, what is the best implemented crypto lib out there right now (specifically for TLS or in general)?
For me at least, I chose Go on a recent network project specifically because of the blessed, quality crypto impl for things I needed.
C. It has the widest range of support for crypto libraries, basically any reference implementation of modern crypto algorithms happens in C. Almost all existing libraries are either written in C or support C FFI.
So the most widely supported platform for crypto libs would be C.
But C is probably not a good choice due to other factors outside crypto lib support.
I think the best implemented crypto lib out there atm is NaCl (djb). It exposes very few functions, they take obvious parameters, and it's hard to blow your leg off with them; see the sketch below. (The only danger is repeating a nonce, and generating a random or sequential nonce is within the realm of "I expect most people to be able to read /dev/random".)
Second would be any crypto library that is implemented similarly: a few can't-blow-your-leg-off functions that do all the hard parts for you.
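As an illustration of that style, here's a C sketch using libsodium's NaCl-derived secretbox API (the wrapper function is hypothetical; the ciphertext buffer must have room for crypto_secretbox_MACBYTES of overhead):

```c
#include <sodium.h>

/* Hypothetical wrapper: one call encrypts and authenticates. The only
   sharp edge is the nonce, which must never be reused with this key. */
int seal_message(const unsigned char key[crypto_secretbox_KEYBYTES],
                 const unsigned char *msg, unsigned long long msg_len,
                 unsigned char nonce_out[crypto_secretbox_NONCEBYTES],
                 unsigned char *cipher_out /* msg_len + MACBYTES */)
{
    if (sodium_init() < 0)
        return -1;

    /* A random nonce is fine here as long as it is never repeated
       under the same key. */
    randombytes_buf(nonce_out, crypto_secretbox_NONCEBYTES);

    return crypto_secretbox_easy(cipher_out, msg, msg_len, nonce_out, key);
}
```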
I haven't really followed those developments too closely. Do folks think that OpenSSL gets enough attention these days to overcome the structural deficiencies relative to NSS?
I laud your efforts.
Whenever someone announces a project like this one, one of the top comments is invariably one that promotes Openssl over other TLS implementations.
I find this "leave it to the experts" attitude unjustifiable, since it more or less translates to "trust the experts". Experts? Take a look at the source. Look up the scrolls of bug reports. Experts indeed.
This tends to hurt efforts to create Openssl alternatives, and makes crypto/security protocols seem like the black art of a chosen few.
Security is hard - all the more reason to encourage more people to be involved. And in any case, experts have to start somewhere.
I love seeing projects like this and I hope that a sane alternative to OpenSSL emerges.
A big dream of mine is that we'll eventually have a comprehensive and hyper-detailed set of automated tests that we can use to validate cryptographic libraries [0]. Someday I hope that every new CVE for a cryptographic library will also come with a corresponding automated test to check for that vulnerability.
There is an ongoing project [0] implementing a formally verified and performant TLS 1.3 library. They already have verified and optimized assembly for AES and SHA256. I think it will be a big game changer if it succeeds.
Some of the vulnerabilities are harder to test as they are side channels specific to the hardware the code is running on.
Sure you can check that basic timing side channels don’t exist, but the more esoteric ones would require a bunch of hardware to make sure they aren’t leaking data via power use and other channels.
Can you go into more detail on situations that would be hard to test? This has all only been a thought experiment for me so far, and in my thought experiment I handwave these sorts of issues away by thinking "that's what emulation/simulation is for" – but I hadn't considered leaking data via power use, so clearly I need to learn more there.
Many people have far less to fear from, e.g., the Chinese government than from, e.g., the Five Eyes governments.
Frankly though this is a pretty shallow criticism. It's open source. Identify all those backdoors you just know are hiding in plain sight. Then we can either PR or fork it.
While I agree that Chinese sponsorship is not a good reason to avoid a library, every part of the reasoning in this comment is bad.
For most people, the threat model that focuses on FVEY includes China (China is a key part of FVEY's remit). If China has backdoored something, the FVEY threat model assumes FVEY has it too.
In particular: while FVEY governments theoretically have local legal limitations to their collection capabilities, they have no formal limitations to foreign collections --- foreign collections are the entire point of SIGINT agencies. Moving your data overseas to hide it from the NSA is thus, ceteris paribus, an extremely dumb idea.
Further, it is extremely difficult to extract cryptographic weaknesses from software, and relative to the scale of the whole problem, source code availability is a marginal factor. If you're going to say "identify all those backdoors", you might just as well say "YOLO".
We have different understandings of threats from "FVEY". I don't assume that e.g. FBI uses China's putative backdoors when trolling for subjects to oppress. In comparison, China is using them for precisely that purpose; this is the hypothesis from which this discussion started. There simply isn't much intersection between the two sets of oppressed subjects.
FBI might know of some such backdoors, if they exist, but there is no structural reason to expect that they do. FVEY is not a monolith. If there were a "national security" reason to be interested in some target, that might inspire enough cooperation to generate a Baidu vuln. Most subjects, however, are simply targets of opportunity for law enforcement, with no real connection to national security.
Sure, YOLO, but TFA is a repo. That is a pure thing, the worth of which is totally unrelated to all this other stuff.
If reproducible builds are not used for these projects, there is no easy way to verify that the binaries distributed to end users match the available source code. Never mind that in many cases actually replacing the SSL component may not be possible.
Sure, but that amounts to a criticism of particular devices or clients that can't be verified. If it was meant as a criticism of this repo, the answer is still "PR and fork".
That doesn't matter for the purposes of bootstrapping; it's broken the chain. 1.19 -> 1.25 is much, much, much easier than the thousand-odd builds you'd need to bootstrap from the OCaml implementation.
This is why I mentioned "elbow grease." It's not trivial, but if it's something you're concerned about, it is possible.
sorry for being ranty on HN, I'm abusing my anonymity to vent.
Please support a bootstrap method. I can't use any project in a programming language that requires downloading a specific binary to use in certain setups.
It's not hyperbole or Rust-bashing: we're crazy people who make sure our projects can be bootstrapped like that. I was very interested in mrustc, but it's going to die very quickly when every version bump means an extra Rust to build (it took me 9 hours to build Rust at one point).
Having to give up projects because they threaten to use Rust is painful. It's a good language, and someone went out of their way to fix the bootstrap problem. Please give him a hand.
Every project has to pick what problems matter, and can be addressed with the current resources at hand. People, time, servers, whatever.
At this time, there's just not enough people that care at this degree of paranoia; all of the Linux distros are fine with this, even Tor is fine with this. I can appreciate that something like this would be more convenient, but that's a teeny, tiny group of people this would serve, and it would be so, so, so much effort to put in. It just doesn't make sense. (Given that you're anonymous, I don't know who exactly you're asking for here.)
> but it's going to die very quickly when every version bump means an extra Rust to build
It doesn't need to live; by existing it's already broken the bootstrap chain.
I understand some people's motive to fork OpenSSL but don't hear much about the success of these forks in the wild. Does anyone have any information on how BoringSSL or LibreSSL is used? It'd be great to hear from maintainers or anyone who has a large deployment with them.
One upside I can see to this implementation is that it ONLY supports TLS 1.2 and 1.3, so there's significantly less danger of downgrade attacks. I believe you can build OpenSSL without TLS 1.1 and below, but the code is still there, so there's some non-zero risk.
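For comparison, the closest stock OpenSSL (1.1.0+) gets is a runtime floor; the older protocol code is still compiled in unless the library was built with configure flags such as no-ssl3/no-tls1/no-tls1_1. A hedged sketch (the helper name is mine):

```c
#include <openssl/ssl.h>

/* Create a context that refuses anything below TLS 1.2 at handshake
   time. The pre-1.2 code paths still exist in the library itself. */
SSL_CTX *make_tls12_plus_ctx(void)
{
    SSL_CTX *ctx = SSL_CTX_new(TLS_method());
    if (ctx == NULL)
        return NULL;

    if (SSL_CTX_set_min_proto_version(ctx, TLS1_2_VERSION) != 1) {
        SSL_CTX_free(ctx);
        return NULL;
    }
    return ctx;
}
```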
It was kind of in the parent post. I don't get his disdain for some implementations, especially when a monoculture proved so troublesome. He also missed NSS, which is quite an important one.
My understanding is that MesaLink uses the low-level crypto from BoringSSL (transitively via ring) and the high-level protocol code from rustls, which is written in Rust.
Most memory safety problems in TLS implementations come from the higher-level protocol code, not the implementations of the raw crypto primitives. In fact, the latter benefit from being written in as low-level of a style as possible to mitigate timing attacks.
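A small example of what that low-level style buys you: a constant-time tag comparison (a generic sketch, not lifted from any particular library) that, unlike memcmp(), never returns early and so doesn't leak the position of the first mismatch through timing:

```c
#include <stddef.h>

/* Compare two byte strings in time that depends only on len, never on
   where (or whether) they differ. */
int ct_equal(const unsigned char *a, const unsigned char *b, size_t len)
{
    unsigned char diff = 0;
    for (size_t i = 0; i < len; i++)
        diff |= a[i] ^ b[i];   /* accumulate differences, no branching */
    return diff == 0;
}
```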
MesaLink relies on ring for its crypto operations. Ring is written in a mix of Rust, C and assembly, with most of the C and assembly code coming from (Boring|Open)SSL.
I'm glad this exists, and I'm sure it's very useful for organisations large enough to put engineering effort into adopting it, but it seems to me that the real win from Rust is going to look more like its use in Firefox: incremental replacement of code in an already-widely-used project, rather than an attempt at a full reimplementation.
Of course, nothing's stopping MesaLink becoming that replacement in a larger user of OpenSSL. But I doubt I personally will benefit from it any time soon.
Anybody know the ratio of OpenSSL bugs related to memory management compared to "normal" bugs, TLS state machine weirdness, compiler optimizations, etc.?