I think this alone is an excellent reason to study the code, but only after you've already had the mental exercise of considering how you could write such an implementation without any dynamic allocation. One of the things that I've noticed in a lot of codebases is superfluous dynamic allocation, which can often be eliminated to make things simpler and more efficient, and to remove a possible error path.
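To make that concrete, here's a minimal, purely illustrative C sketch of the pattern: the caller owns the workspace, so the library has no allocation-failure path of its own. The scratch_t type and scratch_append function are invented for this example and are not BearSSL APIs.

    /* Purely illustrative (not BearSSL API): a caller-owned workspace
       instead of an internal malloc(), so the only failure mode is
       "buffer too small" rather than a hidden out-of-memory path. */
    #include <stddef.h>
    #include <string.h>

    typedef struct {
        unsigned char *buf;   /* caller-owned storage */
        size_t len;           /* total capacity       */
        size_t used;          /* bytes in use         */
    } scratch_t;

    static int scratch_append(scratch_t *s, const void *data, size_t n)
    {
        if (n > s->len - s->used) {
            return -1;        /* caller's buffer too small: explicit, testable */
        }
        memcpy(s->buf + s->used, data, n);
        s->used += n;
        return 0;
    }

    int main(void)
    {
        static unsigned char workspace[4096];   /* static or stack, never heap */
        scratch_t s = { workspace, sizeof workspace, 0 };
        return scratch_append(&s, "hello", 5) == 0 ? 0 : 1;
    }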
I think MbedTLS requires a memory allocator; if that's true, then BearSSL seems quite attractive, at least for simple scenarios. The advantage of MbedTLS is that it's starting to have more support for hardware accelerators.
I've been using MbedTLS, which is a similar project, and loving it. But the last time I looked at the BearSSL code, it was very clean and beautiful in a way that MbedTLS isn't. Does anyone here have experience with both libraries who could illuminate the tradeoffs? For example, does BearSSL have TLS 1.3? I'd be willing to give up all previous versions of TLS just to have 1.3. I want it so badly I'm about to implement it in MbedTLS myself.
- Its sans-I/O design. Too many networking-related libraries out there mix the protocol state machine logic and the I/O. In doing so, they impose a certain style of I/O and make it impossible to integrate with an existing event loop, unless you’re willing to sacrifice scalability by spawning a thread+semaphore for each connection. I’d love it if I could use libssh2 or libmysqlclient’s state machine with io_uring or DPDK! Other TLS libraries do have stuff like memory BIOs, but they’re managed by the library and force an extra memory copy to get your data to its final destination. (There's a rough sketch of what this buffer-driven style looks like below.)
- No mutable global state. The fact that all functions are fully reentrant means that it’s easily usable in an interruptible context, common in freestanding environments or runtimes with fibers/coroutines.
The tradeoff is that it lacks TLS 1.3 and DTLS support. The latter in particular is something I look forward to so that a QUIC state machine in C is possible.
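For anyone curious what that sans-I/O style looks like in practice, here's a rough sketch of driving BearSSL's engine from your own event loop. The br_ssl_engine_* function names and state flags are from BearSSL's documented engine API, but transport_read/transport_write are placeholder non-blocking I/O hooks I made up; treat this as a shape, not drop-in code.

    /* Rough sketch: pump BearSSL's sans-I/O engine from an external loop.
       br_ssl_engine_* names come from BearSSL's engine API; transport_read
       and transport_write are placeholder hooks the application would
       provide (returning bytes transferred, or <= 0 on would-block/error). */
    #include <stddef.h>
    #include <bearssl.h>

    extern int transport_read(unsigned char *dst, size_t len);
    extern int transport_write(const unsigned char *src, size_t len);

    void pump(br_ssl_engine_context *eng)
    {
        for (;;) {
            unsigned st = br_ssl_engine_current_state(eng);
            if (st & BR_SSL_CLOSED) {
                break;                      /* connection ended or failed */
            }
            if (st & BR_SSL_SENDREC) {
                /* The engine has TLS records ready for the wire. */
                size_t len;
                unsigned char *buf = br_ssl_engine_sendrec_buf(eng, &len);
                int wlen = transport_write(buf, len);
                if (wlen <= 0) break;
                br_ssl_engine_sendrec_ack(eng, (size_t)wlen);
            } else if (st & BR_SSL_RECVREC) {
                /* The engine wants more ciphertext from the peer. */
                size_t len;
                unsigned char *buf = br_ssl_engine_recvrec_buf(eng, &len);
                int rlen = transport_read(buf, len);
                if (rlen <= 0) break;
                br_ssl_engine_recvrec_ack(eng, (size_t)rlen);
            } else {
                break;   /* nothing to do until application data is sent/read */
            }
        }
    }

Application plaintext moves the same way, through br_ssl_engine_sendapp_buf/ack and br_ssl_engine_recvapp_buf/ack, which is what lets you drive the whole connection from whatever event loop you already have.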
Also, BearSSL is unlikely to do e.g. 0-RTT, which might be the TLS 1.3 feature you really wanted, unless you just "want it so badly" because it's a bigger number.
TLS has always been weak to a few classes of attacks (historically both MITM and replay attacks), but these developers are taking that weakness very seriously. Also, most HTTP requests are idempotent, which is going to be the use case of 99% of people.
The other two reasons they give are about abstractions that they defined for themselves. The idea of buffering 0-RTT requests is also kind of silly, and completely untenable if you want to avoid memory waste.
Heh. In the Cosmopolitan documentary there's a scene where all these other animals just stand around waiting for the honey badger to find food before diving in. https://youtu.be/chtdRCrZKuw?t=247
Interesting to see BearSSL mentioned today because I have just started to replace OpenSSL with BearSSL in my webserver.
The reason for that is that the server and the computer I develop on run different versions of Debian that have different versions of OpenSSL,
so the server suddenly stopped working because the program linked dynamically to a newer version of OpenSSL that the server didn't have.
I tried to link OpenSSL statically, but at that point I thought it was better to move to BearSSL.
If I'm going to link statically, I prefer a smaller library, both for size and security reasons.
I was also getting some issues when linking OpenSSL statically and I think (or hope) that will be easier to do with BearSSL.
There are always pros and cons.
But it is not good either if there is only one primary developer.
There is a high risk that a single author becomes blind to his own bad coding practices, if there are any, and that they get repeated. It is also difficult to understand every possible use case alone.
The impact of bugs in OpenSSL is so significant that they always end up in the news.
BearSSL is still quite a small project compared to it, and because of that, no CVEs get filed when the author himself finds a bug in his own code.
On the other hand, every bug in OpenSSL gets a CVE and ends up in the news.
That gives a distorted view when comparing the software quality of different projects.
And the Heartbleed bug was in an extension, not in the core software.
> (citation needed, btw)
Not really. But if you insist: OpenSSL is the de facto crypto library [1], at least on the server side.
Browsers and OpenSSL take most of the burden of securing everyday use of the internet worldwide. There are billions of testers every day.
Depends what you mean by "popular". seL4 [1] is formally verified, open source and is used as baseband firmware in millions of phones, among other uses. Google just recently started an embedded OS based on it as well.
CompCert [2] is a formally verified optimizing C compiler.
Most formal verification happens in compilers or low-level mission critical systems due to the cost.
If you want to write some formally verified C, you can check out Frama-C [3].
>all the time
>not common
>the cost
That's what I thought... and seL4 was the only one I could think of as well. I always see arguments around auditing too, but realistically, unless a project has enormous resources capable of funding these things itself, you can't expect it to happen. Yet so many people pooh-pooh projects simply because they're "not audited", which to me is silly because 1. most projects can't afford it, 2. who gets to say what has been audited "well enough", and 3. audit results are only valid for a specific snapshot of a repo and are completely invalidated as soon as any new changes are made.
Given the lack of mention in the paper, I suspect they’re unable to distinguish OpenSSL from its forks, notably BoringSSL. OpenSSL might be used in an offline mode but I don’t think anyone really trusts it for online these days.
BoringSSL arose because Google used OpenSSL for many years in various ways and, over time, built up a large number of patches that were maintained while tracking upstream OpenSSL. As Google's product portfolio became more complex, more copies of OpenSSL sprung up and the effort involved in maintaining all these patches in multiple places was growing steadily.
Currently BoringSSL is the SSL library in Chrome/Chromium, Android (but it's not part of the NDK) and a number of other apps/programs.
Can you clarify how you came to this conclusion? No idea about self-hosted stuff of course, but I'm pretty sure BoringSSL is one of the gold standards for the big cloud providers.
Counterpoints:
> Encryption Implemented in the Google Front End for Google Cloud Services and Implemented in the BoringSSL Cryptographic Library
> We use a common cryptographic library, Tink, which includes our FIPS 140-2 validated module (named BoringCrypto) to implement encryption consistently across Google Cloud
> It may (or may not!) come as surprise, but a few months ago we migrated Cloudflare’s edge SSL connection termination stack to use BoringSSL: Google's crypto and SSL implementation that started as a fork of OpenSSL.
> We are pleased to announce the availability of s2n-quic, an open-source Rust implementation of the QUIC protocol added to our set of AWS encryption open-source libraries. <snip> AWS-LC is a general-purpose cryptographic library maintained by AWS which originated from the Google project BoringSSL.
AES is used for encryption, while GHASH is a hash (checksum) algorithm. Quite different. Also, the algorithm's running time is a function of the input size; I'm not sure how you could encrypt 1 byte and 2 TB in constant time. It sounds like you need to read more on this stuff.
RSA has been deprecated for a while. That still has nothing to do with the fact that you compared a hashing algorithm with a symmetric encryption algorithm, a literal apples-to-oranges comparison.
First, I'm not the parent; I didn't compare anything. Second, that page has information on a lot of crypto algorithms, not only RSA, so I don't know what you are talking about.
I looked at this two years ago, and it didn't have TLS 1.3 support, so I went with openssl and have replaced that with s2n (which uses bits of openssl under-the-hood) last year for some things.
The features page looks like TLS 1.3 still isn't supported, and the git changelog doesn't show much happening...
I expect that the focus on size constraints means this attracts a lot fewer "Hey, here's a cute thing somebody might want" changes. It also means the skill set needed to contribute at all is probably more demanding: if you're used to just calling malloc() whenever you need somewhere to keep a structure, this project doesn't want your code.
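As a concrete illustration of that constraint, here's roughly what client setup looks like when everything lives in caller-provided storage. The function names are BearSSL's public API; TAs/TAs_NUM stand in for a trust-anchor list that BearSSL's sample code normally generates into a header, so treat this as an approximate sketch rather than a drop-in snippet.

    /* Approximate sketch of zero-heap TLS client setup with BearSSL.
       Function names are from the public API; TAs/TAs_NUM are assumed to
       come from a generated trust-anchor header, as in BearSSL's samples. */
    #include <bearssl.h>

    static br_ssl_client_context sc;                 /* client engine state   */
    static br_x509_minimal_context xc;               /* X.509 validation      */
    static unsigned char iobuf[BR_SSL_BUFSIZE_BIDI]; /* the single I/O buffer */

    extern const br_x509_trust_anchor TAs[];         /* assumed trust store   */
    extern const size_t TAs_NUM;

    void tls_client_setup(const char *host)
    {
        /* "Full" profile: all supported algorithms, still no heap allocation. */
        br_ssl_client_init_full(&sc, &xc, TAs, TAs_NUM);
        br_ssl_engine_set_buffer(&sc.eng, iobuf, sizeof iobuf, 1 /* bidirectional */);
        br_ssl_client_reset(&sc, host, 0);
    }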
I think TLS 1.3 has some "features" that essentially take a giant dump on trying to make them work in a resource constrained, no dynamic allocation environment. That might have blunted the enthusiasm to work more on it.
I owe probably 50% or more of what I've learned about crypto and TLS to Thomas Pornin. He's extremely bright, used to be (and may still be) super active on SE, and is one of the few people who both understands it well enough to speak in specifics and does so freely. I'd absolutely trust anything he writes.
In addition to the bug issue, OpenSSL has been around a long time and has accumulated a great deal of extra bits and bobs due to it being the Swiss-Army Knife of encryption on a lot of systems. If you’re writing a webserver, you’re unlikely to need support for, say, S/MIME for email encryption, but it’s in there. Smaller libraries like Bear or Boring that were designed specifically to do TLS and little else don’t have the extra pieces, which reduces attack surface, simplifies the code, and makes it easier to remove old encryption ciphers and add new ones.
Assuming it has good functionality: it will have different bugs from the "other package". So if someone finds a vulnerability in the other package, your system won't be vulnerable.
And/or you want specific functionality, like the lack of dynamic allocation.
Been using BearSSL in side projects on and off, and it's an excellent little library.
That said, I use MbedTLS for production work on embedded hardware because it just integrates better with the other bits and pieces of code, and often comes as part of the platform SDK itself or has been tested with the platform. This plays a big part in getting people to use it.
This has worked well enough for an Arduino IoT project. Small footprint and acceptable performance. I think it comes recommended by the chip manufacturer or vendor.
I just tried it. It was very nice. It compiled with a simple `make` and there were no errors or warnings (on Linux). There are a few test programs that seem to work.