BearSSL: A smaller SSL/TLS library (bearssl.org)
143 points by snvzz on Oct 29, 2022 | 63 comments



No dynamic allocation whatsoever

I think this alone is an excellent reason to study the code, but only after you've already done the mental exercise of considering how you could write such an implementation without any dynamic allocation. One of the things that I've noticed in a lot of codebases is superfluous dynamic allocation, which can often be eliminated to make things simpler and more efficient, and to remove a possible error path.
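
For a concrete picture of how BearSSL pulls this off, here is a rough sketch based on its documented client API (the function name tls_client_setup and the trust-anchor parameters are just placeholders for this illustration; error handling omitted). All state lives in caller-provided storage and the engine is handed its working buffer explicitly, so there is no malloc() anywhere:

    #include <stddef.h>
    #include <bearssl.h>

    /* All TLS state lives in storage the caller controls: static here, but
     * it could just as well be on the stack or in a fixed arena. */
    static br_ssl_client_context sc;
    static br_x509_minimal_context xc;
    static unsigned char iobuf[BR_SSL_BUFSIZE_BIDI];  /* one bidirectional I/O buffer */

    static void tls_client_setup(const br_x509_trust_anchor *tas, size_t tas_num,
                                 const char *server_name)
    {
        /* Initialize the client context with the "full" profile and the
         * caller-supplied trust anchors. */
        br_ssl_client_init_full(&sc, &xc, tas, tas_num);

        /* Hand the engine its working buffer; the library never allocates. */
        br_ssl_engine_set_buffer(&sc.eng, iobuf, sizeof iobuf, 1);

        /* Start a new handshake for the given server name (used for SNI and
         * certificate name checking). */
        br_ssl_client_reset(&sc, server_name, 0);
    }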


I think MbedTLS requires a memory allocator; if that's true, BearSSL seems quite attractive, at least for simple scenarios. The advantage of MbedTLS is that it's starting to gain more support for hardware accelerators.


Excellent point!


I've been using MbedTLS which is a similar project and loving it. But the last time I looked at the BearSSL code, it was very clean and beautiful in a way that MbedTLS isn't. Does anyone here have experience with both libraries who could illuminate the tradeoffs? For example, does BearSSL have TLS 1.3? I'd be willing to give up all previous versions of TLS just to have 1.3. I want it so badly I'm about to implement it into MbedTLS myself.


Two strong points for BearSSL:

- Its sans-I/O design. Too many networking-related libraries out there mix the protocol state machine logic and the I/O. In doing so, they impose a certain style of I/O and make it impossible to integrate with an existing event loop, unless you’re willing to sacrifice scalability by spawning a thread+semaphore for each connection. I’d love it if I could use libssh2 or libmysqlclient’s state machine with io_uring or DPDK! Other TLS libraries do have stuff like memory BIOs, but they’re managed by the library and force an extra memory copy to get your data to its final destination. (A rough sketch of this style follows at the end of this comment.)

- No mutable global state. The fact that all functions are fully reentrant means that it’s easily usable in an interruptible context, common in freestanding environments or runtimes with fibers/coroutines.

The tradeoff is that it lacks TLS 1.3 and DTLS support. The latter in particular is something I look forward to so that a QUIC state machine in C is possible.
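
To make the sans-I/O point concrete, here is a rough sketch based on BearSSL's documented engine API (pump_records is a made-up helper name; error handling and the application-data directions are omitted). The engine only exposes buffers and state flags; your code, whatever event loop it uses, moves the bytes:

    #include <unistd.h>
    #include <bearssl.h>

    /* One iteration of pumping TLS records between the engine and a socket.
     * The same pattern works with io_uring, DPDK, etc.: the engine never
     * performs I/O itself, it just tells you what it wants to send/receive. */
    static void pump_records(br_ssl_engine_context *eng, int fd)
    {
        unsigned st = br_ssl_engine_current_state(eng);

        if (st & BR_SSL_SENDREC) {
            size_t len;
            unsigned char *buf = br_ssl_engine_sendrec_buf(eng, &len);
            ssize_t wlen = write(fd, buf, len);        /* ciphertext out */
            if (wlen > 0) {
                br_ssl_engine_sendrec_ack(eng, (size_t)wlen);
            }
        }
        if (st & BR_SSL_RECVREC) {
            size_t len;
            unsigned char *buf = br_ssl_engine_recvrec_buf(eng, &len);
            ssize_t rlen = read(fd, buf, len);         /* ciphertext in */
            if (rlen > 0) {
                br_ssl_engine_recvrec_ack(eng, (size_t)rlen);
            }
        }
        /* BR_SSL_SENDAPP / BR_SSL_RECVAPP are handled the same way for
         * plaintext application data. */
    }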


> TLS 1.0, TLS 1.1 and TLS 1.2 are supported. By design, SSL 2.0 and SSL 3.0 are not supported.

TLS 1.3 isn't mentioned.


Not yet. https://bearssl.org/tls13.html

Also, Bear is unlikely to do e.g. 0-RTT, which might be a TLS 1.3 feature you really wanted, unless you just "want it so badly" because it's a bigger number.


Those aren't very compelling reasons for me to not be allowed to serve cat photos with better latency. Looks like BearSSL doesn't share my values.


Interesting reasons.

TLS has always been vulnerable to a few classes of attacks (historically both MITM and replay attacks), but these developers are taking that weakness very seriously. Also, most HTTP requests are idempotent, which covers the use case of 99% of people.

The other two reasons they give are about abstractions that they defined for themselves. The idea of buffering 0-RTT requests is also kind of silly, and completely untenable if you want to avoid memory waste.


By now, you saying you're about to implement something yourself is probably an indicator for everyone else that they don't have to do it.

"We're good, everyone. Justine is on it."


Heh. In the Cosmopolitan documentary there's a scene where all these other animals just stand around waiting for the honey badger to find food before diving in. https://youtu.be/chtdRCrZKuw?t=247


MbedTLS just recently got TLS 1.3.


Interesting to see BearSSL mentioned today because I have just started to replace OpenSSL with BearSSL in my webserver. The reason is that the server and the computer I develop on run different versions of Debian that have different versions of OpenSSL, so the server suddenly stopped working because the program linked dynamically against a newer version of OpenSSL that the server didn't have. I tried to link OpenSSL statically, but at that point I thought it was better to move to BearSSL.


How does that help? Unless BearSSL has a perfectly stable API, aren't you just trading one library for another?


If I'm going to link statically, I prefer a smaller library, both for size and security reasons. I was also running into some issues when linking OpenSSL statically, and I think (or hope) that will be easier with BearSSL.


In this case, the size might not give better security. OpenSSL is one of the most tested products on the planet.


However, it is also complex and has been worked on by many people, which increases the chance of bugs due to misunderstandings between contributors.

In contrast, BearSSL is kept simple and is primarily the work of a single author, who knows a lot about cryptography.


There are always pros and cons. But it is not good either if there is only one primary developer. There is a high risk that a single author becomes blind to his own bad coding practices, if there are any, and that they will be repeated. It is also difficult to understand every possible use case alone.


Eyes on the code help find bugs.

Smaller code helps prevent bugs, as there's simply less code where bugs could be.

Thus it is not as clear as you picture it.

What is clear, however, is that it should be easier to make BearSSL's code bug-free, by virtue of having less code.


This is true. But BearSSL does not have as many eyes looking at it, nor has it been around for as long as OpenSSL has.

Which makes it complicated.


I don't know anything, but chuckled because of today's news: https://news.ycombinator.com/item?id=33380500


The impact of bugs in OpenSSL is so significant that they always end up in the news. BearSSL is still quite a small project by comparison, and because of that no CVEs get filed when the author finds a bug in his own code himself.

On the other hand, every bug in OpenSSL gets a CVE and ends up in the news. That gives a distorted view when comparing software quality between projects.


Which of course doesn't affect libressl.

I still can't comprehend how the industry didn't simply move to libressl early on.


Being "one the most tested products on the planet" (citation needed, btw) did not prevent Heartbleed.


No software can be guaranteed to be bug-free.

And the Heartbleed bug was in an extension, not in the core software.

> (citation needed, btw)

Not really. But if you insist: OpenSSL is the de facto crypto library [1], at least on the server side. Browsers and OpenSSL carry most of the burden of securing everyday internet use worldwide. There are billions of testers every day.

[1] https://crocs.fi.muni.cz/public/papers/acsac2017


> No software can be guaranteed to be bug-free.

Sure it can; this is done all the time via verification and other formal methods. It's not common in the industry, but not entirely uncommon either.


Nope, not even formal proofs can save you, see WPA2/KRACK.

Using proofs just shifts the bugs into the assumptions/axioms (i.e. you think your proof is proving X but it's actually proving Y).


Got any examples of this being used in popular FOSS software?


Depends on what you mean by "popular". seL4 [1] is formally verified, open source, and is used as baseband firmware in millions of phones, among other uses. Google just recently started an embedded OS based on it as well.

CompCert [2] is a formally verified optimizing C compiler.

Most formal verification happens in compilers or low-level, mission-critical systems due to the cost.

If you want to write some formally verified C, you can check out Frama-C [3].

[1] https://sel4.systems/

[2] https://github.com/AbsInt/CompCert

[3] https://frama-c.com/
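
For a taste of what Frama-C-style verification looks like, here is a tiny, hypothetical example with ACSL annotations (the contract and loop invariants in the special comments are what the WP plugin tries to prove; max_array is just an illustrative name):

    #include <stddef.h>

    /*@ requires n > 0;
        requires \valid_read(a + (0 .. n-1));
        assigns \nothing;
        ensures \forall integer i; 0 <= i < n ==> \result >= a[i];
        ensures \exists integer i; 0 <= i < n && \result == a[i];
    */
    int max_array(const int *a, size_t n)
    {
        int best = a[0];
        /*@ loop invariant 1 <= i <= n;
            loop invariant \forall integer j; 0 <= j < i ==> best >= a[j];
            loop invariant \exists integer j; 0 <= j < i && best == a[j];
            loop assigns i, best;
            loop variant n - i;
        */
        for (size_t i = 1; i < n; i++) {
            if (a[i] > best) {
                best = a[i];  /* the invariants above track that 'best' stays a maximum of a[0..i-1] */
            }
        }
        return best;
    }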


> all the time

> not common

> the cost

That's what I thought... and seL4 was the only one I could think of as well. I always see arguments around auditing too, but realistically, unless a project has enormous resources capable of funding these things itself, you can't expect it to happen. So many people poo-poo projects simply because they're "not audited", but to me that's silly because 1. most projects can't afford that, 2. who gets to say what has been audited "well enough", and 3. audit results are only valid for a specific snapshot of a repo and are completely invalidated as soon as any new changes are made.


It does happen all the time in commercial contexts in certain industries. Look up SPARK Ada for lots of commercial applications.


Given the lack of mention in the paper, I suspect they’re unable to distinguish OpenSSL from its forks, notably BoringSSL. OpenSSL might be used in an offline mode but I don’t think anyone really trusts it for online these days.


BoringSSL is more for client side tbf. It was built for Chromium.

It is not recommended for general use; even Google does not recommend it.


It was not built for Chromium, AFAIK.

To quote: https://boringssl.googlesource.com/boringssl/

BoringSSL arose because Google used OpenSSL for many years in various ways and, over time, built up a large number of patches that were maintained while tracking upstream OpenSSL. As Google's product portfolio became more complex, more copies of OpenSSL sprung up and the effort involved in maintaining all these patches in multiple places was growing steadily.

Currently BoringSSL is the SSL library in Chrome/Chromium, Android (but it's not part of the NDK) and a number of other apps/programs.


Can you clarify how you came to this conclusion? No idea about self-hosted stuff of course, but I'm pretty sure BoringSSL is one of the gold standards for the big cloud providers.

Counterpoints:

> Encryption Implemented in the Google Front End for Google Cloud Services and Implemented in the BoringSSL Cryptographic Library

https://cloud.google.com/docs/security/encryption-in-transit

> We use a common cryptographic library, Tink, which includes our FIPS 140-2 validated module (named BoringCrypto) to implement encryption consistently across Google Cloud

https://cloud.google.com/docs/security/encryption/default-en...

> It may (or may not!) come as surprise, but a few months ago we migrated Cloudflare’s edge SSL connection termination stack to use BoringSSL: Google's crypto and SSL implementation that started as a fork of OpenSSL.

https://blog.cloudflare.com/make-ssl-boring-again/

> We ported our SMTP server to use BoringSSL, Cloudflare’s SSL/TLS implementation of choice

https://blog.cloudflare.com/email-routing-leaves-beta/

> We are pleased to announce the availability of s2n-quic, an open-source Rust implementation of the QUIC protocol added to our set of AWS encryption open-source libraries. <snip> AWS-LC is a general-purpose cryptographic library maintained by AWS which originated from the Google project BoringSSL.

https://aws.amazon.com/blogs/security/introducing-s2n-quic-o...

This last one is not 100% clear but I would imagine AWS dogfoods their own encryption libraries to build their internal cloud stack.


Interesting, thanks for pointing that out.


For example, BearSSL uses constant-time algorithms; OpenSSL doesn't.

BearSSL's GHASH is still fast while remaining constant-time; OpenSSL's AES is not.


AES is used for encryption while GHASH is a hash (checksum) algorithm. Quite different. Also, the algorithm's running time is a function of the input size. Not sure how you can encrypt 1 byte and 2 TB in constant time. You seem like you need to read more on this stuff.


Maybe you need to read more on this stuff: https://www.bearssl.org/constanttime.html
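
To spell out the terminology: "constant-time" doesn't mean the running time is independent of the input size; it means the timing doesn't depend on secret values (key bits, where a comparison failed, which table index was hit). A generic illustration of the idea, not BearSSL's actual code (ct_equal is a made-up name):

    #include <stddef.h>
    #include <stdint.h>

    /* Compare two MAC values without an early exit: the running time depends
     * only on 'len', never on where (or whether) the buffers differ. A naive
     * memcmp() returns as soon as it finds a mismatch, which leaks
     * information about secret data through timing. */
    static int ct_equal(const uint8_t *a, const uint8_t *b, size_t len)
    {
        unsigned diff = 0;
        for (size_t i = 0; i < len; i++) {
            diff |= (unsigned)(a[i] ^ b[i]);   /* accumulate all differences */
        }
        /* Map diff == 0 to 1 and diff != 0 to 0 without a data-dependent branch. */
        return (int)((diff - 1U) >> (sizeof(unsigned) * 8 - 1));
    }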


RSA has been deprecated for a while. It still has nothing to do with the fact that you compared a hashing algorithm with a symmetric encryption algorithm, a literal apples-to-oranges comparison.


First, I'm not the parent, I didn't compare a thing. Second, that page has information on a lot of crypto algos and not only on RSA so I don't know what you are talking about.


Is this still under active development?

I looked at this two years ago, and it didn't have TLS 1.3 support, so I went with OpenSSL and then replaced that with s2n (which uses bits of OpenSSL under the hood) last year for some things.

The features page looks like TLS 1.3 still isn't supported, and the git changelog doesn't show much happening...


I expect that the focus on size constraints means this attracts a lot fewer "Hey, here's a cute thing somebody might want" changes. It also means the skill set needed to contribute at all is probably more demanding: if you're used to just calling malloc() whenever you need somewhere to keep a structure, this project doesn't want your code.


I think TLS 1.3 has some "features" that essentially take a giant dump on trying to make them work in a resource-constrained, no-dynamic-allocation environment. That might have blunted the enthusiasm to work more on it.


I owe probably 50% or more of what I've learned about crypto and TLS to Thomas Pornin. He's extremely bright, used to be (and may still be) super active on SE, and is one of the few people who both understands it well enough to speak in specifics and does so freely. I'd absolutely trust anything he writes.


Saved my ass on an embedded project years ago, when I couldn’t spare a few KB for the TLS buffers. Great library, well designed.


Dumb question: why would someone use an alternative library for something as important as encryption?


In addition to the bug issue, OpenSSL has been around a long time and has accumulated a great deal of extra bits and bobs due to it being the Swiss-Army Knife of encryption on a lot of systems. If you’re writing a webserver, you’re unlikely to need support for, say, S/MIME for email encryption, but it’s in there. Smaller libraries like Bear or Boring that were designed specifically to do TLS and little else don’t have the extra pieces, which reduces attack surface, simplifies the code, and makes it easier to remove old encryption ciphers and add new ones.


Assuming it has good functionality: it will have different bugs from the "other package". So if someone finds a vulnerability in the other package, your system won't be vulnerable.

And/or you want specific functionality, like the lack of dynamic allocation.

Monoculture is a dangerous trap.



There isn't an automatic, global, eternal best choice just because a topic is important.


Been using BearSSL in side projects on and off, and it's an excellent little library.

That said, I use MbedTLS for production work on embedded hardware because it just integrates better with the other bits and pieces of code, and often comes as part of the platform SDK itself or has been tested with the platform. This plays a big part in getting people to use it.


This has worked well enough for an Arduino IoT project. Small footprint and acceptable performance. I think it comes recommended by the chip manufacturer or vendor.


We’re trying this in production right now for our tunneling feature. So far looks very promising!


There is a port to the esp8266 which is used as the TLS library for the esp8266 Arduino core.


I just tried it. It was very nice. It compiled with a simple `make` and there were no errors or warnings (on Linux). There are a few test programs that seem to work.


Is BearSSL still maintained? It looks like it hasn't been updated in almost 4 years.



Does it support RFC 7250 (Raw Public Keys)? I can’t easily tell.


Great project, but it has been inactive for a few years.


The last commit was in June.


Well, the last release was 4 years ago, and it's in beta. There have been 30 commits since then, and most of them are bug fixes.

I really hope this one gets more attention though.


Is there a tutorial on how to run BearSSL with lighttpd on devuan.org and achieve good marks on ssllabs.com/ssltest?



