The latest OpenSSL vulns were added fairly recently (twitter.com/hanno)
179 points by pentestercrab on Nov 2, 2022 | 73 comments



A lot of Linux people have the impression that LibreSSL is largely incompatible with OpenSSL (not true), that the ABI breaks every six months (not true), or that it requires heavy patching of downstream software to maintain (not true anymore).

Here's a recent presentation about LibreSSL and some of those points: https://www.youtube.com/watch?v=bF1d_aCSzS0

Years ago there was also a big article from Alpine, one of the distros that tried to switch to it and had to switch back. That now-outdated article seems to be the main citation for those opposed to even giving LibreSSL a chance. In fact, Alpine is reconsidering a switch back to LibreSSL after the OpenSSL 3.x branch was shown to be such a disaster.

One of the LibreSSL developers summarized this recent OpenSSL issue in a commit message worth reading: https://marc.info/?l=openbsd-ports-cvs&m=166731803502387&w=2


The CRITICAL vulnerability here is the development process. A lot of projects get by due to a really frickin' good project lead, really good contributors, and really good collaboration. Clearly that's not the case with OpenSSL.

I've suggested in the past that the OS should handle transport encryption. People moan "oh no then we can't fix anything! oh no we can't innovate!". But adding encryption routines to the OS does not remove the ability to use OpenSSL. People will continue to invent their own userland tcp/ip stacks regardless. But having the OS team handle the encryption at least gives one large well-funded organization the burden of doing it right or being really embarrassed. The upshot is every application can simply pick up encryption functionality without depending on a 3rd party library. A new flag to existing syscalls could wrap opening any socket with some standard encryption method, so the application would never need to bother with "doing encryption", unless it wanted to. And the system keychain can manage credentials. The whole point of the OS is to make life easier for apps and users; why not let it do more of the heavy lifting?
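
Interestingly, Linux already has a partial step in this direction: kernel TLS (kTLS, since 4.13) lets an application do the handshake in userspace and then push record encryption into the kernel, after which plain write()/read() on the socket is encrypted transparently. A minimal sketch of the transmit side (error handling mostly omitted; SOL_TLS isn't always exposed by userspace headers, hence the fallback define):

    /* Sketch: enabling Linux kernel TLS (kTLS) for the transmit path on an
       established TCP socket. The TLS handshake still happens in userspace
       (e.g. via OpenSSL); only record encryption moves into the kernel. */
    #include <linux/tls.h>
    #include <netinet/tcp.h>
    #include <string.h>
    #include <sys/socket.h>

    #ifndef SOL_TLS
    #define SOL_TLS 282   /* from the kernel's socket.h; not always exported */
    #endif

    int enable_ktls_tx(int sock,
                       const unsigned char *key, const unsigned char *iv,
                       const unsigned char *salt, const unsigned char *rec_seq)
    {
        /* Attach the TLS upper-layer protocol to the socket. */
        if (setsockopt(sock, SOL_TCP, TCP_ULP, "tls", sizeof("tls")) < 0)
            return -1;

        /* Hand over the session keys negotiated by the userspace handshake. */
        struct tls12_crypto_info_aes_gcm_128 ci;
        memset(&ci, 0, sizeof(ci));
        ci.info.version = TLS_1_2_VERSION;
        ci.info.cipher_type = TLS_CIPHER_AES_GCM_128;
        memcpy(ci.key, key, TLS_CIPHER_AES_GCM_128_KEY_SIZE);
        memcpy(ci.iv, iv, TLS_CIPHER_AES_GCM_128_IV_SIZE);
        memcpy(ci.salt, salt, TLS_CIPHER_AES_GCM_128_SALT_SIZE);
        memcpy(ci.rec_seq, rec_seq, TLS_CIPHER_AES_GCM_128_REC_SEQ_SIZE);

        /* From here on, plain write() on the socket emits TLS records. */
        return setsockopt(sock, SOL_TLS, TLS_TX, &ci, sizeof(ci));
    }

The handshake and certificate validation (where this week's bugs actually live) stay in userspace, though, so kTLS alone wouldn't have prevented this CVE.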


And then every time there's a vulnerability in the kernel implementation, we need a kernel patch (something that often takes longer to test and release than an update to a userland library) and a reboot. Updating OpenSSL just requires restarting daemons that use it.

At any rate, I don't see why you think that tying TLS to the kernel is required to improve the security posture. OpenSSL can remain a library, and the same companies that fund the Linux kernel can fund OpenSSL. Hell, the people who maintain the existing crypto functionality in the kernel can work on OpenSSL if they so choose.

Then there's the matter of adoption: the main reason that LibreSSL has failed on Linux is because developers don't seem to want to move to it in their applications, despite it having very few differences from OpenSSL. What makes you think developers will adopt a completely different Linux-specific API that doesn't work on any other OS?


OpenSSL is part of the OS as far as users are concerned. It's right there after installation, in /usr/bin/openssl and in /lib64/libcrypto.so.3. Applications normally just pick it up. And indeed it does not prevent applications from shipping it separately.

But, depending on the OS you're using, the organization behind it is not necessarily well-funded. And even the ones that are well-funded and ship OpenSSL (Red Hat) didn't catch this one.

I assume that you're not mixing up "OS" and "kernel", because adding more attack surface to a monolithic kernel is never a good idea.


I mean the kernel. But there's a lot of different ways to implement this functionality. The kernel already has a variety of crypto baked in, it's just a matter of deciding on the implementation, which doesn't have to be 100% in-kernel. The point is to offload the maintenance onto a team that is better staffed and has better development practices, and possibly also add additional functionality to existing syscalls.


I think those are two separate problems. When you want to benefit from a process (a better-staffed project), you don't need to impose technical conditions (making something part of the syscall interface). While the Linux kernel project doesn't publish userspace OS components (I imagine there's overlap with projects like glibc), I believe the Windows "kernel" is composed of many pieces, including some running in userspace, and that would be the organizational entry point for your idea.


The Linux kernel project publishes util-linux, iproute2 and perf at the very least. I don't see why they couldn't publish a .so.


Basically you've described how TLS works for most native Windows (Schannel) and macOS/iOS apps. The OS provides a library that apps can depend on being present and maintained. The same library is used by userspace and kernel alike.


We'd need some developer in Debian and/or Fedora to step up and propose a switch to LibreSSL.

The rest of the ecosystem would simply follow: maintaining an alternative is extra work, and, since OpenSSL is simply worse, pointless.


If I recall, back when HeartBleed hit, the OpenSSL Project only had 1 FTE worth of paid developers & managers working on their code.

Wikipedia claims that (as of 2019) they have 2 FTEs' worth, plus a dozen or so volunteers...who overlap heavily with their management committee. And their total budget is < $1M/year.

Not to suggest that volunteer coders are automatically lesser coders...but for widely-used, uber-critical, uber-complex code, that sounds pretty profoundly under-resourced.

Edit: Adding the full quote from Wikipedia: "As of May 2019,[7] the OpenSSL management committee consisted of 7 people[8] and there are 17 developers[9] with commit access (many of whom are also part of the OpenSSL management committee). There are only two full-time employees (fellows) and the remainder are volunteers."


The exact financial situation of OpenSSL has always been unclear to me; they don't seem to publish financial reports, and get income from various sources (donations, consulting, sponsored work). The references on that Wikipedia page don't contain the claims in the article, and last year they hired a dev and manager[1], and this year a "Business Operations Administrator"[2], which seems to suggest they have more financial resources than what's suggested on the Wikipedia page.

I've always been somewhat skeptical that funding (or rather, the lack thereof) is the main reason for OpenSSL's problems. The whole funding thing is mainly a question of fairness, rather than security or quality.

Certainly Heartbleed was, IMHO, not really caused by a lack of funding. It was an experimental extension that no one really used and no one really needed, yet it was enabled by default. That was just a bad call, which happens – live and learn – but no amount of money can protect you from mistakes like that. The entire heartbeat code ended up being removed in 2019 as no one used it.

[1]: https://www.openssl.org/blog/blog/2021/11/24/hiring-manager-...

[2]: https://www.openssl.org/blog/blog/2022/05/18/hiring-business...


> Not to suggest that volunteer coders are automatically lesser coders

They might be every bit as competent, but it's unreasonable to expect volunteers to put in as much time as someone who does it as a job.


Also processes. If I'm a company, I might be liable. There might also be higher QA standards, as there are people whose very job it is to verify things. I'd say it's easier to enforce standards.


And if the brick-wall learning curve, for being able to work on the most-complex 0.5% of the code, is X,000 hours tall...


LibreSSL has no such issues.

We should not reward bad practices with funding.


It looks like there are 27 contributors with over 50 commits since Heartbleed was revealed:

https://github.com/openssl/openssl/graphs/contributors?from=...


All of this could explain why feature or bugfix velocity at OpenSSL is slow; none of it excuses bad code getting in. Slow feature additions down to whatever rate is required to keep security bugs as close to zero as possible. OpenSSL is not a place to cut corners. This is not a programming failure, it's a management failure.


I haven't really stayed up to date, but I recall the primary issue with openssl at the time of heartbleed was that it was basically a poorly manned project with little funding, yet billions of people rely on it daily. Has that situation changed at all since? It's ironic the OP laments them "not learning lessons" since heartbleed, but if there was any lesson to learn it is that if everyone is going to rely on a piece of software it should get some love from the broader community. It's good he found it but his harsh scolding tone is unfair given the situation...unless openssl has multiple SV salaried employees working full-time on it by now.


Well, he shows the bug can be found after a second of fuzzing!

Some years ago, I reproduced Hanno Böck's fuzzing to find Heartbleed "again". It wasn't that hard to do, and I was completely new to the whole thing. Everybody has had time to get up to speed with this, as I did, and implement it in their workflow.

The manpower problem becomes worse over time when you do poor-quality work, because things are never really done and you cannot fully concentrate on new work while there is so much maintenance and rework. Stellar work doesn't have to cost much. Good, reliable software can be built extremely cheaply.

Of course, OpenSSL and many other projects face typical problems. Protocols are underspecified, sometimes extremely complicated, or described in extremely unreadable ways, and for all these reasons the specifications are not crystal clear. Then you have practical implementations that can vary a lot: if the standard is poorly written or poorly thought out, or if one implementing party has a monopoly and can do whatever they want, implementations tend to diverge. There are also protocols which were originally simple but got extended over time with things that are used everywhere yet are not really standard. And you can have the wrong tools, or use your tools badly. C and many other languages require a lot more discipline than some viable alternatives that would in many cases scream at you or handle the situation for you. C is like a table saw without a safety guard: yes, it gets the job done, but you might lose a finger or a hand in the process, and even the best do at some point. Parsing anything in C seems to me to be a clear danger zone, where you triple-check everything.
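
To make that concrete, here's a hypothetical sketch (not OpenSSL's actual code) of what that triple-checking looks like when copying a single length-prefixed field out of an untrusted buffer; each comparison guards a different way the copy could go out of bounds:

    /* Hypothetical C parsing sketch: copy one field (1-byte length prefix
       plus payload) from an untrusted buffer into out.
       Returns bytes consumed, or 0 on malformed input. */
    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    size_t read_field(const uint8_t *buf, size_t buf_len,
                      uint8_t *out, size_t out_len)
    {
        if (buf_len < 1)              /* is the length byte itself in bounds? */
            return 0;
        size_t field_len = buf[0];
        if (field_len > buf_len - 1)  /* does the payload fit in the input?
                                         (phrased to avoid integer overflow) */
            return 0;
        if (field_len > out_len)      /* does it fit in the destination? */
            return 0;
        memcpy(out, buf + 1, field_len);
        return 1 + field_len;
    }

Forget any one of those checks and you have a CVE; a bounds-checked language turns the same mistake into a crash or an exception instead of silent memory corruption.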


> Well, he shows the bug can be found after a second of fuzzing!

Unless I missed something, he claims that; he doesn't show it.


He doesn't make any time or effort claims, but 10,000 iterations of libFuzzer generally doesn't take that long.

At the rate he is showing (38/second), it'd take just under five minutes (10,000 / 38 ≈ 263 seconds).



It’s trivial enough to try that I think any challenge of his word should be conducted by actually trying it first.


I've contributed to OpenSSL in the past, but not regularly.

Heartbleed happened partially because they hadn't fully adopted techniques like fuzzing in regular use, so when researchers started fuzzing everything, out popped Heartbleed. Now OpenSSL runs fuzzing on every PR (IIRC). The author is a bit unfair in calling the project out as if they don't do it.

There still aren't a lot of developers on it relative to the complexity of the project though. Frankly there are large parts of the codebase that are pretty intimidating to touch, like the X.509 stuff implicated here.


> There still aren't a lot of developers on it relative to the complexity of the project though. Frankly there are large parts of the codebase that are pretty intimidating to touch, like the X.509 stuff implicated here.

Sounds like the old problem of "Well, the hospital might have enough surgeons overall...but this case is gonna need a real good pediatric brain surgeon or two, and that's a different story..."


Google approached this issue by simply creating and maintaining their own fork of OpenSSL, called BoringSSL. Has it been affected by this most recent issue?



Has BoringSSL been widely adopted? The OpenBSD people forked OpenSSL into LibreSSL, which looked very promising (coming from people whose main obsession is security) but seemed to quickly burn out, at least on Linux hosts.


>Has BoringSSL been widely adopted?

It was never meant to be; it provides only a subset of the features OpenSSL has.

>The OpenBSD people forked OpenSSL into LibreSSL, which looked very promising (coming from people whose main obsession is security) but seemed to quickly burn out, at least on Linux hosts

In the beginning, yes: they moved fast on cleaning up the OpenSSL codebase. The problems began because maintaining OpenSSL compatibility is not an explicit goal, so distros had a huge maintenance burden patching things to work with it. Eventually the distros that were maintaining it (Void and Gentoo?) got tired of it and decided OpenSSL had gotten through the worst of its problems.

Then OpenSSL 3 dropped, once again making it annoying to patch for and introducing issues like this CVE. Alpine was discussing looking at LibreSSL again, but I think that ship has sailed.


I'm pretty sure even OpenBSD packages OpenSSL in ports for third party software, since so many pieces of software basically require it and do not work with LibreSSL anymore.

I suppose a Linux distro could hypothetically take the same approach and use a mixture of both, but realistically most people don't want to bother with that.

Personally I'd love a Linux distro with a BSD-style base system and extra packages kept separately, but the closest to that would be ... Slackware with pkgsrc or something I guess.


> I'm pretty sure even OpenBSD packages OpenSSL in ports for third party software, since so many pieces of software basically require it and do not work with LibreSSL anymore.

Theo Buehler talked about the current status of LibreSSL compatibility at EuroBSDcon last week. To take some quotes:

“> 2000 OpenBSD ports link against libcrypto or libssl” [i.e., LibreSSL]

“< 100 of these need patches (< 5%)”

“6 ports link against OpenSSL”

https://www.youtube.com/watch?v=bF1d_aCSzS0


I find it odd that Google's oss-fuzz didn't find this a long time ago.

https://github.com/google/oss-fuzz/blob/master/projects/open...


ASan really is a blessing; any modern C codebase should at least give it a test run.
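
For anyone who hasn't tried it: it's just a compiler flag, not a separate tool. A toy example of the kind of bug it catches instantly (clang shown; gcc supports the same flag):

    /* overflow.c - build and run with:
         clang -g -fsanitize=address overflow.c && ./a.out
       ASan aborts with a heap-buffer-overflow report pointing at the
       exact bad write, the allocation site, and full stack traces. */
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        char *buf = malloc(8);
        strcpy(buf, "12345678");   /* 9 bytes (incl. NUL) into 8 */
        free(buf);
        return 0;
    }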


A bit funny: a software library focused on cryptography, where security is an afterthought rather than a proactive effort.

I would consider the alternatives before going to OpenSSL.


LibreSSL is a fork by the OpenBSD crew that happened after Heartbleed: https://www.libressl.org/

Considering OpenBSD's reputation for proactive security, I'd say LibreSSL might be the best alternative out there.


Why hasn't LibreSSL taken off? I thought for sure it would after Heartbleed. I assume it's mostly network effects/laziness, despite it being fairly compatible (at least when it originally forked) and everyone already using OpenSSH from OpenBSD as well.


It has! It's on every iDevice out there.

Apple uses LibreSSL, not OpenSSL.

    @Ytterbium ~ % uname -a
    Darwin Ytterbium.local 22.1.0 Darwin Kernel Version 22.1.0: Sun Oct  9 20:15:52 PDT 2022; root:xnu-8792.41.9~2/RELEASE_ARM64_T8112 arm64
    @Ytterbium ~ % openssl version
    LibreSSL 3.3.6


Isn't the built-in openssl lib a basic shim for LibreSSL, mostly only there for backwards compatibility and with limited functionality? IIRC Apple wants you to use their Network framework: https://developer.apple.com/documentation/network


https://en.wikipedia.org/wiki/LibreSSL

It's adopted by default in a few BSDs, in OpenSSH on Windows, and in macOS.

From a usage standpoint, you're probably correct (I honestly don't know) -- I only use it to generate web server certificates.


I'm guessing you're on macOS 13? The machine I'm on now is still on 12.6 (`Darwin Kernel Version 21.6.0`), and `openssl version` reports `OpenSSL 3.0.6`. Glad to see it if they made the switch in the new release, though.


Could it be that you have a different openssl in your PATH shadowing the system one? Because I could have sworn macOS 12 also used LibreSSL.


Derp, you're right - it was finding an OpenSSL binary that MacPorts installed. Explicitly doing `/usr/bin/openssl version` shows `LibreSSL 2.8.3`.


Compatibility seems to be a difficulty: https://voidlinux.org/news/2021/02/OpenSSL.html


Large web companies like Google implement their own encryption stack anyway.

On the BSDs I've used, LibreSSL is a standard configuration option. I'll note that on FreeBSD, LibreSSL lacks the in-kernel fast path (kTLS), last I checked.


> Large web companies like Google implement their own encryption stack anyway.

Google uses BoringSSL[1], which is another OpenSSL fork. I believe AWS uses a mix of OpenSSL and BoringSSL (someone can correct me!).

So it is "their own encryption stack," but that stack was at least originally built from OpenSSL's code. They've probably done an admirable job of refactoring it, but API and ABI constraints still apply (it's very hard to change the massive body of existing code that assumes OpenSSL's APIs).

[1]: https://boringssl.googlesource.com/boringssl/


AWS maintains their own TLS stack: https://github.com/aws/s2n-tls


Is this an argument for GPL?

Seems like the big players came, saw, borrowed, and then did their own thing without contributing back.

If this were my project, I would be inclined to archive it and do a GPL fork.


None of what happened with OpenSSL or its forks is incompatible with the GPL.


Forgive my ignorance, but all of these forks are also still open source? My impression was that patches and improvements were made in closed source, private repositories to the benefit of the companies without paying anything back.

Otherwise, couldn't some openssl contributors just crib fixes from the forks?


As far as I know, all of the major ones are. I don't believe anybody has attempted to make a closed fork of OpenSSL, at least not one that has gained any real traction.

> Otherwise, couldn't some openssl contributors just crib fixes from the forks?

They do! But I assume it gets balanced with their own feature development time, and it becomes harder as the codebases drift. OpenSSL probably hasn't done itself many favors with the recent (3.x) "providers" refactor.


BearSSL is also worth a look:

https://bearssl.org/


It’s not actively developed and it doesn’t support TLSv1.3 though.


But it is high quality, small and uses few resources, thus worth a mention.



Isn't OpenSSL included in the oss-fuzz project? If Hanno caught it this quickly with his fuzzer, it would be surprising if they didn't as well.


It is; it builds a few fuzzers hitting different areas[0]. The important function in many of those `.c` files is `FuzzerTestOneInput`, which is effectively the entrypoint for a single fuzz test.

Take a look at x509.c[1], which I believe is the most likely to reach the punycode parser (I am not at all familiar with the codebase). You can see that the OpenSSL fuzzer is basically doing a holistic fuzz (I assume the i2d* and d2i* functions exercise the parser); that is, it's just invoking key entrypoints that in theory can exercise all the rest of the functionality with the correct inputs.

Hanno's fuzzer, on the other hand, is explicitly testing only the `ossl_punycode_decode` function[2].

Given the breadth of the OpenSSL fuzzer, I think it's very possible OSS-Fuzz just didn't hit it.

[0] https://github.com/openssl/openssl/blob/master/fuzz/

[1] https://github.com/openssl/openssl/blob/master/fuzz/x509.c

[2] https://twitter.com/hanno/status/1587775675397726209/photo/2
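
For contrast, a narrowly-scoped target like Hanno's is only a few lines. Here's a minimal sketch of one; the `ossl_punycode_decode` prototype is approximated from OpenSSL's crypto/punycode.c and may not match a given checkout exactly, and standalone libFuzzer targets use the entrypoint name `LLVMFuzzerTestOneInput` (which OpenSSL's own driver wraps as `FuzzerTestOneInput`):

    /* punycode_fuzz.c - minimal sketch of a narrowly-scoped libFuzzer
       target. Build against a checkout of OpenSSL, roughly:
         clang -g -fsanitize=address,fuzzer punycode_fuzz.c libcrypto.a
       The prototype below is approximated from crypto/punycode.c. */
    #include <stddef.h>
    #include <stdint.h>

    int ossl_punycode_decode(const char *pEncoded, size_t enc_len,
                             unsigned int *pDecoded,
                             unsigned int *pout_length);

    int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
    {
        unsigned int decoded[64];
        unsigned int out_len = sizeof(decoded) / sizeof(decoded[0]);

        /* Feed the raw fuzz input straight to the decoder; ASan flags
           any out-of-bounds write into `decoded` immediately. */
        ossl_punycode_decode((const char *)data, size, decoded, &out_len);
        return 0;
    }

Because every execution goes straight into the one function under test, a shallow overflow like this surfaces after thousands of iterations instead of billions.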


Given how much horsepower and experience they have, this is very disappointing.


"They" who? Even since Heartbleed, the OpenSSL project is still woefully underfunded given its importance to... well, everything on the internet.


I meant OSS-fuzz, i.e. Google & co


Just because a project uses oss-fuzz, you can't assume it has good fuzz coverage. In this case, they probably should have written a specialized fuzz target for the Punycode parser. Parsers like this are easy to fuzz and such bugs are typically caught very quickly, often in mere seconds. With a more general fuzz target, it can take much longer to come up with input that triggers the bug.


Developers at FAANG are making half a million a year, yet they can't invest in the most critical library they use...


Who uses it? Apple and Microsoft have their own crypto libs, Google forked and created their own… not sure about the others.


Literally everyone running a Linux server uses OpenSSL under the hood as soon as HTTPS is in play. Apache, nginx, HAProxy, lighttpd, Node.js, .NET - they all use OpenSSL. IIRC the only stacks that provide server-side TLS without relying on OpenSSL (or resorting to a frontend load balancer/proxy) are Go and Java.

That is why OpenSSL is so extremely important, and I seriously wonder why the industry bigwigs haven't stepped up and created a foundation/trust flush with cash to make sure that continuous development of these libraries, regular audits, certifications and testing is paid for.

Out of the big names in tech, I think the only one not depending on Linux at all is Netflix; they're famous for running a massive FreeBSD shop (though that doesn't mean they're not using OpenSSL in their application infrastructure; I've never read anything about that side of their business). Not sure about Google; they do a lot of yak shaving and reinventing wheels. MS and Amazon run Linux as part of their clouds at the very least.


I'm reminded of ESR's quip, "given enough eyeballs, all bugs are shallow." And that's often true for projects that have obvious functionality and for which you're not worried about cross-cutting concerns like security or safety. I just remember a decade of working with federal contractors trying to disabuse them of the idea that they could just grab some random code off the internet and assume it was coded well enough to avoid simple, impactful vulnerabilities.


> I just remember a decade of working with federal contractors trying to disabuse them of the idea...

I'm curious: in what role was this? In the short exposure I had to federal contracting, I saw few efforts along these lines. It would have been a good idea!


I wonder if using WUFFS for certificate parsing is something that'd help keep these vulnerabilities at bay?


An example of proven correct networking code:

https://www.microsoft.com/en-us/research/blog/project-everes...

This is the future.


I once added a "very simple" string manipulation utility function that was "obviously correct" and "didn't need any tests", then pushed directly to master. Suffice it to say, I don't do that any more.
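
For flavor, a hypothetical reconstruction of the genre (not my actual function), with the classic off-by-one that eyeballing tends to miss:

    /* Hypothetical "obviously correct" helper: copy src into dst,
       truncating to fit. The bug: the loop reserves no room for the
       terminating NUL, so any src of cap or more characters writes
       dst[cap], one byte past the end of the buffer. */
    #include <stddef.h>

    void copy_trunc(char *dst, size_t cap, const char *src)
    {
        size_t i = 0;
        while (src[i] != '\0' && i < cap)   /* should be i < cap - 1 */
        {
            dst[i] = src[i];
            i++;
        }
        dst[i] = '\0';
    }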


What test would you have written that would have found the issue?


What if it was meant to?


[flagged]


Maybe they have other things to do?


Seems like everyone who relies on OpenSSL as a critical part of their infrastructure has "other things to do", and then is indignant when new vulnerabilities come along.


The tragedy of the commons, yes. The solution isn't to ask a random possibly unaffected individual why they didn't fix it.




