> Another option is to purchase a premium support contract which offers extended support (i.e. ongoing access to security fixes) for 1.1.1 beyond its public EOL date. There is no defined end date for this extended support and we intend to continue to provide it for as long as it remains commercially viable for us to do so (i.e. for the foreseeable future).
I think that's fair. If organizations running this EOL software aren't inclined to move away from it, and expect continued support... yeah, pay us. Maybe at some point they will look at the cost and decide it's worth fixing their shit.
Why do these fixes not end up becoming available to the public? I guess the organizations which pay for it aren't incentivized to redistribute it? (I assume they legally could under an OSS license.) But then it only takes one...
In short, it's because that work would not otherwise be done unless there are companies that demand it.
Who wants to work on software that has been deemed "end of life"? There will be no further traction with that software; only the people who aren't willing or able to update their legacy code will benefit from that work. This kind of work is the epitome of "pay me" work.
Edit: I have a strong assumption that these EOL patches will not be open source. I didn't mention that in my comment, and I should have, right at the very beginning.
Charging also makes most people migrate off.
It encourages good behaviour. And customers that can't migrate off (for good or bad reasons) pay. Seems like a good model to me. If they made it free, it wouldn't encourage good behaviour.
> Why do these fixes not end up becoming available to the public?
Presumably the patches are not open source, and are given to the supported users under a license that restricts their distribution. The licenses the main code is available under say nothing to stop this – even if the project were covered by some GPL variant, they would still be able to do this (assuming all contributors have signed over relicensing rights or similar).
> But then it only takes one...
That one might be taking quite a risk, though. They'd be in breach of whatever agreement the updates were handed to them under, so they might stop getting future updates; they had better not do this (well, not get caught doing it!) while they still depend on timely security updates themselves. Even if they no longer need future updates, there's the potential for a costly legal argument.
Ah, I see, OpenSSL is licensed under "an Apache-style license"[1], so they can distribute patches under non-OSS licenses if they so desire. I thought it was a GPL-style viral license for some reason.
Even if OpenSSL were licensed under the GPL, nothing prevents the organization from releasing patches or separate distributions under a restrictive commercial license. They are the copyright holder. Viral open source licenses constrain _licensees_ to release derivative works under the same terms, but they can’t somehow destroy the inherent ownership rights of the copyright holder.
I was going to add that OpenSSL does in fact have a strong CLA enforcement policy, as should anyone attempting to earn money from open source software.
Of course you’re right: if the actual copyright ownership is in dispute, then the rights associated with it are difficult to invoke.
Last I looked, BoringSSL[1] was a drop-in replacement for 1.1, as long as:
1. Your use cases intersect with Google's (they removed a bunch of stuff during the initial forking phase).
2. You can handle the following:
> Although BoringSSL is an open source project, it is not intended for general use, as OpenSSL is. We don't recommend that third parties depend upon it. Doing so is likely to be frustrating because there are no guarantees of API or ABI stability.
Initially the idea seemed to be that LibreSSL has better development practices => fewer bugs, and that this was worth the compatibility price. I think that didn't pan out very well longer term.
I think the idea of using their new libtls API alongside it was there early on, adding "easier, foolproof API => even fewer bugs" to the mix. But that didn't seem to get much uptake.
There's so much software written against OpenSSL 1.1.1 that upgrading to 3.x is a horror show of deprecations (though things seem to mostly work)... They also removed some FIPS functions, which broke some code I've tried to compile, and I haven't been able to find a solution that works properly without actually changing the source code. Does anyone know about this, and what's the appropriate thing to do when you have thousands of lines of code you have no interest in upgrading, but which no longer compile on OpenSSL 3?
I went through this last year, the last time the OpenSSL project issued a warning about the EOL, and got about halfway done before giving up and just using #pragma GCC diagnostic ignored "-Wdeprecated-declarations" when calling deprecated OpenSSL functions. The deprecated functions do still work, and with OpenSSL 3 being an LTS release until 2026, I'm hoping they'll continue to work until at least that date.
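For reference, the shape I ended up with looks roughly like the sketch below (legacy_sha256 is just an illustrative wrapper name, and the SHA256_* calls stand in for whichever 1.1-era routines your code uses that 3.0 marks deprecated):

    /* Suppress only the deprecation warnings, and only around the legacy
     * calls. Assumes GCC or Clang. The SHA256_* functions are deprecated
     * in 3.0 in favour of EVP, but still present and working. */
    #include <stddef.h>
    #include <openssl/sha.h>

    void legacy_sha256(const unsigned char *buf, size_t len,
                       unsigned char out[SHA256_DIGEST_LENGTH])
    {
    #pragma GCC diagnostic push
    #pragma GCC diagnostic ignored "-Wdeprecated-declarations"
        SHA256_CTX ctx;
        SHA256_Init(&ctx);
        SHA256_Update(&ctx, buf, len);
        SHA256_Final(out, &ctx);
    #pragma GCC diagnostic pop
    }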
Of course, this pushes the problem down the road, but here's why I think this is the best approach if you're unable to move to one of the forks.
The OpenSSL 3 API is vastly different from the 1.0 or 1.1 APIs, and it is currently poorly documented. There's a migration guide, but as lengthy as it is, I found it only minimally helpful in understanding how to transition from the 1.1 API to the 3.0 API. In short, a lot of structs and their associated functions have been deprecated in favor of using the more generic EVP_PKEY and its associated OSSL_PARAM arrays. OSSL_PARAM is a struct that contains a string key and a pointer value, and you get/set arrays of these params on keys. So instead of having C functions which clearly specify what their required parameters are, what they return, and the types of all these things, you get a few comments in the OpenSSL headers telling you which types go with which keys. The compiler won't warn you if you assign an integer value to a key that expects an octet string, for example.
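To make that concrete, here's roughly what EC key generation looks like against the 3.0 interface (a sketch only, with error handling abbreviated; generate_p256_key is my own illustrative name). Note that nothing at compile time ties the "group" key to a UTF-8 string value:

    #include <openssl/evp.h>
    #include <openssl/params.h>
    #include <openssl/core_names.h>   /* OSSL_PKEY_PARAM_GROUP_NAME = "group" */

    EVP_PKEY *generate_p256_key(void)
    {
        EVP_PKEY *pkey = NULL;
        EVP_PKEY_CTX *ctx = EVP_PKEY_CTX_new_from_name(NULL, "EC", NULL);

        /* The curve is selected via a string-keyed OSSL_PARAM rather than
         * a typed setter function. */
        OSSL_PARAM params[] = {
            OSSL_PARAM_construct_utf8_string(OSSL_PKEY_PARAM_GROUP_NAME,
                                             "P-256", 0),
            OSSL_PARAM_construct_end()
        };

        if (ctx == NULL
                || EVP_PKEY_keygen_init(ctx) <= 0
                || EVP_PKEY_CTX_set_params(ctx, params) <= 0
                || EVP_PKEY_generate(ctx, &pkey) <= 0)
            pkey = NULL;   /* failure details end up in the error queue */

        EVP_PKEY_CTX_free(ctx);
        return pkey;
    }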
The rest of the documentation isn't much better. Some man pages are great, but most still leave it up to you to piece things together.
Worse, there are quite a few use cases which are made more difficult and verbose by the new API. Browsing the GitHub issues and the mailing list, there are quite a few scenarios that the OpenSSL team has said could be made easier in the future, but which were somewhat of a blind spot during the initial design of the new API.
Perhaps because of this, few major open source projects have moved to the new API, even with the EOL deadline so close. This, in turn, makes it difficult to find real-world uses of the new API to help learn how it should be used.
However, the API is slowly getting better. In 3.1, OpenSSL added some functions to make building OSSL_PARAM arrays easier. By 2026, they may have improved the documentation, filled in the missing use cases in the API, and there may be more real-world examples to learn from. All of this would make moving to the new API much easier.
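As an aside, the OSSL_PARAM_BLD builder (present since 3.0, if I remember right) already takes some of the sting out of assembling these arrays by hand. A rough sketch, with build_ec_keygen_params being just an illustrative name:

    /* Build the same "group" = "P-256" parameter array with the
     * OSSL_PARAM_BLD helper instead of hand-writing it. The caller frees
     * the result with OSSL_PARAM_free(). */
    #include <openssl/params.h>
    #include <openssl/param_build.h>
    #include <openssl/core_names.h>

    OSSL_PARAM *build_ec_keygen_params(void)
    {
        OSSL_PARAM *params = NULL;
        OSSL_PARAM_BLD *bld = OSSL_PARAM_BLD_new();

        if (bld != NULL
                && OSSL_PARAM_BLD_push_utf8_string(bld, OSSL_PKEY_PARAM_GROUP_NAME,
                                                   "P-256", 0))
            params = OSSL_PARAM_BLD_to_param(bld);

        OSSL_PARAM_BLD_free(bld);
        return params;
    }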
I think it still looks bad in a security audit if you use OpenSSL 1.1.1 with Red Hat patches on Windows or Mac. Not sure how much platform-specific code is actually there.
Technically not EL, but there are plenty of servers running EL patches on EL-based distros such as Rocky, Alma, Amazon Linux, Oracle Linux, etc.
Some of my clients have only just begun to contemplate upgrading from CentOS 7 to Rocky 8, so I expect to keep seeing OpenSSL 1.1.1 for quite a few years.
This is a very common theme, same with Python 2. It's EOL, but in practice it's really not, because distros have an independent EOL date and cannot make breaking changes.
I am stuck with an application written in Python2, where porting it to Python3 is difficult – if it was just the Python language changes it wouldn't be a big issue, but a third party library we heavily rely on decided that Python2->3 would be a good opportunity to completely redesign their API. Getting the application to handle that third party API redesign is a big task and thus far nobody has had the bandwidth/pain-tolerance/etc to undertake it.
But recently we've been finding that other libraries we use need to be upgraded for various reasons, and the new versions we need to upgrade to are Python3-only. Stuck between a rock and a hard place.
Until I discovered https://github.com/justfoxing/jfx_bridge – a Python2-to-3 RPC bridge. So now the Python2 app spawns a Python3 subprocess to host some of its libraries which it accesses over RPC. Ugly as hell but for now the least worst option. My hope is we can gradually transition more and more of the app on to the Python3 side of the bridge, and maybe eventually the Python2 side (and the bridge) can be jettisoned.
Which also led me to discover https://github.com/justfoxing/ghidra_bridge – this Python2 app has nothing to do with Ghidra (it is CPython whereas Ghidra is Jython), but I've been mucking around with Ghidra, and being able to write Python3 to control it is a better experience than Python2. I hate the 2<->3 mental context switching I'm forced into and would like as little of it as possible.
I was hoping they would extend it for a bit longer, not because I want to be running old versions but because 3.0/3.1 have some massive performance regressions that are yet to be fixed.
Have a look at HAProxy's latest release announcement for some OpenSSL 3.x commentary.
With some luck they will get a handle on this before 1.1.1 expires.
Those stuck with systems they cannot upgrade can always pay, but most likely the best-effort patching from your Linux distribution of choice will provide the patches for you.
Better make migration plans soon, though; OpenSSL fixes may not be backported at the same time they're released for newer versions and you don't want to get caught with a vulnerable HTTPS server because Canonical or Red Hat needed a few extra days to backport the fixes.
Time to upgrade to LibreSSL. It works just fine and I haven't run into any compatibility issues lately. The libraries might even work side by side because of the ".50" version number, although I wouldn't recommend it.
If your distro is still on 1.1.1\w, now may be the right time to make the switch.
Well then you could use a sandbox (e.g. bubblewrap) to mount whatever on /etc/ssl. Or you could recompile libressl with a different --sysconfdir and LD_PRELOAD it.
Sure, I could. But it is more complex. And my (probably wrong) opinion is that at the point where you can inject environment variables, the game is pretty much over anyway (you can probably do more harm with LD_PRELOAD than with SSL_CERT_FILE). So I am not convinced of the value this limitation brings.
Who should write crypto code? People with PhDs in Cryptography and deep experience with the language they are using!
What does OpenSSL do? Not that!
If this were measured like a startup, they have been wildly successful: ignoring best practices while offering something that people want and that works just well enough.
Oh hey, another piece of EOL software I can add to our pile when explaining to management that running stuff that hasn't received security updates in months (or even years in our case) is a bad idea.
And companies making a business of upgrading components like OpenSSL are the ones which would be targeted by a planned-obsolescence crackdown.
Usually it's C++ ABI issues (those are usually a massive pain) or manic use of glibc versioning, since there are rarely API/ABI breakages in many crypto libs.
Worth a mention, if your use case might possibly attract any attention from PHBs, or other flavors of idiot: support for 1.1.1 officially ends on 9/11 (Sept. 11th, 2023). Yes, that is technically meaningless... but morons who need to look like they Understand a situation, and are Doing Something, can latch onto irrelevant crap faster than you can blink.