OpenSSL Security Advisory (openssl.org)
283 points by runesoerensen on July 9, 2015 | 136 comments



> OpenSSL will attempt to find an alternative certificate chain if the first attempt to build such a chain fails

I think the latest big thing I've learned in my career is that trying to silently fix broken input data is always bad. Fixing stuff silently isn't helpful for the callers, it's very difficult to do, and it produces additional code that doesn't run in the normal case, so it's much more likely to be broken.

Additionally, your callers will start to depend on your behaviour and suddenly you have what amounts to two separate implementations in your code.

I learned that while blowing up (though don't call exit if you're a library. Please.) is initially annoying for callers, in the end it will be better for you and your callers, because the code will be testable, correct and more secure (because there's less of it).
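
To make the "report, don't repair" style concrete, here's a minimal C sketch of what that looks like for a library; the record format, names and magic value below are made up purely for illustration:

    #include <stddef.h>

    /* Hypothetical wire record: [magic][version][len_hi][len_lo] */
    typedef enum { PARSE_OK, PARSE_ERR_TOO_SHORT, PARSE_ERR_BAD_MAGIC } parse_result;

    typedef struct { unsigned version; unsigned length; } record;

    parse_result parse_record(const unsigned char *buf, size_t len, record *out)
    {
        if (len < 4)
            return PARSE_ERR_TOO_SHORT;   /* report it; don't pad or truncate */
        if (buf[0] != 0x16)
            return PARSE_ERR_BAD_MAGIC;   /* report it; don't "repair" and don't exit() */
        out->version = buf[1];
        out->length  = ((unsigned)buf[2] << 8) | buf[3];
        return PARSE_OK;
    }

The caller decides what to do with the error; the library never guesses and never terminates the process.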


Alternate chains aren't a broken-input issue; the issue is that not all clients have the same CAs. So if you need to chain to an old 1024-bit root to take care of really old clients, newer clients without 1024-bit roots should still be able to validate. Older versions of OpenSSL need to keep the 1024-bit root around, because they only validate the full chain provided by the server.


To clarify this a bit:

The problem is that you want to retire a 1024-bit CA root, and replace it with a 4096-bit CA root. To do this effectively, new clients need to stop trusting the old 1024-bit CA root, or there's no point doing the transition.

However, you still want certificates that you issue to be verified by old clients that don't know about the 4096-bit CA root.

To solve this, you issue certificates with two alternate chains of trust - one up to the old 1024-bit root, and one up to the new 4096-bit root - and teach the new clients to check all the alternate chains.

It's this last bit that required the code change in OpenSSL, which contained the logic error that resulted in this vulnerability.
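
For illustration, here's a rough sketch (a hypothetical helper, error handling omitted) of what the new-client side of that looks like with OpenSSL's verification API: the trust store holds only the new root, and the server-supplied chain, including the cross-signed intermediate, is passed in as untrusted. On releases with the alternative-chain logic, verification should still succeed by building the path up to the new root.

    #include <openssl/x509.h>
    #include <openssl/x509_vfy.h>

    /* Hypothetical helper: returns 1 if `leaf` verifies against a store
       containing only `new_root`, given the server-supplied `untrusted`
       intermediates (which may include a cross-signed one). */
    static int verifies_against_new_root(X509 *leaf, X509 *new_root,
                                         STACK_OF(X509) *untrusted)
    {
        X509_STORE *store = X509_STORE_new();
        X509_STORE_add_cert(store, new_root);      /* trust only the new root */

        X509_STORE_CTX *ctx = X509_STORE_CTX_new();
        X509_STORE_CTX_init(ctx, store, leaf, untrusted);

        int ok = X509_verify_cert(ctx);            /* 1 on success */

        X509_STORE_CTX_free(ctx);
        X509_STORE_free(store);
        return ok == 1;
    }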


This seems to be a common opinion recently, see https://tools.ietf.org/html/draft-thomson-postel-was-wrong-0...


But Jon Postel didn't mean what people now think he did.

His famous principle is about border cases, when the spec is vague, handwavy or thought by some to be vague. It's not about the other cases.

Remember that Jon Postel was the RFC editor. He didn't want anyone to ignore the RFCs, he wanted the RFCs to be readable and pleasant, and he wanted implementers to do the right thing when an RFC erred on the side of readability.

FWIW I wrote a blog post about this a few years ago, http://rant.gulbrandsen.priv.no/postel-principle


Here's an example of the problem with that though: a mailing-list post[0] by someone in February of this year asking if there's a formal grammar for the DNS zone (master) file format. This is a format that was first loosely specified in an RFC almost 32 years ago, and there still isn't a rigorous definition. BIND now specifies a de facto interpretation with lots of liberal "treat this as a warning" options[1], and new gTLD registries now insist on a subset of the original specification.

HTTP also has corner cases that widely-used implementations simply aren't handling consistently, because the original RFCs are vague, or the ideas being conveyed are buried in even older RFCs that nobody has the incentive to drill into, or simply aren't known to them.

IMHO the IETF really should move to a wiki format, where information and wording changes on a particular protocol can be seen in one place. Plaintext snapshots of particular versions could still be published.

[0] https://www.ietf.org/mail-archive/web/dnsop/current/msg13349...

[1] https://kea.isc.org/wiki/ZoneLoadingRequirements#a3.3RFCimpl...


BTW there's a reason for that. The IETF decided (it must be a couple of decades ago) to restrict itself to matters of the internet. Things like file formats are thus out of scope for RFCs. There have been exceptions, RFC5952 is a good example and I know at least two others, but by and large RFCs are about the internet now, not about file formats or other worthy subjects.


RFC 6120 and 6121 are for XMPP (chat), and define that XML is to be used; they even go into the exact structure of the XML "packets".


Bad example. XML is the wire format in XMPP RFCs, not a file format.


RFC7159? It even recommends a file extension.


That's a perfect example: Publishable as an RFC because the format is used in many APIs on the general internet, but also says a little about a local matter, in this case the file names.

Is the rule a bit messy? Yes it is!


There's a reason why I try to use the tinydns zonefile format (http://cr.yp.to/djbdns/tinydns-data.html) whenever I can.

It's so much simpler to use, and less problematic.


The problem isn't so simple in the world of large-scale protocol design - standards are rarely successful when imposed; they're usually adopted as a reflection of the current implementations. And when you're dealing with multiple independent implementations the variance can be subtle, and the standards are often broken or at best under-specified at first.

When dealing with integration among many parties, there is tremendous pressure to just "make it work". The web arguably is an example of this - the standards were post-facto representations of what's already implemented.

Of course we all hate the long-term implications for our codebases, but "let's force everyone to do it one way through strict behaviour" seems to discount the social dynamics of interoperability.

Moving away from Postel's principle in production will not lead to successful open and interoperable implementations; it will rather trend towards one single implementation, likely open source, that is shared and tweaked by all. That has some positive (interop!) and negative implications (limited ability to innovate / dragged down into programmer religions, etc).


I wouldn't say he is wrong, so much as you just need a dev mode where strict acceptance is the order of the day. You need people to learn to produce correct results, but still be resilient in the field.


This x10. Strict Dev modes are helpful. Strict everywhere just means that there will only be one successful implementation that everyone uses.


It's not a particularly new opinion, just one people refuse to learn. Prior example from 2008: http://www.joelonsoftware.com/items/2008/03/17.html


Unfortunately, I suspect this feature may be necessary in practice in order to actually expire old, insecure CA certificates that are still in widespread use.


This may be true when accepting input over which the user has control so the library can tell him "do it better".

I would say it's also true for security-related protocols, such as in this case.

But for general network protocols some leniency in processing is necessary, or even beneficial for forwards-compatibility.


Not only that, but with less and simpler code, there's less cognitive disincentive for people to take a look at it, see what it's doing, and verify that it does what it says it does.


So true.


I am hardly astonished that a 319-line function that opens by declaring x, xtmp, xtmp2, chain_ss, bad_chain, param, depth, i, ok, num, j, retry, cb, and sktmp variables had a bug.

Before someone provides the standard "submit a patch" retort, I'll note that the variable naming is in full compliance with https://www.openssl.org/about/codingstyle.txt even if the function length isn't. A quick sample of other files suggests the function length matches actual practice elsewhere, too.


And the coding style doesn't exactly help avoid errors, either.

"Do not unnecessarily use braces around a single statement:

    if (condition)
        action();
and

    if (condition)
        do_this();
    else
        do_that();
"

Didn't people learn from the goto fail bug? http://embeddedgurus.com/barr-code/2014/03/apples-gotofail-s...
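
For reference, the goto fail bug was exactly the shape of mistake that brace-less single statements make easy; a simplified sketch (function and variable names here are invented, not Apple's actual code):

    if ((err = verify_signature(ctx)) != 0)
        goto fail;
        goto fail;   /* duplicated line always runs; the checks below are skipped */
    if ((err = verify_hostname(ctx)) != 0)
        goto fail;

With mandatory braces the stray second `goto fail;` would either sit harmlessly inside the first block or stand out immediately in review.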


Personally, I say if a statement continues on a different line, then you should use braces

    //okay
    if (condition) return foo;

    //not okay
    if (condition)
        return foo;

    //not okay
    if (condition) do_foo();
    else do_bar();
In the third case, the else can be considered a continuation. In the first example, there's little chance of confusion or the introduction of an error; in the second and third, that is not necessarily the case.

If it doesn't look/fit well on one line, break it up with braces.


What about:

    if ((somevar != checkvar((byte)othervar)) & (i != 3)) somefunc();
    someotherfunc();

A bit exaggerated, I agree, but not far off from some real-world examples and quite confusing.


Nice, in that case, I might advocate for either somefunc or a function in front of it taking somevar, othervar and i as parameters, with early return statements before falling through to the work, returning a boolean for whether they passed through...

I've seen far, far, far worse...


> Do not unnecessarily use braces around a single statement

If you consider all braces to be necessary, then that's a truism and you can safely ignore it. "I didn't unnecessarily use them: we always use them because that's the safe thing to do."


Maybe the coding standard was written by someone who was used to auto-indentation making this sort of mistake immediately obvious.


Reading your comment I initially assumed {i, j} were iterators, in which case they would be fine by me. Nope.


You seriously have trouble reading "i", "j", "param" and "num"? Hell, "ok", "depth" and "retry" are already in your English-language dictionary!

I'll grant that having variables named with "tmp" is confusing out of context, I guess. But if you're trying to start a Java-style war over this stuff, just recognize that most of the world has moved on and views names like those as perfectly fine when used within standard idioms.


I have trouble understanding what they're meant to signify, yes. I am a regular human with finite intellect; I admit it.

To actually comprehend this function requires storing those 14 names in short-term memory, reading through the over 300 lines of remaining code, filling in bits of the meaning of those names as they become clear, and only then reading the code again with that mental map. That's the case where none of the 14 have slipped my mind by the time I get back around. That just seems like an awful lot of overhead to net something that could be as easy as reading names if they were better chosen.

Just as a demonstration, why don't you time how long it takes you to figure out what j actually is and then report back?


I don't know the codebase, but if for example, there's a project wide standard or convention for what the variable j means, then it could be OK. Or at least not as bad as it first appears. Not all code has to be written for an assumed stranger on the street trying to sight-read it.


This is OpenSSL we're talking aboot here.


The use of shorthand names like that is a strong indication that the variables don't need to be live for ~319 lines. If you reduce the live range of your variables, you reduce the complexity of your function. Less complex functions are less likely to have bugs and are easier to diagnose when they do.


Good points, but if you are able to do that, doesn't it suggest the function could easily be split into multiple, more specific functions?


Yes, of course, it's just the first step. In fact you'd probably need to introduce a couple more variables. For example, i gets defined three times:

  i = sk_X509_num(ctx->chain);
  i = check_trust(ctx);
  i = X509_chain_check_suiteb(&ctx->error_depth, NULL, ctx->chain, ctx->param->flags);
What a mess. I would probably start by moving towards a module / class for all the functions that take either an X509_STORE_CTX *ctx pointer or something accessed through ctx.
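
A sketch of that first step, just shrinking the live ranges; this is illustrative pseudo-refactoring of the same function's internals, and the replacement names are mine, not OpenSSL's:

    /* Instead of one `i` reused for three unrelated results... */
    static void sketch(X509_STORE_CTX *ctx)
    {
        {
            int chain_len = sk_X509_num(ctx->chain);
            /* ... use chain_len only in this block ... */
        }
        {
            int trust = check_trust(ctx);
            /* ... use trust only in this block ... */
        }
        {
            int suiteb = X509_chain_check_suiteb(&ctx->error_depth, NULL,
                                                 ctx->chain, ctx->param->flags);
            /* ... use suiteb only in this block ... */
        }
    }

Once each result has its own narrowly scoped name, the blocks practically name themselves as candidate helper functions.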


Are you kidding me? There is no possible universe in which 'param' is an acceptable name for an argument.


What about this?

  interface Function<T, R> {
      public R apply(T param);
  }


OK. One possible universe.


Technically speaking, that's a parameter, not an argument.


A 319-line function with all those variables is more than a bit big, though. Modern compilers are rather good at inlining code, so splitting it up would have very little, if any, effect on code size. The multitude of temporary variables also suggests that there are probably portions of that function that need to be factored out. OpenSSL suffers from a severe death-by-a-thousand-paper-cuts problem. While its warts aren't that bad when viewed individually, the sheer multitude shows a severe lack of organization of the project from top to bottom.


The interesting part is that the bug was introduced in the latest versions and has been fixed by the person who inserted it :-)

Bug added: https://github.com/openssl/openssl/commit/da084a5ec6cebd67ae...

Bug removed: https://github.com/openssl/openssl/commit/2aacec8f4a5ba1b365...

Although that's just the committer: https://twitter.com/agl__/status/619129579580469248


Looking at the changes that introduced the bug, it's obvious that the nature of the problems being solved is too complex for the changes to be reviewed only "visually". There must be enough external tests to uncover the potential issues. And of course the tests can have bugs too, not covering what needed to be covered. That's why, for such sensitive code, the testers should be the best programmers, with a particular focus on searching for the combinations that don't work as intended.


You're confused. This thread is about OpenSSL. OpenSSL doesn't have tests.


Even if it doesn't have tests at the moment, that doesn't mean it should always remain that way.


Yep, that the bug was recently introduced was implied from the notice they put out.

For interest, the line that was fixed from the first commit is:

https://github.com/openssl/openssl/commit/da084a5ec6cebd67ae...


    /* Remember how many untrusted certs we have */
    j = num;
Flawless.


We probably don't need to worry this time:

https://ma.ttias.be/openssl-cve-2015-1793-man-middle-attack/

"The vulnerability appears to exist only in OpenSSL releases that happened in June 2015 and later. That leaves a lot of Linux distributions relatively safe, since they haven't gotten an OpenSSL update in a while.

Red Hat, CentOS and Ubuntu appear to be entirely unaffected by this vulnerability, since they had no OpenSSL updates since June 2015."


Christ, what a mess of a project. They inserted this after their big promise to do better after heartbleed?

No wonder distros take their time moving to a new version. I really hope one of the alternative SSL libraries gets picked up by the major distros. This is embarrassing, especially for those of us who have to justify FOSS in our environment.

LibreSSL looks promising. Hopefully competition will mean better outcomes for such critical libraries.


You can't be serious here? Bugs are introduced into software all the time. It's a sign of active development.

It's not the number of bugs that matters, or even the fact that new bugs get introduced over time - rather it's the severity of the bugs, how rapidly the bugs are realized, and ultimately how fast they are dealt with.

In this case, it appears to have been a pretty rapid resolution - ie. about 1 month from it being introduced, realized, and fixed.

A lot of folks like to lean on LibreSSL and cite "supposed problems" with OpenSSL, just as you have done now. This is a naive approach -- LibreSSL took OpenSSL, cannibalized and gutted it, and injected all sorts of new, untested, un-vetted code. OpenSSL was written largely by crypto specialists, whereas LibreSSL is mostly a bunch of grumbling developers with little to no prior crypto experience.

There's a reason the world is not jumping on LibreSSL just yet. There's a reason foundations outside of the LibreSSL home (OpenBSD), such as the Core Infrastructure Initiative, have not backed it -- it's simply not ready, is very unproven, and won't be for a long, long time, if ever.

Give OpenSSL a break. It works far better than nay-sayers want to let on, and has done so for almost 2 decades.


> and all sorts of new, untested, un-vetted code injected.

What?


The OpenBSD guys have little to no prior crypto experience? Can you prove this?


It is narrowly true that the libressl devs are not cryptographers, but that's also quite misleading. Lots of bugs, like say... this one, are not crypto related.


I mean, OpenSSH is such a piece of buggy garbage... /s


I once worked with some Linux admins who told me that SSH public key authentication wasn't secure.


While that's certainly possible, it's an extraordinary claim because it flies in the face of generally accepted beliefs. If your coworker was Bruce Schneier, I would pay close attention to his explanation. If they were your standard issue sysadmin types, then I'd want to know:

1) Why they believe so,

2) Why they haven't filed security advisories to advise the rest of us, and

3) Why you don't hear about banks being wiped clean because crackers were able to bypass SSH's security measures.

It's possible they're right, but as with all extraordinary claims, the onus of proof is on the ones making them.


1) Wouldn't tell us

2) I tried to explain millions of people around the world rely on it and use it. I argue it's probably safe (within reason - obviously the weak point is the private key file).

These were also the guys who refused to install packages we asked for from the community Red Hat repository, claiming security vulnerabilities, but then admitted they had installed some packages from there for their own use, for Puppet and other things they do.


So... the standard issue sysadmin types. Sigh. :-(


I agree he is being excessively hard on OpenSSL, but they have had a lot of problems as of late. Software bugs are introduced all the time, you're correct, but projects like OpenSSL should be keeping a much closer eye, because state agencies and other malicious parties are definitely watching development closely.


"Patch provided by the BoringSSL project."

This is an example of them doing better. A bug was found, reported to them, and they responded quickly, giving advance notice too.


This is also an example of open source working, in general. A bug found in one project, and the fix applied to the other.


I think the same could happen in closed source projects as well.


Theoretically, but it's a lot more likely to happen if the projects have almost identical codebases (BoringSSL is a fork of OpenSSL).


from test/verify_extra_test.c:

    Test for CVE-2015-1793 (Alternate Chains Certificate Forgery)
   
    Chain is as follows:
   
    rootCA (self-signed)
      |
    interCA
      |
    subinterCA       subinterCA (self-signed)
      |                   |
    leaf ------------------
      |
    bad
   
    rootCA, interCA, subinterCA, subinterCA (ss) all have CA=TRUE
    leaf and bad have CA=FALSE
   
    subinterCA and subinterCA (ss) have the same subject name and keys
   
    interCA (but not rootCA) and subinterCA (ss) are in the trusted store
    (roots.pem)
    leaf and subinterCA are in the untrusted list (untrusted.pem)
    bad is the certificate being verified (bad.pem)
   
    Versions vulnerable to CVE-2015-1793 will fail to detect that leaf has
    CA=FALSE, and will therefore incorrectly verify bad


So, bad certificate HAS to be signed by leaf certificate, and leaf certificate HAS to be trusted. (And you need two CAs with the same keys)

OpenSSL would accept certs that have been issued by a non-CA cert (which is trusted).

So if you have control over the leaf cert, you can just use it for contacting openssl.

If you don't have control over the leaf cert, you can't issue a bad cert.

Am I missing something?


The leaf cert is signed for evil-bastard.net, but the "bad" cert can be for mail.google.com.


So, updating server side OpenSSL will not close this vulnerability (for servers offering https-protected websites)? Is that correct?

If I understand the advisory correctly then this means that somebody could set up a webserver with a specially-crafted certificate and pretend to be somebody else, assuming that the client is running a vulnerable version of OpenSSL.

Is that right? I wish they would write these advisories in a slightly more helpful fashion.


Yes, this is a client side bug.


And servers that are authenticating client certs?

(clearly there are far fewer of those around but they do exist)


Good point.


In case it's slow:

OpenSSL Security Advisory [9 Jul 2015]

=======================================

Alternative chains certificate forgery (CVE-2015-1793)

======================================================

Severity: High

During certificate verification, OpenSSL (starting from version 1.0.1n and 1.0.2b) will attempt to find an alternative certificate chain if the first attempt to build such a chain fails. An error in the implementation of this logic can mean that an attacker could cause certain checks on untrusted certificates to be bypassed, such as the CA flag, enabling them to use a valid leaf certificate to act as a CA and "issue" an invalid certificate.

This issue will impact any application that verifies certificates including SSL/TLS/DTLS clients and SSL/TLS/DTLS servers using client authentication.

This issue affects OpenSSL versions 1.0.2c, 1.0.2b, 1.0.1n and 1.0.1o.

OpenSSL 1.0.2b/1.0.2c users should upgrade to 1.0.2d. OpenSSL 1.0.1n/1.0.1o users should upgrade to 1.0.1p.

This issue was reported to OpenSSL on 24th June 2015 by Adam Langley/David Benjamin (Google/BoringSSL). The fix was developed by the BoringSSL project.

Note

====

As per our previous announcements and our Release Strategy (https://www.openssl.org/about/releasestrat.html), support for OpenSSL versions 1.0.0 and 0.9.8 will cease on 31st December 2015. No security updates for these releases will be provided after that date. Users of these releases are advised to upgrade.

References

==========

URL for this Security Advisory: https://www.openssl.org/news/secadv_20150709.txt

Note: the online version of the advisory may be updated with additional details over time.

For details of OpenSSL severity classifications please see: https://www.openssl.org/about/secpolicy.html


"No Red Hat products are affected by this flaw (CVE-2015-1793), so no actions need to be performed to fix or mitigate this issue in any way." https://access.redhat.com/solutions/1523323


Well there goes my long laborious afternoon of sysadmin work that I wasn't looking forward to! :-)


Why has the adoption of alternative SSL software been so low? We have libressl, boringssl, something from Amazon? Very few Linux distributions seem interested in shipping alternative SSL software.


Probably because once you wrap your code around one SSL stack it's hard to migrate it to another, so you stick to the one you used first. OpenSSL, for instance, isn't just an SSL library... since C has no standardized "stream" functionality, it's a whole big generic streaming library with pluggable modules for various streams and the ability to write your own. Once you're tied to that, you can get stuck pretty hard if you don't properly wrap it with an abstraction, which C is, ahhh... let's say not really the best at anymore, which is of course partially because every language since C has known that it needs to be better than C at this to even be considered by anybody.
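
To give a flavour of what's meant by "generic streaming library": OpenSSL's BIO layer lets you stack filters over sources and sinks, and a lot of application code ends up written directly against it. A minimal sketch (error handling omitted) that base64-encodes into an in-memory buffer:

    #include <openssl/bio.h>
    #include <openssl/evp.h>

    /* A filter BIO (base64) pushed onto a sink BIO (memory buffer). */
    static void demo(void)
    {
        BIO *mem = BIO_new(BIO_s_mem());
        BIO *b64 = BIO_new(BIO_f_base64());
        BIO *out = BIO_push(b64, mem);    /* writes to `out` go through b64 into mem */

        BIO_puts(out, "hello, world");
        (void)BIO_flush(out);

        char *encoded = NULL;
        long n = BIO_get_mem_data(mem, &encoded);  /* borrow the encoded bytes */
        (void)n;

        BIO_free_all(out);                /* frees the whole chain */
    }

Swap BIO_s_mem() for a file or socket BIO and the rest of the code is unchanged, which is exactly the sort of coupling that makes migrating off OpenSSL painful.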


LibreSSL is a drop-in replacement for OpenSSL.


It /was/ a drop-in replacement at a single point in time, but it isn't if you make use of any of the recent improvements OpenSSL has added, for example auto-selection of DH/ECDH primes and curves.

Note that recently a big clean-up of the OpenSSL codebase has taken place, so OpenSSL master no longer exposes the internals of structs etc., meaning it's both more auditable and more maintainable. This code is not yet released, however.


Well that's the intention... is it really in practice? They have deliberately removed a lot of support for rare architectures and features.

Distributions such as CentOS/RHEL which are focused on stability are not going to replace OpenSSL in any existing releases.


I can confirm that it currently just "drops in" when linked against nginx. This initially took a few small patches, but these have been merged into LibreSSL mainline.

I've been testing this against each release for some time and I'm very happy with it.


You missed the two oldest and most mature alternatives: libnss (firefox) and gnutls.


Because, quite frankly, many FOSS components are not that actively maintained, and all of the alternatives either raise licensing considerations, were not intended for use by general projects, or are not significantly mature yet.

Many projects have also invested heavily into optimizing the performance of OpenSSL itself or the use of its interfaces.

You can't sprinkle "magic SSL dust" over these components and just start using an alternative. In some cases, significant, non-trivial changes would be required to change which library is used.

The reality is, as fast as OpenSSL development is moving now, it remains the better option for a lot of projects because of the significant investments already being made and concerns I mentioned earlier.


Because despite OpenSSL's bug history, nothing has proved more reliable so far.


> nothing has proved more reliable so far

Huh? Of the 22 vulnerabilities OpenSSL has disclosed since March (4 high severity, 14 moderate, 4 low), LibreSSL has been vulnerable to 8 (0 high, 6 moderate, 2 low).

References:

March: https://marc.info/?l=openbsd-cvs&m=142677372515025&w=2

June: https://marc.info/?l=openbsd-announce&m=143406498020131&w=2

Today: https://marc.info/?l=openbsd-tech&m=143645910727507&w=2


And how long has LibreSSL been around? A year? It's hard to call that proof. It's easy to point to intervals longer than LibreSSL's whole lifetime with no security bugs in OpenSSL.


LibreSSL is a cleanup of the OpenSSL base. They started with OpenSSL and worked from there. They have mostly deleted code, not added it, so they shouldn't be adding many new vulnerabilities. On top of this, it is being written by the OpenBSD/OpenSSH people, who have a good history of writing secure software.


Still, a single year is hardly evidence of quality.


Some linux distributions allow you to choose your implementation of SSL upfront, be it libressl or openssl.

See: Exherbo Linux (http://www.exherbo.org/docs/eapi/providers-and-virtuals.html). This isn't one of the big distros, but you do have a choice.




Debian stable/oldstable is not affected. Only in unstable: https://security-tracker.debian.org/tracker/CVE-2015-1793


Well on Ubuntu I see nothing yet... http://www.ubuntu.com/usn/trusty/ I do not know if this is good or bad :(



Thanks for the link +1



Welp, you saved me some work today!

I was considering using the `ec2.py` script from Vagrant's dynamic inventory docs and then running SSH command execution over all our instances to upgrade the packages for both Ubuntu and AWS AMIs (yum), just to be safe. Guess I don't need to after all!


Changes between 1.0.2c and 1.0.2d [9 Jul 2015]

  *) Alternate chains certificate forgery

     During certificate verification, OpenSSL will attempt to find an
     alternative certificate chain if the first attempt to build such a chain
     fails. An error in the implementation of this logic can mean that an
     attacker could cause certain checks on untrusted certificates to be
     bypassed, such as the CA flag, enabling them to use a valid leaf
     certificate to act as a CA and "issue" an invalid certificate.

     This issue was reported to OpenSSL by Adam Langley/David Benjamin
     (Google/BoringSSL).
     [Matt Caswell]


It's worth noting that only releases since June 2015 are affected


That means, only releases that have no other known high severity bugs...


An interesting coincidence: I noticed what I thought was (and maybe is) a similar bug in the Elixir hex module on the same day that this bug report was submitted to OpenSSL. If you look at the hex partial chain method (https://github.com/hexpm/hex/blob/master/lib/hex/api.ex#L59-...) you can see it goes through all the certificates the other party supplied, starting from the first one, and tries to find one that is signed by a certificate in the trust store. It then explicitly returns it as the trusted_ca, which effectively means the certificate is treated as if it has the CA bit set on it.

In order to exploit the attack in hex you need to find a CA that will directly issue certificates off of a certificate in a trust store. Apparently this is not the recommended policy for CAs, so I made this tweet: (https://twitter.com/benmmurphy/status/613733887211139072)

'does anyone know a CA that signs directly from their root certs or has intermediate certs in trust stores? asking for a friend.'

And apparently there are some CAs that will do this. In the case of hex I think the chain you need to create looks something like this:

    RANDOM CERT SIGNED BY ISSUER NOT IN TRUST STORE
    |
    V
    VALID_CERT_SIGNED_BY_CERT_IN_TRUST_STORE (effectively treated as CA bit set)
    |
    V
    EVIL CERTIFICATE SIGNED BY PREVIOUS CERT


AFAIK the Baseline Requirements don't allow it but old certs that are not expired may still exist. It was one of the reasons why the e-Guven root was removed from Mozilla.


The latest version (2.3.7) of the official OpenVPN client is vulnerable, as is Tunnelblick for OSX. No fix has been published yet. The OpenVPN clients for Android and iOS are not affected.

See https://mullvad.net/en/v2/news for more details.



Note that this is specifically OpenVPN on Windows, since the Windows installers ship with their own openssl dll (and as already said by the other commenter, a new installer was made available around the time of your post). All other platforms simply use the system library.


This sounds really similar to the old IE bug that didn't check the CA flag - http://www.thoughtcrime.org/ie-ssl-chain.txt


A lot of these SSL vulnerabilities show that complexity is an inherently bad thing for security. In general, bugs in a system grow exponentially, not linearly, with system complexity. For security that means that the addition of a feature, option, or extension to the security layer of a system exponentially decreases its trustworthiness.


Seems to be OK for anyone using non-beta versions of Ubuntu as well:

http://people.canonical.com/~ubuntu-security/cve/2015/CVE-20...


I've got a few sites using OpenSSL certs; do I need to do anything?


Unless you are using client side certificates, this one is not your problem.

But everybody must upgrade their browsers ASAP.


Well, Firefox and Chrome use NSS, IE uses SChannel. Not sure about Safari or mobile browsers, but I believe the majority of desktop browsers will be safe.


Safari uses Apple's own TLS library, SecureTransport. Apple deprecated OpenSSL long ago.


Is that right? My reading of it is that this affects all cases where you verify the certificate.


It's right and wrong.

It's wrong in that this is very much your problem. It's right in that there is nothing you can do about it except hope that all the people who might try to connect to your website are using a patched (or pre-broken) version of OpenSSL.

Patching your server-side version of OpenSSL (while a good idea) will not solve the problem because certificate verification is done (as it must be) browser-side.


Yes and most web servers do not use client certificates and do not have any need to validate certificates.


Yes but in practice a lot of them do, for example, when they download a library from PyPI or rubygems...? Unless we're talking just about when people use client certificates as authentication?


> PyPI or rubygems

These are not activities which a web server does though.

These are activities usually triggered by developers or administrators, not by web servers remotely and they have to do with a web application, not a web server.

And even then, for this attack to be meaningful you'd need to have an active MITM between the server and PyPI or rubygems at the time when the developer or administrator was updating this. In a good datacenter, this should not be possible. Employees of the DC, and national security agencies which may be able to perform active attacks in such datacenters, would probably be the biggest risk.


>And even then, for this attack to be meaningful you'd need to have active MITM between the server and PyPI or rubygems

Yeah. Which is pretty much the thing (or one of the things anyway) that TLS is supposed to prevent!

>These are activities usually triggered by developers or administrators, not by web servers remotely and they have to do with a web application, not a web server.

Some strange distinctions. A server running a web application may well want to make requests to PyPI when being provisioned.


This should only be the case for the build server that creates the packages that get installed on production.


Maybe that's how you do it, but I guarantee that there are loads of production webservers out there grabbing stuff from all over the web and building it in situ.


Any proxy or loadbalancer that uses https should validate certificates, however.


SSL does not necessarily mean OpenSSL is involved, but it could. Follow the advisories.


Nothing I can find in yum for CentOS 6 or 7


They aren't affected.


At first I thought this was the result of that Hacking Team dump, but it seems this was reported prior to that.


Good work on finding and fixing the bug to those involved. I don't think this is said often enough.


It's for old version. For example actual debian not affected. https://security-tracker.debian.org/tracker/CVE-2015-1793


'actual debian' made me smile .)

all quibbling aside, most people would probably need an explanation to understand your post, as the recent version of debian is indeed affected, as in debian-unstable (sid)'s openssl.


s/old/recent/


Well this isn't how I wanted to start my morning


How is it that we still depend on something so broken?


If you think you can write a better implementation then put your hands where your keyboard is. Show us the code.

If not, submit patches.

An OpenSSL team member said, "If you're in a position to offer technical criticism you're in a position to offer technical help." While it sounds like they're pleading for help, it's because WE ARE. There are 3 full-time maintainers, 1 is a dog :P and only ~10-16 regular patch submitters.


The bit you don't cover is where you repeatedly submit patches (in my case, for example, to fix documentation and improve testing) which are ignored.


This is not true anymore. At the time, OpenSSL really had a single maintainer, and was only receiving $2,000 a year in "donations". Not exactly something to keep someone working full time on the project with.

A lot of that has now changed with the Core Infrastructure Initiative.


I do agree the situation is now better, but personally I've still found contributing to be a lot harder than it should be even for trivial fixes.


> If you think you can write a better implementation then put your hands where your keyboard is. Show us the code.

Here you go!

https://github.com/mirleft/ocaml-tls


The world awaits your perfect unbroken code...


agl++;




