The most dangerous code in the world (stanford.edu)
406 points by gmcabrita on Oct 24, 2012 | 130 comments



The worst example from this paper is Curl's API.

Curl has an option, CURLOPT_SSL_VERIFYHOST. When VERIFYHOST=0, Curl does what you'd expect: it effectively doesn't validate SSL certificates.

When VERIFYHOST=2, Curl does what you'd expect: it verifies SSL certificates, ensuring that one of the hosts attested by the certificate matches the host presenting it.

When VERIFYHOST=1, or, in some popular languages, when VERIFYHOST=TRUE, Curl does something very strange. It checks to see if the certificate attests to any hostnames, and then accepts the certificate no matter who presents it.

Developers reasonably assume parameters like "VERIFYHOST" are boolean; either we're verifying or we're not. So they routinely set VERIFYHOST to 1 or "true" (which can promote to 1). Because Curl has this weird in-between setting, which does not express any security policy I can figure out, they're effectively not verifying certificates.
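To make the failure mode concrete, here's a minimal PHP sketch; the endpoint is hypothetical, but CURLOPT_SSL_VERIFYHOST is the real option:

    $ch = curl_init('https://api.example.com/'); // hypothetical endpoint
    // Looks safe, but TRUE promotes to 1: curl only checks that the
    // certificate contains *some* Common Name, never that the name
    // matches the host we're actually talking to.
    curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, TRUE);
    // Actually safe: 2 requires the certificate to match the host.
    curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 2);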


While you're attacking curl, you (intentionally?) forget to mention that the default value is 2, which is perfectly safe. I don't think it's "reasonable to assume" that the parameter is a boolean when it's well documented. If programmers are lazy, they make many kinds of mistakes.


Yes, yes, I know, stupid Paypal, stupid Amazon, we get it. VERIFYHOST=1 doing nothing makes perfect sense, you just have to read the documentation. Of course.


Yes, Amazon et al. made some mistakes due to being lazy and PHP being the nice language that it is. They have fixed it. Let's go ahead and find another non-default, documented parameter value in a library and call it a vulnerability.


Ok. Your turn. Go!


Policy vs Mechanism.


I think it's highly questionable whether a default value of 2 is an appropriate way to handle critical functionality. A DISABLE_HOST_VERIFICATION option would have been much better, similar to the example code.

http://curl.haxx.se/libcurl/c/https.html


It's bad API design. Yeah, I get it, you read your docs properly and would never make that mistake. But most programmers are not as gifted, and good API design is about mitigating inevitable human error.


Most programmers are not gifted enough to read the docs? Well there's the problem.

Also "The most dangerous code in the world" is a bit misleading. I would say the code that runs a Predator drone is a little higher in the danger department.


Well, maybe it's 6 of one and half a dozen of the other.

But it's certainly thoughtless API design. The parameter sounds boolean (am I going to verify the SSL host or not?), and essentially it is, except that not verifying it for some reason takes two forms, and a true value turns out to specify one of them.

Whereas I doubt that the giftedness of the programmer has much to do with their reading the docs. A gifted programmer might assume that the API designer would do the same thing they would, which is to specify this with a boolean argument.


Or rename the parameter to CURL_SSL_VERIFICATION_LEVEL.


No! Because CURL_SSL_VERIFICATION_LEVEL 0 is the same as CURL_SSL_VERIFICATION_LEVEL 1: meaningless.

The problem isn't simply that VERIFY_HOST is documented as a scale from 0-2 that people are mistaking for a boolean by not reading the docs. The problem is that the scale is pointless, because the value it is expressing actually is a boolean: you are either verifying SSL certificates properly or you might as well not verify them at all. There is no real scale to express. The whole option actually documents a misunderstanding of SSL/TLS on the part of curl's author.


>Most programmers are not gifted enough to read the docs? Well there's the problem.

Which one's easier - fixing an API, or fixing most programmers?


I don't know if it's fair to blame laziness. Acting in haste, maybe? A ternary implementation for something that intuitively seems like a binary choice is a little... well... unintuitive. That said, when building an application of significant scale, yeah, you should probably read the documentation of anything you aren't familiar with.


Reading the documentation is important, but tests are even more so. A simple test run against a bad certificate would have caught a lot of these issues. Small devs often push out software without having much time to read docs or write tests, but these are big players with lots of resources and real money at stake.
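A minimal PHP smoke test along these lines would do; the bad-cert host is hypothetical, and any server presenting a self-signed or mismatched certificate works:

    // With verification left at curl's defaults, a request to a host
    // presenting an invalid certificate must fail.
    $ch = curl_init('https://self-signed.example.com/'); // hypothetical host
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    if (curl_exec($ch) !== false) {
        echo "FAIL: connected despite an invalid certificate\n";
    }
    curl_close($ch);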


I think the real problem is the name of the option, not the range of its accepted values. Had they named it HOST_VERIFICATION_LEVEL or something similar, no one would think it's boolean.


In that case "TRUE" shouldn't be a valid value.


I think that's because in the weakly typed language used, TRUE is a constant with a value of 1.


Perhaps this is a dumb question, but when would VERIFYHOST=1 be useful?


On Twitter today, Daniel Stenberg, Curl's maintainer, said:

@pmjordan @tqbf for test, and also for the same reason browser-users just click yes yes yes on cert dialogues

He seems like a nice guy and being the sole maintainer of Curl seems like a tough and probably unrewarding job. But that said, it's hard to shake the impression that VERIFYHOST=1 exists because people want to believe they're doing SSL/TLS securely, but have trouble getting it configured properly, and that VERIFYHOST=1 exists primarily to give people the feeling of having configured SSL/TLS well without making them actually do that.


> and that VERIFYHOST=1 exists primarily to give people the feeling of having configured SSL/TLS well without making them actually do that.

API doc: "When the value is 1, the certificate must contain a Common Name field, but it doesn't matter what name it says. (This is *not ordinarily a useful setting*)."

[emphasis mine] It's hard to get clearer than that.

http://curl.haxx.se/libcurl/c/curl_easy_setopt.html#CURLOPTS...


I think we are all clear that if library users carefully read the documentation, this is an avoidable flaw. I'm not sure what that has to do with my point. If even the Curl author thinks VERIFYHOST=true isn't useful, why does it work that way? The answer is: it shouldn't.

"Here's an option that you could reasonably expect to be boolean. And lo! the code appears to work if you treat it as a boolean. But, just to mess with you, I introduced an apparently useless option to handle the 'true' case in that boolean, and introduced an 'even truer' value that does what you expected".

This is simply a terrible, dangerous design. It is, like I said on Twitter today, a vulnerability in Curl, and one that should be fixed. The option expressed today by VERIFYHOST=1 should instead be the case when VERIFYHOST=-1 or VERIFYHOST=666. Meanwhile, VERIFYHOST=1 should, tomorrow if possible, do exactly what VERIFYHOST=2 does today.


> I'm not sure what that has to do with my point.

The point of yours that I addressed was: "that VERIFYHOST=1 exists primarily to give people the feeling of having configured SSL/TLS well without making them actually do that."

Put simply: It does not exist primarily for that purpose.

I can't make it any clearer than that.


Oh yeah? Interesting. Tell me, what is the point of checking to see if a certificate has a common name field in it but then not checking to see if that common name is related to the connection bearing the certificate? Make it clear for me, will you?


That option exists to punish those evil people who don't read the documentation.

I'm kidding... I think.


Of course, who it really punishes is the user.


Some developers seem to believe that more options are always better, even if removing the bad options makes the software better and easier to use.


> Put simply: It does not exist primarily for that purpose.

You fail to provide any proof for that statement. The API doc merely states what the setting does; it says absolutely nothing about its purpose, and neither do you.


You do realize that the curl API says this:

> The default value for this option is 2.

In other words, there is absolutely no issue here. It does the right thing out of the box.


This is definitely NOT a vulnerability in curl. I hate it when security people treat everything black and white. There ARE other things to consider.


Such as what? When VERIFYHOST=1, curl has basically turned the security part of SSL/TLS off. That seems like kind of a big deal for an HTTPS/SSL API.


> I hate it when security people treat everything black and white.

That's because experience shows if things are "grey" then someone can almost certainly break in.


I'm not saying this shouldn't be changed, but surely it's not a change that can be made without thought or concern either. Changing the behavior overnight will undoubtedly break servers and scripts.


> Changing the behavior overnight will undoubtedly break servers and scripts.

It will only break servers and scripts that currently operate under a false sense of security. The maintainer should file a CVE, push the fixed version and let the distros sort out the rest. And he should do it ASAP.


It's easy to get much clearer than that. Instead of "1", use a constant with a name like "VERIFY_HOST_SETTING_INSECURE_IGNORE_COMMON_NAME".


When it comes to API design, a good rule of thumb is that parameters that can be either 0, 1 or 2 will cause bugs.


More generally: "when it comes to API design, a good rule of thumb is that parameters will cause bugs."


No, I was actually trying to be specific. There's usually nothing wrong with 0/1 or 1-9.

Empirically, 0/1/2 either means no/yes/yes-and-something-obscure-on-top-of-that, or, worse, as in this case, no/kind-of-but-not-really/yes.


It's akin to http://c2.com/cgi/wiki?ZeroOneInfinityRule. If you started out thinking it was just a switch, you should be very reluctant to add a third state without rethinking what you really need to express, or you end up with my favorite from TheDailyWTF:

  enum Bool { 
      True,
      False,
      FileNotFound
  };
This would have been much safer as separate AUTHENTICATE_CERT (which checks the CA sig and expiry and revocation) and VERIFY_CERT_COMMON_NAME options, both of which default to true.
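A sketch of that hypothetical design, PHP-style; neither constant exists in libcurl:

    // Hypothetical options; not part of the real libcurl API.
    curl_setopt($ch, AUTHENTICATE_CERT, true);       // CA sig, expiry, revocation
    curl_setopt($ch, VERIFY_CERT_COMMON_NAME, true); // name must match the host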


Your more general adage is useless. The one you responded to is useful. A more useful, equally general adage would be "the more choices in selecting parameters to an API, the more likely there will be bugs consuming that API."


In this case I think it wouldn't cause bugs if the effects of 1 and 2 were switched. Then everybody would use it like a boolean and that would be fine; only the few people who really want the name check disabled would use that option.


Networks where users have failed to set up proper SSL CAs for internal use. I've seen lots of customers with failed CA setups, self-signed certs everywhere, etc.


Yes, but VERIFYHOST=0 handles that case, and does what users expect.


At least the default value is 2.


Which, as the authors show in sections 7.1 and 7.2 of their paper, many PHP developers, like the authors of the Amazon Flexible Payments Service SDK and the PayPal Payments Standard SDK, will happily override with true.


You would have to read the docs to find the parameter you wish to override, would you not? I don't see how the curl behaviour can be faulted. If you are overriding a default, you are responsible. And you are reading the docs anyway.


Many times as a developer I have followed other people's example code. And if I see someone's code say

    curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, TRUE);
I and any normal developer are going to think "oh, this is important, I'd better make sure to leave it in there."

Configuration audits aren't as expensive as code audits, but they are still expensive.


This is like saying any API is reasonable as long as it's documented. If you read the paper, they found numerous examples of people who clearly weren't intending to disable SSL certificate validation but who nonetheless overrode that default.


No, not any API. But if the defaults err on the side of caution, which is the case here, and the non-default documentation is complete and coherent, I think that's pretty good. I don't think I would call this situation unreasonable, though I do agree that it could be improved.


There's a simple concept that one should bear in mind when designing APIs: "pit of success" (credit to Rico Mariani).

That is, failure should take effort. Dangerous options should be more laborious to express. The happy path should be the most concise and straightforward path.


I would say that by having the default be the safer option, the API is in line with that philosophy.

It takes more effort to override the default and shoot yourself in the foot.

Granted, it could be better. I guess that strange option could be a completely separate command parameter with a very obvious name. Then again, it's such a strange behaviour, maybe it just shouldn't even be an option at all.


The effort it takes to do something that's unsafe should be laborious and not simply require adding a single line of code.

This API doesn't fit with the philosophy because it takes a negligible amount of work to walk down the unsafe path.


Of course the developers are at fault here, but good API design would be to not invite developers to mess things up.

When designing the Curl API, it should have been obvious that VERIFYHOST was likely to be set incorrectly, especially once it was used in languages that coerce true to 1.


I will sometimes override default values in my code even when the defaults do what I want.

Sometimes it's just because I like to be explicit and sometimes I'm concerned about the default being changed in a future version of the library or due to a server with a different config.

Not to mention that a lot of code ends up on the internet and is often blindly copied and pasted by programmers in a rush.


I agree that these APIs are fundamentally difficult to use correctly (sometimes it almost seems as if they've been designed to trick you), and that developers commonly get them wrong, but this paper is perhaps a little more inflammatory than it should be.

They cast a really wide net, looking for as many examples as possible where non-browser applications fail to do SSL validation correctly, but then conclude that this will result in a security compromise without fully examining the implications.

For instance, they point out that many SDKs for Amazon FPS don't validate certificates correctly. But I didn't see them mention that the FPS protocol does its own signature-based authentication and that credentials are never transmitted in the clear: it was essentially designed to operate over an insecure transport to begin with.

Likewise, they point out an "unsafe" construction that an Android application that I wrote (TextSecure) uses. But they don't mention that this is for communication with an MMSC, that this is how it has to be (many don't present CA-signed certificates), and that the point of TextSecure is that an OTR-like secure protocol is layered on top of the base transport layer (be it SMS or MMS).

So I think the paper would be a lot stronger if they weren't overstating their position so much.


One of the authors here... The paper is accompanied by a FAQ, which among other things explains the impact on specific software:

https://docs.google.com/document/pub?id=1roBIeSJsYq3Ntpf6N0P...

For example, broken SSL in Amazon FPS allows a MITM attacker to forge instant payment notifications and defraud merchants who use vulnerable SDKs. This is a real vulnerability, acknowledged by Amazon.


Oh thanks, I think that FAQ is exactly what I was looking for. I realize we might not be your target audience, but it seems like it'd be worth including those impact analysis statements in the paper itself, since they're short and do really help to clarify the situation.

I'll have to take a second look at the instant payment notification forgery. Last I used FPS, I remember the transaction notifications being signed. Maybe "instant payments" are a new thing.


Can you address his specific point about TextSecure?


please would ya?


Another author here. With regards to TextSecure specifically, I will quote the code in the paper here:

  schemeRegistry.register(new Scheme("http",
      PlainSocketFactory.getSocketFactory(), 80));
  schemeRegistry.register(new Scheme("https",
      SSLSocketFactory.getSocketFactory(), 443));
  ...
  HttpHost target = new HttpHost(hostUrl.getHost(),
      hostUrl.getPort(), HttpHost.DEFAULT_SCHEME_NAME);
  ...
  HttpResponse response = client.execute(target, request);

Viewing the code sample from the paper, you can see that an SSLSocket was meant to be used when the connection was over HTTPS. However, this use of the API results in the request being sent over HTTP instead, because the HttpHost is constructed with HttpHost.DEFAULT_SCHEME_NAME, which is "http". The argument about MMSCs not presenting correct certificates makes less sense here, given that the code sets up the SSL API anyway.

We clearly qualify in the paper that this is not exploitable directly.


Nit: "May not result in exploitable vulnerabilities" is not "clear qualification" that a vulnerability isn't exploitable directly.


Video of Moxie at Black Hat 2012, for deep essential background on the subject:

SSL and the Future of Authenticity

http://www.youtube.com/watch?v=Z7Wl2FW2TcA

What's the status of the Convergence SSL alternatives that were going to be built into Chrome/FF?

http://convergence.io/


Trevor Perrin and I are actually making some encouraging progress with TACK, which is a less controversial proposal with fewer moving parts. It's for dynamic certificate pinning rather than a full CA replacement, but we feel that it takes a big bite out of the problem and is potentially a step on the path out of the current mess.

The internet draft and reference code can be found here: http://tack.io


is there a faq or tack for dummies? i'm reading the rfc, but somewhat confused.

edit: http://blog.cryptographyengineering.com/2012/05/tack.html helps (i was missing that it is in addition to tls, so it's like perspectives / network notaries, but over (limited) time, for a single client, rather than over multiple clients)


Why are you shifting from convergence? It seemed like such an ingenious solution.


I'm not Moxie, but one attractive thing about TACK is that it standardizes something browser vendors already do: if you're on a short list of sites trusted or taken seriously by Google, for instance, your certificates can be "pinned" in Chrome; essentially, Chrome builds in a notion of what your certificate is supposed to be. As a result, no matter which CAs have been compromised by which foreign governments, Chrome isn't going to believe that a pinned site, like MAIL.GOOGLE.COM, is represented by a Diginotar or Comodo certificate.

The obvious problem with that is that you have to call in a favor from Google to get that level of security. TACK is a mechanism that allows any site to get something comparable.

Another attractive thing about TACK is that it follows a model that other security features in the browser already use. For instance, the HSTS header is a widely-supported feature that allows websites to instruct browsers to remember that a site is intended to be reached only via HTTPS. TACK does something similar, but with a much more useful assertion.


Yep, it also has benefits to the site. AGL is quite generous with his time in terms of accepting static pin requests, but it can become a difficult situation for large website operators. It's a little nerve-wracking to know that the fastest you can make a change is 10 weeks out (the expiration for Chrome pins post-build), and some of those pin lists get pretty long (CDNs, multiple CAs for whatever reason, multiple SPKIs per CA, etc).

TACK is designed to alleviate that pain for the site owner by providing flexibility, and by eliminating even the CAs the site uses from its scope of exposure.


I conceptualize Convergence as providing trust agility for situations where a client needs third party verification. TACK is about reducing the number of situations where we even need to trust a third party at all.

The latter helps the former by making it easier to deploy. If TACK were the norm, then the only purpose CAs would serve is to introduce clients to websites they have never seen before (rather than authenticating every single connection to a website during every page load to that website).

By taking a bite out of the problem, we feel the remainder will be easier to solve. And yeah, hopefully we can position convergence as that solution.

It's also easier to get TACK done with browser vendors, simply because it's well encapsulated as a TLS extension, is fairly uncontroversial, and requires them to write less code. Basically, we feel it's a good first step.


One question I have about convergence. I understand how it helps prevent MITM attacks by getting consensus from a trusted third party as to the authenticity of a particular cert.

However, what happens if the MITM attack is on the other end? In other words, somebody has got into a hosting provider's network and is MITMing a bunch of traffic to some of their servers.

They could use this to pass back bullshit certs/public keys to all clients (including notaries) who connect to servers they have MITMd.

One way to prevent this of course would be to have the server keep its own list of notaries, self-check every so often, and alert clients if something appears wrong.

However here you are relying on server administrators keeping this configured and working. I could imagine less scrupulous administrators on strict SLAs disabling this and letting it fail in a way that is silent to the end user to avoid downtime. This would be more difficult to do with the traditional CA structure since the attacker would need a valid cert for the site or would need to SSL strip everything (which would eventually get noticed).

Or do I have this wrong and it is intended to augment the existing CA structure rather than replace it?


Since you were mentioned by name in the paper, and you consider their analysis to be incomplete, you should email the authors.


You assume he hasn't? Did you notice how many times he was mentioned in the paper? This is one of Moxie Marlinspike's research areas.


Yes, I do assume so, since neither the authors nor Moxie himself indicate that they have had any communication. I cite people all the time without contacting them, and I have been cited without being contacted.


I'm really just commenting to note that Moxie Marlinspike isn't just the author of some random piece of software cited in the paper.


The paper makes that quite clear by calling him an SSL expert and referencing two of his vulnerability discoveries. Academic papers are stingy with both personal praise and references. Anyone who receives either one of those I assume is an expert in the field.


Academic papers are certainly not stingy with references.


In my experience as an author and reader of such papers, they are. References take up space, and every reference you make means less of your own work that you can discuss. So I think very carefully about who I'm going to give my precious paper-space to.

Don't confuse a paper having 30+ references with being generous. If they could have supported and justified their own work with less citations, they would have.


I have somewhat different experience. I usually include more references than strictly necessary to support the work, because there's an expectation of general credit-giving: if someone else works in a related area, even if their work isn't directly on-point or needed in the specific paper at hand, it's often expected that they get a generic nod in the "related work" section, and reviewers may complain if that's not done. But often the reference amounts to little more than a friendly ack: yes, I know paper [2] exists, and I hereby acknowledge it.


My last (ok, only) published paper cited 47 other papers or books. I felt absolutely no compulsion to reduce (or, for that matter, increase) that number, nor does the number of citations have any connection to the amount of time I could discuss my own ideas ("See \textcite[p 6]{someone2005}" doesn't take much space).


It's not the citation in the paper, it's the two or three lines the reference itself takes up in the references section. When we publish papers, we are always pressed for space, and we always have to decide what we have to cut. I can't speak for where you publish, but in computer science conferences, we have page limits.


Nah, these are academics, so they would've supported and justified their work with fewer citations


The title should be renamed to:

Many security flaws found in commonly used SSL libraries.

Other than that, it is a great find.


Thanks for de-FUDing the title, I scanned the comments hoping someone had done that =)


How ironic. Even these guys hosting a paper about SSL can't host their stuff securely on an HTTPS server.

<base href="http://crypto.stanford.edu/~dabo/pubs/pubs.html">

This causes the page to throw an HTTPS warning, "this page loads insecure content", due to the CSS being loaded over HTTP.


Yes, yes, stupid Dan Boneh, extracts SSL keys from servers over the Internet using timing measurements, but can't properly write an HTML page, we get it. Credibility -> toilet.


No, this is not what I meant. I pointed out that even if you are a security expert, technology stacks (HTML/HTTP in this case) are not designed well enough to make security easy. IOW my anecdote confirms the paper's conclusion.


Because being talented at one thing means you're talented at everything.


The two things here being "finding critical vulnerabilities in the TLS protocol and its implementations, and generally being one of the most highly-regarded academic practical cryptographers" and "writing HTML". Trenchant point.


Exactly (I think people might assume I was disagreeing), the skill sets are wildly different in many ways.


From the PDF linked in the article:

"Not the most interesting technically, but perhaps the most devastating (because of the ease of exploitation) bug is the broken certificate validation in the Chase mobile banking app on Android. Even a primitive network attacker—for example, someone in control of a malicious Wi-Fi access point—can exploit this vulnerability to harvest the login credentials of Chase mobile banking customers."


Further down in the PDF, the authors present the decompiled source of the Chase app:

  public final void checkServerTrusted(X509Certificate[]
      paramArrayOfX509Certificate, String paramString)
  {
    if ((paramArrayOfX509Certificate != null) && (
        paramArrayOfX509Certificate.length == 1))
      paramArrayOfX509Certificate[0].checkValidity();
    while (true)
    {
      return;
      this.a.checkServerTrusted(
          paramArrayOfX509Certificate, paramString);
    }
  }
Good to know this isn't used for anything critical, just for, um, mobile banking.


Okay. Please tell me I'm reading that wrong.

   void foo(X509Certificate[] a, String unused) {
     if( a != null && a.length == 1) a[0].checkValidity();
     while(true){
        return;
        /* never executed stuff */
        foo(a, unused);
     }
   }
X509Certificate.checkValidity()'s javadoc: http://docs.oracle.com/javase/1.5.0/docs/api/javax/security/...

Selections from above link:

Checks that the certificate is currently valid. It is if the current date and time are within the validity period given in the certificate.

  Throws:
  CertificateExpiredException - if the certificate has expired.
  CertificateNotYetValidException - if the certificate is not yet valid.
There has got to be more to their code than this. It does nothing useful except make sure a random certificate is not expired. If there is no certificate, it doesn't fail or return false or anything. What is this!?

Could this be a profiler-optimized set of code that really had a LOT more code to it? But then why would the bit of random junk after the return statement exist? Did no one do a proper code-review of this???

[edit: I used Java SE 1.5 above, but I just checked Android's version and it looks to be the same, just less verbiage in the docs.]


It looks a lot like the output from an Android decompilation tool... which tends to return some weird stuff. That code segment would never compile (code immediately below a return/break/continue/etc. statement causes a compilation error).


Can you elaborate here? Can you point to an example for some weird Android decompilation?


Just run the dex2jar tool on any android .apk and view the results in jd-gui. I've looked at a few apk's and weirdness like the one noted above is quite common. The code is being reverse engineered from a .smali file and the reverse engineering process appears imperfect. Of course any const names would be stripped away from the decompiled code leaving hardcoded integers (such as the 0 in paramArrayOfX509Certificate[0].checkValidity() ).

I'm not saying that the conclusion of the article is wrong, I'm merely trying to explain why the code blob above looks so weird.


0 is the first element in the array; so, I bet that wasn't a constant, but honestly what the programmer did.

In either case, we have the bytecode; and shouldn't what is written above compile to the same bytecode?


To demonstrate my point, try writing the following into C.java:

  public class C {
      public int foo() {
          return 1;
          int b = 1 + 2;
      }
  }

Then compile it (javac C.java). You will get the following:

  C.java:4: unreachable statement
          int b = 1 + 2;
          ^
  C.java:5: missing return statement
      }
      ^
  2 errors

Therefore this code is not compilable, which tells us that the initial code blob could never have run in the first place.

I'm not sure why the decompiler returns non-compiling Java code; I just know it does.


Some code obfuscators will introduce illegal programming constructs. Most VMs will tolerate these but the decompiled source cannot be recompiled without fixing these. Not saying that's what happened here, but it is an option.


So it's a very real possibility that an obfuscation technique was used here: one that basically throws random bits of code, with random extra params, into another method and calls it. Interesting. It would certainly make more sense.


That almost hurts to look at -- it looks like a botched attempt at a recursive method to iterate through the array.

I can imagine some poor developer, unable to figure out why it seems to always throw a StackOverflowException, putting that return statement in in frustration, and forgetting about it.


Ouch, that is seriously yucky code.


The issue with the code above is the return statement, most likely inserted during development with the good intention of removing it later, since the "bit of random junk" that follows would have validated the server's certificate.


But it doesn't. It's calling the exact same function. As your sibling comment remarked, it looks like someone got really frustrated trying to figure out why they were getting a stack-overflow error and gave up.


I'm not sure if things are different with the Android compiler, but javac will not compile code with an unreachable statement after a return like that.


Sounds like it might be easier to list the options that actually do the Right Thing. If you're using Python, for example, the correct way to make HTTP requests is to ignore the standard library's urllib and (shudder) urllib2, and use Requests instead:

http://docs.python-requests.org/en/latest/

It validates SSL certificates correctly by default. How about other languages?


Good to know. I always wondered how come none of those libraries fully validate SSL certificates. On one of my projects, we were hooking into the Windows WinHTTP libraries to be able to do this (and a couple of other things), but when porting to Mac we kind of had to accept that the standard libs just didn't care enough about this. It's been a while, so perhaps things have changed. Requests is a great example of this, I guess.


Well, it does the right thing by default if you have a sufficiently recent version. You still need to pass `verify=True` to make older versions fail rather than silently accept bad certs. For httplib2, the idiom is similar: `client = httplib2.Http(disable_ssl_certificate_validation=False)`


I notice that whenever I use "wget https://github.com/[...]" I always end up typing wget --no-check-certificate because the first try never works.

I suppose my web browser has an extended list of CAs that my OS X Lion does not know about.


My HTC Droid Incredible's browser also always complained about their certificate and popped up a dialog box I had to click through. But now that I've installed Cyanogen Mod, it hasn't been a problem, so I guess it's one of several things HTC broke.


So of all the possible futures we could have, ones where we use computers to give us crypto, good security, privacy, etc., we instead end up with Masamune Shirow's admitted guess in Ghost in the Shell, where people can't properly use their arms due to 5 different versions of the driver being installed, have 10 different viruses IN THEIR BRAINS, and are constantly getting hacked and having their bodies taken over.


they make this point in the paper, but still it surprises me - the level of testing for payment frameworks seems surprisingly minimal. it's pretty easy with openssl to roll your own certificates to test a bunch of different issues. you'd think that the people involved would have quite an incentive to test well.

i'm not saying that this would solve all the problems, or that you should develop critical financial software by having people that don't understand much writing tests. but tests are pretty much common culture now; you'd think people would have considered this. and the argument the paper makes is not that the programmers are clueless, but that they are confused by the API, so they should be able to think up some useful tests...

of course, integration testing with sockets is a bit more complicated than unit tests (perhaps something toolkit apis should support is a way to allow testing without sockets?), but it's not super-hard. [edit: hmm. although testing for unreliable dns is going to be more tricky.]


The title is a bit sensationalist - there was incorrect code and it made the copy/paste rounds. Presumably all incorrect code is dangerous to some degree but I'm certain there's a more fitting title for this story.

At any rate, here is a pull request for PHP which attempts to address the issue:

https://github.com/php/php-src/pull/221


I have only read the first two sections, but the prose in this paper is a breath of fresh air. It is clear and strong.


Slightly related, link to Peereboom's rant on the OpenSSL library (a bit dated): http://www.peereboom.us/assl/assl/html/openssl.html


I came across this issue when using node.js to make secure requests as a client and after setting up tests with bad certs found it silently worked anyway. To get it working you need to be at a certain version of node.js and make sure you set the options up carefully. Testing with a bad certificate is essential for this stuff. http://stackoverflow.com/questions/10142431/my-node-js-https...


So how soon until we start seeing developers fix these gaping holes? And, more importantly, how soon do we start seeing app-specific exploits that take advantage of this problem?


Probably as soon as certificate validation is more reliable.

I can see turning off validation for stuff where you're sending non-important data to a third party which may or may not be encrypted (asynchronous API pings where you say "something happened, ask us [securely] what it was") - but that's only OK if you're just as willing to send the info over an http connection as well.

If you turn off cert validation in your app to your own API, something is seriously wrong. Unfortunately, it often comes down to simply buying a more expensive certificate that's signed by a root CA with wider trust. Given spending $50 or CURLOPT_SSL_VERIFYPEER=false, a scary number of people will choose the latter.

These libraries should provide an easy way to specify an additional root CA when validating the chain of trust - that way you could even use a self-signed cert without disabling the chain of trust check, all you'd have to do is embed your self-signing public key.
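In curl's case, CURLOPT_CAINFO already points at a CA bundle file, so something like this works; a PHP sketch with a hypothetical bundle path (to trust the public CAs plus your own, concatenate your CA cert onto a copy of the stock bundle):

    // Keep full verification on, but trust a custom CA bundle.
    curl_setopt($ch, CURLOPT_CAINFO, '/etc/myapp/ca-bundle.pem'); // hypothetical path
    curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, true);
    curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 2);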

Probably more important, there should be clearer error messages when SSL chain verification fails. Cryptic "verify the CA cert is OK" messages don't help much; they should say "SSL CA [name] is not in the installed trusted certificates, add trust for it with CURLOPT_CAINFO, details at example.com/ca.html" or something along those lines.

It may cause issues with root CA revocation (DigiNotar anyone?), but it's still better than disabling the checks entirely.


If the SSL/TLS server is properly configured with all of the nodes in the certificate chain, you don't need to buy more expensive certs.

Now I will say that most common apps (Apache HTTPD + mod_ssl, for example) do not make this process as clear as it should be. Specifying the cert order wrong in the intermediate chain file will still break cert validation for some apps, because it's sent out exactly as the server admin configures it (and this fact is often lost on folks who maybe touch their certs once a year when they expire).


There's no standard as to which certificate issuers have their trusted root CA cert installed. If your clients don't have it on their machines (desktops, phones, servers, whatever) it doesn't matter what you paid.

My point was that the more widely-trusted issuers (meaning installed by default on the most OSes) can get away with charging more because they're more compatible.

There's nothing stopping you from setting up your own root CA and self-signing, maintaining a full chain of trust if you build the certs right (I've done it; once you have the process documented well it only takes a few minutes) but unless the clients trust the lowest link in the chain (not the case for self-signed, and some cheaper issuers) you'll still get validation errors.


>Unfortunately, it often comes down to simply buying a more expensive certificate that's signed by a root CA with wider trust. Given spending $50 or CURLOPT_SSL_VERIFYPEER=false, a scary number of people will choose the latter.

What is a trusted and reputable CA that offers cheap certificates? Is this even a possibility, or do all the CAs charge according to how reputable they are?


I haven't done extensive research so I can't help a whole lot. I use StartSSL for some personal stuff which is free and have heard good things about NameCheap, but I don't know how widely-installed either of their root CA certs are.

Any issuer with a widely installed root CA can basically charge whatever they want.


I got an email a few days ago from Amazon suggesting that I should upgrade my code if I was using one of their several payment code examples. I'm assuming this is in response to some responsible disclosure having happened.

Sent me to: https://payments.amazon.com/sdui/sdui/about?nodeId=201033780...


> So how soon until we start seeing developers fix these gaping holes?

Some, lynx for example, seem to have been fixed a long time ago.


And odds are the guys who wrote this paper don't have any clue that even if those writing the CLI tools/libraries/frameworks that use SSL had locked them completely down, developers and sysadmins would write scripts to agree-to-all, fake auth, etc. to get around security, because we have jobs that have to get done and security is not what we are all paid to do. Security is only critical when it fails. People claim to want security. They may even have an office of security. But even if that office of security is scanning all the apps, taking production apps down because they didn't throttle their probes, and maybe even looking at code, they cannot do the job of the developer.

It is destined to be flawed as long as insecurity is allowed. Only when every exploit is exploited continuously will people be vigilant.


Yes! Yes! Stupid researchers! Who has time for security? We've got mobile banking transactions to process!


Anyone have an example of good cert verification in Java? The concept at https://github.com/iSECPartners/ssl-conservatory is great, but it needs examples in more languages. Our case is pretty weird (some self-signed certs between peers, cert pinning of sorts in that we only accept equifax as a root signer, no default signing authorities accepted), but anyone see holes in the authenticate method of our trust manager at:

https://github.com/getlantern/lantern/blob/master/src/main/j...

? This code is intended for deployment in potentially dangerous regions for getting around government censors.

Thanks.


I noticed this the other day in Rails. ActiveResource::Connection in 3.2.8 is affected in that the default OpenSSL verification mode is "OpenSSL::SSL::VERIFY_NONE". A developer has to explicitly set it for SSL validation.

You can see it here: https://github.com/rails/rails/blob/3-2-stable/activeresourc...

I'm pointing it out as it was not mentioned in the paper.

Edit: It looks like it has been that way since SSL was first implemented in Connection.


From 2010: certificate verification is essential to TLS.

  require 'always_verify_ssl_certificates'
  AlwaysVerifySSLCertificates.ca_file = "/path/path/path/cacert.pem"
  http = Net::HTTP.new('some.ssl.site', 443)
  http.use_ssl = true
  req = Net::HTTP::Get.new('/')
  response = http.request(req)

http://www.rubyinside.com/how-to-cure-nethttps-risky-default...


Everyone who does any development should read this paper. It is not just for SSL specialists!


Are Rack-based middleware affected by these vulnerabilities (or did I lose the plot)?


"...bad implementations of SSL like OpenSSL..."

<falls off chair>



