Tarsnap: No heartbleed here (daemonology.net)
222 points by cperciva on April 9, 2014 | hide | past | favorite | 53 comments



Title's correct: there is no heartbleed in Tarsnap, because the only thing that was using OpenSSL was using an old, unbugged version.

That said, he goes on to claim that even if he had been using a buggy SSL stack, his stunnel setup "keeps the bug away from anything sensitive." But his prior blogpost on the stunnel setup[1] itself acknowledges some limits: "if someone compromises OpenSSL, they will still be ... able to steal the SSL certificate, to be sure, and also able to intercept other HTTPS connections". Meaning, I think, they'd have access to the net traffic in cleartext --- including such goodies as usernames and passwords, which might be shared with other sites.

(Colin might respond, "use a password manager". He very likely does. Most likely, though, a lot of his clients don't.)

So, if they (well, he) had been running a vulnerable OpenSSL, he might still have things to worry about.

[1] http://www.daemonology.net/blog/2009-09-28-securing-https.ht...


I don't think he'd dispute that he'd have more to worry about had he not been lucky enough to be on an unaffected version of OpenSSL. I believe his point was that the use of stunnel to terminate SSL connections mitigates some of the attack vectors that could've been used to recover customer information in the event of a compromise at the OpenSSL layer, and that the architecture of Tarsnap itself absolutely precludes recovery of customer backups in any event. And that these facts aren't an accident.

The important takeaway from this post is that it pays to employ layers of security when building software systems.


I was reacting mostly to the apparent exclusion of usernames, passwords, and session cookies (all exposed in net traffic) from the category of "anything sensitive".


Fair point. Perhaps I should have said that the stunnel/jail setup keeps OpenSSL bugs away from the more sensitive things.


If I understand correctly, he's decoupled the SSL connection handling from the HTTP server. That seems to have all kinds of advantages to me: if a vulnerability in your SSL library is found, you could quickly swap in SSL termination based on another library (e.g. GnuTLS or NSS), and if there were a vulnerability in stunnel, you could swap it out for stud, or even Apache or nginx in a jail (assuming you had any or all of these things ready to go). It should also make for more flexibility with load balancing. Brilliant engineering.
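For anyone unfamiliar with the setup being described, a minimal stunnel config illustrating the idea looks something like this (paths and ports are hypothetical, not Tarsnap's actual configuration):

```ini
; stunnel.conf -- terminate TLS in a chrooted, unprivileged process,
; then hand decrypted traffic to a plain-HTTP server on localhost.
cert   = /etc/stunnel/server.pem
chroot = /var/stunnel/jail
setuid = stunnel
setgid = stunnel

[https]
accept  = 443
connect = 127.0.0.1:8080
```

The point is that OpenSSL only ever runs inside the jailed stunnel process; the web server behind it never touches the SSL library, so swapping the terminator out for another one is just a config change.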


Yes. And assume there will be bugs.


"able to steal the SSL certificate, to be sure, and also able to intercept other HTTPS connections"

Couldn't a SSL-terminating process that's vulnerable to heartbleed also leak unencrypted traffic? The attacker wouldn't even need to be in a position to intercept other users' connections, which is much worse.

I don't know about stunnel specifically though, maybe it doesn't free memory containing unencrypted traffic.
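The over-read at the heart of the bug can be sketched in a few lines. This is a toy model for illustration, not OpenSSL's actual code:

```python
# Toy model of the Heartbleed over-read (CVE-2014-0160): the server echoes
# back `claimed_len` bytes starting at the request payload, trusting the
# attacker-supplied length field instead of the actual payload size.
def heartbeat_response(memory: bytes, payload_offset: int, claimed_len: int) -> bytes:
    # Buggy: no check that claimed_len matches the real payload length,
    # so whatever sits in adjacent memory (keys, plaintext, cookies) leaks.
    return memory[payload_offset:payload_offset + claimed_len]

# The 4-byte payload "PING" happens to sit next to sensitive data:
process_memory = b"PING" + b"SECRET-SESSION-COOKIE"

# Attacker sends a 4-byte payload but claims it is 25 bytes long:
leak = heartbeat_response(process_memory, 0, 25)
```

So yes: anything recently held in the process's memory, including decrypted traffic, is potentially exposed, not just key material.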


The worse problem is that once you get the private key, you can MITM the traffic or, if forward secrecy was not used, decrypt sniffed traffic -- though this limits the casual attacker to directed attacks.


For anyone else interested in the protocols that tarsnap uses - https://www.tarsnap.com/crypto.html


I read the blog post earlier and this one line really resonated with me: we don't assess the structure of bridges by asking "has it collapsed yet?"

Security is one of those notoriously hard fields to get right for precisely this reason: you don't know you're doing a bad job until it's too late.


> I read the blog post earlier and this one line really resonated with me: we don't assess the structure of bridges by asking "has it collapsed yet?"

Actually, there's a school of thought that this is exactly how bridges are assessed. Henry Petroski's To Engineer is Human argues that structural engineers never have successful structures. They only have the current absence of failure.

I reviewed it here: http://chester.id.au/2013/07/07/review-to-engineer-is-human-...


I like that. I think by that measure, though, OpenSSL is failing badly. It's like an old rusted-metal and rotten-wood bridge that should have a sign saying, "No vehicles over 2 tons," but instead it's carrying highway traffic.

I'm starting to get frustrated by the security experts basically chasing everyone away from cryptography. Am I the only one getting tired of the experts (many with misaligned incentives) telling us, "This stuff is really hard... Just trust us"?


The fact remains that crypto _is_ very hard. I don't see why telling people that equates to "chasing people away" from cryptography. It would be even more dangerous to not mention that fact when people try to roll their own encryption systems. It's just not something that can be safely done by a hobbyist/amateur, at least not for critical applications.


The fact remains that crypto _is_ very hard.

I don't think it is any harder, actually. It's just that the stakes are much higher.


You say it's no harder, but the risks are higher.

A lot of crypto is unintuitive.

And the goal-posts keep moving. You have to stay well-read and very objective.

You can say this about mainstream programming, but it's a bit of a stretch. There is plenty of mainstream programming using the equivalent of bubble sort, and nobody should care. You can get away with being a bad programmer.

Crypto is hard precisely because the risks are higher and you can't get away with being mediocre.

PS: not a tarsnap user, but love your work and your thoughtful posts :)


People using bad algorithms is exactly the sort of thing I was thinking of. People write horribly broken code in every context; but instead of a bug making software slower than it should be, when an "equally dumb" bug happens in crypto code it probably reveals your keys.


I'll grant you that. Crypto is just one aspect of the security field that doesn't seem very intuitive to me, but it's probably just me.


Based on how many security issues the internet has seen from developers that try to roll their own crypto: it's not just you.


Well, you do understand how to write provable programs. Most programmers do not.


In many other areas of CS (e.g., gaming, language development, DB design), the experts are simply more willing to discuss or enumerate the requirements of the problems they are solving. They make an effort to educate those who are interested in learning.

In security, you have a cabal of self-appointed experts who write inscrutable code and chastise you if you try to improve on it. What does that accomplish other than chasing people away?

These security experts make their livelihoods based on their credentials; they have every incentive to keep others out. At what point do we stop giving them the benefit of the doubt and start educating ourselves? Trust can only go so far.


Disclaimer: I'm an infosec student/enthusiast, so I may have a different perspective. Also, it's late; it's entirely possible that this is sleep deprivation talking.

I think it's more a case of the subject being so important that it's not really something that you can take shortcuts with. When it's something so critical to daily life as the internet is becoming, it's imperative that all code made for security purposes is carefully considered, and architected to avoid the tiniest bugs. Hobbyists or amateurs (myself included) are not necessarily qualified to judge what might constitute a critical bug, and that is one of the reasons the barrier to entry in the field is so much higher.


> it's imperative that all code made for security purposes is carefully considered, and architected to avoid the tiniest bugs.

I agree wholeheartedly, but all you have to do is take a look at these security libraries' code, processes, and recent bugs to realize that the experts are failing at this. Their math and theory may be flawless, but their code is shit.

I think cryptography could benefit greatly from an influx of software engineering. We should be using modern testing and code review practices to bring some measure of reliability and architectural clarity to cryptographic work. The attitude toward programmers who aren't experts (yet!) doesn't exactly encourage this.


That is very true. However, the number of people who are both good software engineers and good cryptographers is very low. Personally, I can't say that I've seen experts being hostile towards those willing to learn the concepts and best practices for cryptography, though. The hostility is usually reserved for those who try to roll their own crypto with ill considered design, no matter how nice their code is (Telegram, etc). But it's possible that I could be entirely out of touch with that.


I'm no security expert, so this is an honest question.

Why was Tarsnap running a version of OpenSSL that's at least two years old [1]? My understanding is that, while you might not want to upgrade immediately upon release (because there may be new bugs), you shouldn't put off upgrading forever either (because releases fix old bugs).

Is it because the website itself doesn't do much -- it's not the business; backup is -- so not much time is put into it?

[1] OpenSSL 1.0.1, the first version of OpenSSL with the bug, was released in March 2012: http://heartbleed.com/


OpenSSL frequently hosts vulnerabilities that don't affect all users of TLS. If you pay attention to the updates (as Colin surely does), you can filter down to the updates that matter.

It's a risky thing to do if you're not willing to own those judgement calls; I wouldn't recommend that most shops do that. Among other things, you need to eyeball the diffs.


Or he is running an older version of FreeBSD that contains an older version of OpenSSL in the base.


For what it's worth: he's the former FreeBSD security officer, so I wouldn't worry too much about which version of FreeBSD he's using.


[deleted]


"Argument by authority" is not a fatal fallacy. The key insight is appealing to an irrelevant authority. For example:

    Oprah says heartbleed is not a problem.
Maybe she's right, maybe she's not. But appealing to her authority doesn't help because she's not an authority on this topic.

    Colin Percival knows a lot about FreeBSD security.
This appeal to authority is much stronger, because Colin Percival is the former FreeBSD Security Officer, runs a security-conscious SaaS, provides security review and consulting services, publishes on security matters, has corrected major companies' security errors and has developed novel cryptographic algorithms.


Maybe the more precise way to say this is that an appeal to authority isn't always fallacious.

It's especially not fallacious when the actual debate is about the person being appealed to. :)


I'm going to nitpick...

>The key insight is appealing to an irrelevant authority.

This is true, but the authority is almost never relevant to forming a valid argument. It's just a shortcut. Take the following:

"X is not Y. I am a subject expert on X and Y."

It may be true that X is not Y. It may also be true that I know everything there is to know about X and Y. However, it's a completely invalid argument. There is no evidence provided about why X is not Y.

If someone questions why X is not Y, providing more evidence of authority does not contribute to the discourse. It is definitely a fatal fallacy if you are engaging in meaningful discourse.

If you are just looking for general knowledge on topics you are vaguely familiar with, receiving knowledge backed by statements of authority is okay. But that's okay because you aren't actually arguing, you are just seeking clarification and knowledge.


It isn't that much older... stable/9 was the latest until 10 came out within the last year. The release cycle is closer to, say, Debian's than to Fedora's.


FreeBSD 10 was just released, January 2014 if I remember correctly.


I've been hearing about it from some FreeBSDers for months before that, so I can never remember the actual release date. But I would believe you on January :).


The release was in January, but a lot of us were running Betas and Release Candidates for about three months prior to that.


JeffR sounded pretty busy with preparing those betas and RCs, iirc. ;)


The Tarsnap website is running on FreeBSD 9, which gets OpenSSL bug fixes backported.


Security fixes are usually backported for affected versions, and presumably Colin installed those updates. This depends on the distribution you choose and on whether you're running a release with guaranteed critical updates (e.g. Ubuntu LTS releases).

What you do miss out on is new protocol support, which for TLS can be a huge problem when secure and compatible crypto options run out.

(Sadly, people will usually err on the side of compatibility, which in the past has boiled down to the terrible RC4.)


Can't answer for Tarsnap specifically, but older (unaffected) OpenSSL versions are shipped by various distributions that are still supported, for example Ubuntu 10.04 LTS. There's nothing wrong with sticking to LTS releases IMO.


SUSE Linux Enterprise, for example, is still on 0.9.8; moving a distro from 0.9.8 to 1.0 is a non-trivial amount of work.

Some of these distros are LTS, so bug fixes are usually backported. The latest 0.9.8 release is 0.9.8y, from February 2013.


This is common for the "enterprise" distributions. They adopt new packages very slowly, preferring to backport security patches -- much to the aggravation of developers who want to be able to use the latest and greatest of something, only to find that the RHEL production environment doesn't offer it.


Much to the aggravation of developers who want to be able to use the latest and greatest

Upgrading OpenSSL introduces other problems though -- the OpenSSL developers don't seem to understand the concept of a "stable branch" or "binary compatibility", so importing a new version of OpenSSL can mean that everything which links against it has to be recompiled. This is one of the reasons why FreeBSD doesn't get major OpenSSL updates on stable branches -- our policy is that if a binary worked on X.0, you should be able to run it on all future X.* releases.


Also, it should be mentioned: OS X Mavericks ships with 0.9.8y.


There are multiple branches that are developed simultaneously and have bugfixes backported. My CentOS box is on 0.9.8.


I think the juice is in the comments[1].

[1] http://www.daemonology.net/blog/2014-04-09-tarsnap-no-heartb...


And, just to go back to patio11's article[1] "replace OpenSSL for free" is something that Tarsnap could afford to do as a company, and might make business sense, if it were substantially more profitable. </nag>

[1] https://news.ycombinator.com/item?id=7523953


The older posting that is mentioned, which is called Securing an HTTPS Server is a good read: http://www.daemonology.net/blog/2009-09-28-securing-https.ht...


This is a plus to distributing your own app...you can just bundle the public key(s) for your servers with the app and then there's never any reason to verify through CAs or any such nonsense.

Although in Tarsnap's case, an SSL leak wouldn't be terrible because everything's encrypted before even going over the wire, at least to my understanding.


This is a plus to distributing your own app...you can just bundle the public key(s) for your servers with the app

Yes, this is one of my standard recommendations: If you can, distribute your own damn keys.

in Tarsnap's case, an SSL leak wouldn't be terrible because everything's encrypted before even going over the wire

Correct, and requests are authenticated using signatures, so even if Tarsnap was using SSL, there would be no tokens disclosed which could be stolen. Breaking Tarsnap's client-server encryption/authentication would allow an attacker to identify the names of blocks of data, though -- so he could see e.g., that someone is deleting today the data which was uploaded yesterday.
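The "distribute your own keys" recommendation is essentially key pinning: the client ships with a digest of the server's public key and rejects anything else, cutting CAs out entirely. A minimal sketch (the pinned digest here is a dummy value computed from stand-in bytes, not a real key):

```python
import hashlib

# Hypothetical pinned value: the SHA-256 digest of the server's public key,
# shipped inside the client application. For illustration we pin the digest
# of the stand-in bytes b"test" rather than a real DER-encoded key.
PINNED_SHA256 = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"

def key_matches_pin(der_public_key: bytes, pinned_hex: str = PINNED_SHA256) -> bool:
    """Return True iff the presented key's SHA-256 digest equals the pinned digest."""
    return hashlib.sha256(der_public_key).hexdigest() == pinned_hex
```

A client would extract the peer's public key during the handshake, run this check, and abort the connection on a mismatch -- no trust store consulted.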


I prefer SSL termination on the client-side (my computer, my data only).

I like to have the ability to view my SSL traffic in plain text.

Should the user be able to see what her computer is sending out? I think she should. And encrypted traffic should not be some special exception.

Installing someone else's "MITM" software to decrypt SSL seems unnecessary.

It is much simpler to generate and install your own "fake" certificates that you control.

stunnel is one option.

There are others. socat, Pound, etc.

It should be the user who has the final decision over which certificates to trust. Users are the real "Certificate Authorities". They should have full control over encryption and decryption should they want to exercise it.

Is it wise to irrevocably delegate the decision to trust/not trust to website owners and browser authors? Perhaps those promoting solutions like "TACK" should give this more thought.


> It should be the user who has the final decision over which certificates to trust. Users are the real "Certificate Authorities". They should have full control over encryption and decryption should they want to exercise it.

And they do.

I'm not sure what you're getting at here. You and I and my mother all have the ability to edit the root CA certificates on our computers and add our own, if we wish.


Indeed, that has been my solution.

But I'm seeing more and more authentication information being incorporated ("baked in", pre-installed, whatever) into browsers, whether it is lists of "valid" TLD's, certificates for "approved" CA's, or chosen individual website certificates.

Personally, I think this information should be cleanly separated from the software that may use it rather than pre-installed and "hidden from the user".


Does Heartbleed affect OpenSSH? I seem to recall it uses OpenSSL but I haven't seen it mentioned.


No. OpenSSH links against OpenSSL for cryptographic primitives, but SSH is a completely different protocol from TLS/SSL and doesn't use the TLS heartbeat extension:

http://tools.ietf.org/html/rfc4253



