Is there any coordination between OpenBSD and the Linux Foundation? For example, did the Linux Foundation consider adopting OpenBSD's fork rather than auditing and fixing the original?
LibreSSL is going to track OpenBSD versions, so now that OpenBSD 5.7 is done, the LibreSSL 2.1.x series is done (except to receive patches). At some point a 2.2 release will be made, corresponding to new developments.
Thanks. Did OpenBSD get the funding they sought for LibreSSL? How mature is it at this point? I see it's included (?) with OpenBSD, which is a good sign.
> libressl's binary is already half the size of openssl
What does binary size have to do with quality?
OpenSSL has been around for a long time, and still is the standard for most deployments. It will take a long time before LibreSSL has been proven enough to become the standard for anything outside the BSD community.
For as nasty as the OpenSSL code appears to be, it sure did work (and worked well) for a long time.
Remember, OpenSSL really had only one developer and received around $2,000 a year in donations prior to Heartbleed[1] (which is grossly pathetic for such a critical piece of software).
If those numbers had been tenfold or more, perhaps the bugs that led to Heartbleed might have been found and fixed long before they became an issue.
Thankfully the Linux Foundation and the Core Infrastructure Initiative are aiming to remedy this.
I think the grandparent meant that binary size is related to code size. Since the number of bugs tends to scale with the number of lines of code, you could infer that a bigger binary means more lines of code, which means more bugs. In my experience, this tends to be true. Of course, this makes the big assumption that the binaries were compiled with similar compilers using similar optimizations and are both stripped.
It doesn't indicate either way. If the code is in there, if it works for those platforms, and has no adverse effects on other portions of the codebase, then it doesn't matter (those code paths are never executed unless you run on those platforms).
You can't guarantee that code path never gets executed. For a good historical example, Windows NT4 also came with inbuilt OS/2 and POSIX subsystems that were never used by users, but typically used by people on the internet to crack boxes.
While I haven't looked at the OS/2 support in OpenSSL specifically, surely it is not a question of code that is never executed, but of code that is never built on a modern system. That makes it more a question of code maintainability and readability than of security problems with seldom-executed code.
Much of the code you'd think was never built on a modern system was in fact being built, either because they enabled the workaround everywhere on purpose, or because the preprocessor test was borked and the modern code was never enabled.
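To make that concrete, here's a hypothetical sketch (invented macro names, nothing taken from the real OpenSSL tree) of how a borked preprocessor guard ends up shipping the "legacy" path on every platform:

    /* Hypothetical sketch: the build is supposed to define HAVE_MODERN_READ,
     * but the guard checks a misspelled macro, so the #else branch -- the
     * old workaround -- is what actually gets compiled everywhere. */
    #include <stdio.h>

    static const char *read_impl(void)
    {
    #ifdef HAVE_MODERN_READ_TYPO           /* never defined anywhere */
        return "modern path";
    #else
        return "legacy OS/2-era workaround";   /* built on every platform */
    #endif
    }

    int main(void)
    {
        printf("compiled-in path: %s\n", read_impl());
        return 0;
    }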
That's likely true in the case of OS/2, but DGUX workarounds might well be run on Unix boxes.
It's all dead code that any of us here would remove from our own software projects - and your point about the OpenSSL team's lack of resources is quite relevant to that.
I have to say, LibreSSL's snarky PR stance when it was first launched really turned me off. This stuff is hard and the code is old; being an asshole about the code quality is really inappropriate. Has their behavior matured since then?
They did really try to seize the moment and imho acted a bit disingenuously towards OpenSSL. They even played up the point that OpenSSL's coding style wasn't "modern" or adequate and therefore made the code near impossible for people to work on[1]. Since when does having curly brackets on different lines make something impossible to read?
They also gutted more than 90,000 lines in the first month (something it would be naive to claim had absolutely no unintended side effects), and claimed the OpenSSL project would not accept a single change (sort of alluding to the OpenSSL project having some delusional desire to stay "broken"), thus mandating the fork[1][2].
OpenSSL was largely written by cryptographers. LibreSSL is being hacked on by OpenBSD developers.
OpenSSL's code may not be pretty, but I sure think it would have been a better use of time and efforts to have the OpenBSD developers work alongside the OpenSSL cryptographers.
They did, but they were also frustrated beyond sanity at what OpenSSL was doing, like adding a wrapper around malloc & free, and that was new code, not old legacy cruft that could be excused. It also taught us all an important lesson about rage commits when Theo screwed up one of them :-P
You are looking at the wrong repository -- that is a portable version of libressl (I agree it's non-obvious how to find the true repository, as it is stored in CVS).
OpenBSD is about to tag 5.7 for release. During the release window, commits normally drop to only minor changes. Once the tree is tagged, all the patches sitting in developers' private trees can go in.
I wish they would turn the fuzzers and similar tests they build into an open source test suite that anyone can run, now and into the future. That would be a great public contribution from this work.
Well, that's pretty awesome :)
I'm hoping we don't have to patch everything tomorrow but.. ;)
On a side note - a lot of libs are starting to implement their own crypto because of the "OpenSSL code sux, it's terrible and buggy!" impression. This scares me. These won't get much exposure and will most likely be full of implementation bugs that nobody will ever review - except the bad guys and the NSA.
OpenSSL isn't the be-all-and-end-all of crypto. Simple crypto can be done simply, as it is in e.g. Tarsnap. If all you need is to pair your clients with your servers, a little bit of glue on top of NaCl is a whole lot easier to trust than the whole TLS ecosystem.
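By way of illustration, here's a minimal sketch of that kind of glue using libsodium's NaCl-style crypto_box API (hypothetical code; it assumes both sides have already exchanged public keys out of band, which is exactly the pre-existing pairing I mean):

    /* Sketch only: in reality each side generates its keypair once and the
     * public keys are exchanged ahead of time (e.g. baked into the client). */
    #include <sodium.h>
    #include <stdio.h>

    int main(void)
    {
        if (sodium_init() < 0) return 1;

        unsigned char server_pk[crypto_box_PUBLICKEYBYTES], server_sk[crypto_box_SECRETKEYBYTES];
        unsigned char client_pk[crypto_box_PUBLICKEYBYTES], client_sk[crypto_box_SECRETKEYBYTES];
        crypto_box_keypair(server_pk, server_sk);
        crypto_box_keypair(client_pk, client_sk);

        const unsigned char msg[] = "hello over a pre-paired channel";
        unsigned char nonce[crypto_box_NONCEBYTES];
        unsigned char boxed[crypto_box_MACBYTES + sizeof msg];
        randombytes_buf(nonce, sizeof nonce);

        /* Client -> server: encrypt and authenticate in one call. */
        crypto_box_easy(boxed, msg, sizeof msg, nonce, server_pk, client_sk);

        /* Server side: open the box; fails cleanly if it was tampered with. */
        unsigned char opened[sizeof msg];
        if (crypto_box_open_easy(opened, boxed, sizeof boxed, nonce,
                                 client_pk, server_sk) != 0) {
            fprintf(stderr, "forged or corrupted message\n");
            return 1;
        }
        printf("%s\n", (const char *)opened);
        return 0;
    }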
What OpenSSL is, is a magic wand that can set up encrypted tunnels between peers that don't have pre-existing mutual key pairing, by making use of well-known X.509 certificates and the CA infrastructure. Don't try to make that kind of wand yourself; 99.999% of such wands are really misidentified Wands of +5 Foot Shooting.
Given that we know OpenSSL is a Wand of +3 Foot Shooting[1], seeing what kind of modifier you can roll doesn't seem so terrible. :)
[1] I rolled my own CA bundle this week; turns out I need to include four 1024-bit certs because the endpoints I hit chain to them for legacy clients, and OpenSSL didn't know to stop when it reached a cert it already trusted until 1.0.2, which was released in January. Given what happened the last time I upgraded OpenSSL versions, I'm not inclined to upgrade right away. Seriously, the week after we upgraded to OpenSSL 1.0.1, Heartbleed came out; serves us right for wanting to support TLS 1.1 and 1.2 and PFS.
> On a side note - a lot of libs are starting to implement their own crypto because of the "OpenSSL code sux, it's terrible and buggy!" impression.
I don't think they'd do that - it'd be far easier to grab libressl or boringssl. If you're aware that openssl is buggy, you're probably aware that writing your own crypto when you're not a cryptographer is worse.
> OSF typically receives about US$2000 a year in outright donations and sells commercial software support contracts and does both hourly rate and fixed price “work-for-hire” consulting as shown on the OSF web site. The media have noted that in the five years since it was created OSF has never taken in over $1 million in gross revenues annually.
Speaking of OpenSSL, what state are the competing libraries in at the moment? I'd love a version of OpenSSL without all the potentially insecure legacy code, given all the problems it's had.
Are there decent implementations of OpenSSL in more secure languages like Rust?
F# has a proven implementation, but I doubt that's what you're after. OCaml is also getting one for Mirage, but I don't know its status, and it won't be proven correct.
Do people really believe in more secure languages? Are they the same people that think switches make networks secure? Switches don't and neither does a given language. I recall a CTO that would not allow C++ development because he thought the language was insecure. Java was the only language allowed. Even college courses are still teaching that security is one of the benefits of the virtual machine. We only have to look at all the patches for java to see that it hasn't been secure. Then we look at every other software that has been patched to see that nothing is secure.
> We only have to look at all the patches for java to see that it hasn't been secure.
All those big security issues aren't in the Java language, they are in the JVM running untrusted Java byte code. Not to say that situation isn't bad, but you can't compare it to C++ because nobody ever thought running untrusted C++ code without some other sandboxing was a good idea.
That aside, memory safety is great for security. Of course there are 1000 other things that are important too, so I'd trust a C program written by a security expert much more than the same program written by someone who thinks his program is secure because he used Java. But I'd feel even better if the security expert used a memory-safe language, because I am certain that all C programs above a certain size are vulnerable to memory attacks.
> Not to say that situation isn't bad, but you can't compare it to C++ because nobody ever thought running untrusted C++ code without some other sandboxing was a good idea.
This is actually kind of a point for the other side. You can sandbox code regardless of what language it's written in. Maybe what we need is not better languages but better sandboxes. Even when code is "trusted", if the developer knows it doesn't need to e.g. write to the filesystem or bind any sockets, then it should never do those things, and if it tries, the OS should deny access if not kill it immediately.
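Something crude in that spirit has existed on Linux for years: the original strict-mode seccomp. A rough sketch of the "deny or kill" behaviour (Linux-specific, and the whitelist of allowed syscalls is fixed by the kernel, so it's a blunt instrument):

    /* After the prctl() call, the kernel kills the process (SIGKILL) the
     * moment it makes any syscall other than read, write, _exit or
     * sigreturn -- so "open a file" or "bind a socket" simply cannot
     * happen, trusted code or not. */
    #include <unistd.h>
    #include <fcntl.h>
    #include <sys/prctl.h>
    #include <linux/seccomp.h>

    int main(void)
    {
        if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT) != 0)
            return 1;

        /* Still allowed: read/write on descriptors we already hold. */
        static const char ok[] = "still fine: read/write on open fds\n";
        write(1, ok, sizeof ok - 1);

        /* Not allowed: this attempt to open a file gets the process
         * killed by the kernel before the call ever returns. */
        open("/etc/passwd", O_RDONLY);

        return 0;   /* never reached */
    }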
But sandboxing does nothing to protect information if the information resides in the sandbox (sandboxing wouldn't have stopped heartbleed).
Rust and friends aren't going to make all security issues go away, just as sandboxing would not. There is no one true silver bullet in security, at least not yet.
> This is actually kind of a point for the other side.
I wanted to move the goalpost from "Java is insecure" to "the Java sandbox is insecure". I completely agree with the second statement, so I don't think I made a point for any other side.
You made the point that I was trying to make: implementations are not secure. A programming language can follow a philosophy but implementations never quite line up with the theory. We only use implementations of the theory and experience shows that implementations all have vulnerabilities.
I'm sorry if I misrepresented your post, but I feel you do the same to mine. I didn't say the JVM is insecure - I said the sandboxing part of the JVM is insecure and C++ doesn't have anything comparable.
I don't think security is produced by picking one language or another, but I do believe that it's harder to write secure code in a language like C than a language like Java or Rust. There are simply way, way more ways to shoot yourself in the foot.
> I don't think security is produced by picking one language or another, but I do believe that it's harder to write secure code in a language like C than a language like Java or Rust. There are simply way, way more ways to shoot yourself in the foot.
The trouble is that everything is a trade off. It's very hard to get a buffer overrun in Java but that doesn't make Java a good language. It tries so hard to keep you from hanging yourself that it won't let you have any rope, so in the instances when you actually need rope you're forced to create your own and hang yourself with that.
For example, you're presented with garbage collection and then encouraged to ignore object lifetime. There are no destructors to clean up when an object goes out of scope. But when it does, you still have to clean up open files, write records to the database, notify network peers, etc., which leaves you managing it manually and out of order, leading to bugs and race conditions.
In other words, C and C++ encourage you to write simple dangerous bugs while Java encourages you to write complicated dangerous bugs.
That isn't to say that some languages don't have advantages over others, but rather that the differences aren't scalar. And code quality is far more important than the choice of language. BIND would still be less secure than djbdns even if it were written in Java.
Not that this really proves anything one way or the other, but remember that in regard to the Java SSL implementation shipping with the JDK, it was very recently found that:
"...the JSSE implementation of TLS has been providing virtually no security guarantee (no authentication, no integrity, no confidentiality) for the past several years."
I don't understand this argument. For example, if I use a language that doesn't allow buffer overflows to happen, I've eliminated an entire class of security bugs being caused by programmer error. Why would you not want to use such a language? Performance and existing libraries will factor in to this obviously but I don't understand why you wouldn't consider security built into the language as a benefit.
Yes, security issues are found in Java and every other language, but when these are patched all programs that use that language are patched against the issue. The attack surface is much smaller.
> Yes, security issues are found in Java and every other language, but when these are patched all programs that use that language are patched against the issue. The attack surface is much smaller.
All patches work like that; when there is a bug in libssl and OpenSSL patches it then all the programs using libssl are patched. The difference with Java is that when a C library has a bug only programs using that library are exposed but when Java has a bug all Java programs are exposed. Moreover, Java itself is huge. It's an enormous attack surface. Your argument would hold more weight if the "much smaller" attack surface actually produced a scarcity of vulnerabilities.
> For example, if I use a language that doesn't allow buffer overflows to happen, I've eliminated an entire class of security bugs being caused by programmer error.
There are several assumptions behind "if I use a language that doesn't allow buffer overflows to happen" which you aren't taking into account. For instance, are you entirely sure that the implementation of that language's compiler will not allow buffer overflows to happen? We have a good example of a possible failure of that model in Heartbleed: when it came up, a bunch of people in the OpenBSD community raised their eyebrows, thinking hmm, that shouldn't happen for us, we have mitigation techniques for that. Turns out -- for performance reasons -- OpenSSL was implementing its own wrappers over native malloc() and free(), doing some caching of its own. This, in turn, rendered OpenBSD's own prevention mechanisms (e.g. overwriting malloc()-ed areas before using them) useless. The language specification may not allow such behaviour, but that doesn't mean the implementation won't.
You're also underestimating a programmer's ability to shoot himself in the foot. Since I already mentioned OpenBSD and Heartbleed, here's a good example of a Heartbleed-like bug in Rust: http://www.tedunangst.com/flak/post/heartbleed-in-rust . The sad truth is that most vulnerabilities like this one don't stem from accidental mistakes that languages could have prevented; they stem from a fundamental misunderstanding of the mode of operation of what are otherwise safe constructs in their respective languages.
Granted, this isn't a buffer overflow, which, in a language that doesn't allow arbitrary writes, would be an incorrect construct and would barf at runtime, if not at compile time; but then my remark about bugs above still stands (and I'm not talking out of my ass, I've seen buggy code produced by an Ada compiler allowing this to happen), buffer overflows can be increasingly well mitigated with ASLR, and the increased complexity in the language runtime is, in and of itself, an increased attack surface.
Edit: just to be clear, I do think writing software in a language like Go or Rust would do away with the most trivial security issues (like blatant buffer overflows) -- and that is, in itself, a gain. However, those are also the kind of security issues that are typically resolved within months of the first release. Most of the crap that shows up five, ten, fifteen years after the first release is in perfectly innocent-looking code, which the compiler could infer to be a mistake only if it "knew" what the programmer actually wanted to achieve.
My point is simply that every programmer will make mistakes when coding, so I want the most automated assistance possible to point out those mistakes. If a programmer has a pressing need and the persistence to work around those checks, that's fine; at least the surface area for those mistakes is then limited to a smaller amount of code.
From comments I read about that when it was written, it's not clear to me that the author actually demonstrated the same behaviour of Heartbleed. I'm not the person to be the judge of that, but for what it's worth here is the top comment from /r/rust on the topic. Then you can make up your own mind about that.
> You should note that Rust does not allow uninitialized values by design and thus it does prevent heartbleed from happening. But indeed no programming language will ever prevent logic bugs from happening.
Under OpenBSD, those values would not have been uninitialized, were it not for OpenSSL's silly malloc wrapper -- a contraption of the sort that, if they really wanted, they could probably implement on top of Rust as well. What is arguably a logic mistake compromised the protection of a runtime that, just like Rust, claimed that it would not allow uninitialized values, "by design".
Of course, idiomatic Rust code would not fall into that trap -- but then arguably neither would idiomatic C code. It's true that Rust also enforces some of the traits of its idioms (unlike C), but as soon as -- like the OpenSSL developers did in C, or like Unangst did in that trivial example -- you start making up your own, there's only so much the compiler can do.
At the end of the day, the only thing that is 100% effective is writing correct code. Better languages help, but it's naive to hope they'll put an end to bugs like these when they haven't put an end to many other trivial bugs that we have kept on making since the days of EDSAC and Z3.
> Under OpenBSD, those values would not have been uninitialized, were it not for OpenSSL's silly malloc wrapper -- a contraption of the sort that, if they really wanted, they could probably implement on top of Rust as well. What is arguably a logic mistake compromised the protection of a runtime that, just like Rust, claimed that it would not allow uninitialized values, "by design".
I really disagree. Rust does not allow uninitialized values by design - end of story. If a piece of Rust code lets uninitialized values bleed through, then it is broken. The semantics of Rust demand this.
(OpenSSL, on the other hand, only broke/overrode OpenBSD's malloc - they didn't break C.)
It is news to no one that you can break - break - Rust's semantics if you use anything that demands `unsafe`. That's why anyone who uses `unsafe` and intends to wrap that `unsafe` in a safe interface has to be very careful.
Complaining about Rust being unsafe - in the specific sense that the Rust devs use - by using the `unsafe` construct, is like complaining that Haskell is impure because you can use `unsafePerformIO` to `launchMissiles` from a non-IO context.
> Of course, idiomatic Rust code would not fall into that trap -- but then arguably neither would idiomatic C code.
It's not even a question of being idiomatic. If someone codes in safe (non-`unsafe`) Rust, then they should not fall into the trap that you describe. If they do, then someone who implemented something in an `unsafe` block messed up and broke Rust's semantics.
What if that same thing happened in C? Well, then it's just another bug.
---
I'd bet you'd be willing to take it to its next step, even if we assume that a language is 100% safe from X no matter what the programmer does - "what if the compiler implementation is broken?". And down the rabbit hole we go.
> I really disagree. Rust does not allow uninitialized values by design - end of story. If a piece of Rust code lets uninitialized values bleed through, then it is broken. The semantics of Rust demand this. (OpenSSL, on the other hand, only broke/overrode OpenBSD's malloc - they didn't break C.)
I'm not familiar enough with Rust (mostly on account of being more partial to Go...), so I will gladly stand corrected if I'm missing anything here.
If OpenSSL did in Rust the same thing they did in C -- implement their own custom allocator over a pre-allocated memory region -- would anything in Rust prevent the same sequence of events? That is:
1. Program receives a packet and wants 100 bytes of memory for it.
2. It asks custom_allocator to give it a 100 byte chunk. custom_allocator gives it a fresh 100 byte chunk, which is correctly initialized because this is Rust.
3. Program is done with that chunk...
4. ...but custom_allocator is not. It marks the 100 byte chunk as free for it to use again, but continues to retain ownership and does not clear its contents.
5. Program receives a packet that claims it has 100 bytes of payload, so it asks custom_allocator to give it a chunk of 100 bytes. custom_allocator gives it the same chunk as before, without asking the Rust runtime for another (initialized!) chunk. Program is free to roam around those 100 bytes, too.
I.e. the semantics of Rust do not allow for data to be uninitialized, but custom_allocator sidesteps that.
It doesn't have custom allocator support as in "you can't have one function allocate memory and pass it to another function to use", or as in "you can't replace the runtime's own malloc"? OpenSSL was doing the former, not the latter.
(Edit: I'm really, really curious, not necessarily trying to prove a point. I deal with low-level code in safety-critical (think medical) stuff every day, and only lack of time keeps me procrastinating on that week when I'm finally going to learn Rust)
Having said that, I hope jemalloc gets linked externally soon so my code doesn't have to pay the memory penalty in each of my memory-constrained threads.
You can't say "this vector uses this allocator and this vector uses another one." If you throw away the standard library, you can implement malloc yourself, but then, you're building all of your own stuff on top of it, so you'd be in control of whatever in that case.
(We eventually plan on supporting this case, just haven't gotten there yet.)
Yes, my point was that OpenSSL did not throw away the standard library! openssl_{malloc|free} were thin wrappers over the native malloc() and free(), except that they tried to be clever and not natively free() memory areas, so that they could be reused by openssl_malloc() without calling malloc() again. I.e. sometimes openssl_free(buf) would not call free(buf), but would leave buf untouched and put it in a queue -- and openssl_malloc() would take it out of there and give it to callers.
So basically openssl_malloc() wasn't always malloc()-ing -- it would sometimes return a previously malloc()-ed area that was never (actually) freed.
This rendered OpenBSD's hardened malloc() useless: you can configure OpenBSD to always initialize malloc-ed areas. If OpenSSL had used malloc() instead of openssl_malloc(), even with the wrong (large and unchecked) size, the buffer would not have contained previous values, as it would have already been initialized to 0x0D. Instead, they'd return previously malloc()-ed -- and never actually free()-d -- buffers that contained the old data. Since malloc() was not called for them, there was never a chance to re-initialize their contents.
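A hypothetical sketch of that kind of freelist wrapper (names invented, and much simpler than the real thing) shows why the hardened malloc() never gets a chance to scrub anything:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* "free" just pushes the buffer onto a list, without clearing it or
     * giving it back to the system allocator; the next "malloc" hands the
     * same bytes straight back. */
    struct cached { struct cached *next; size_t size; };
    static struct cached *freelist;

    static void *wrapper_malloc(size_t n)
    {
        if (freelist && freelist->size >= n) {      /* reuse, no re-init */
            struct cached *c = freelist;
            freelist = c->next;
            return c + 1;
        }
        struct cached *c = malloc(sizeof *c + n);   /* real malloc() only on a miss */
        if (c == NULL) return NULL;
        c->size = n;
        return c + 1;
    }

    static void wrapper_free(void *p)               /* never calls free() */
    {
        struct cached *c = (struct cached *)p - 1;
        c->next = freelist;
        freelist = c;
    }

    int main(void)
    {
        char *a = wrapper_malloc(64);
        strcpy(a, "secret key material");
        wrapper_free(a);                /* the bytes survive the "free" */

        char *b = wrapper_malloc(64);   /* same chunk comes back... */
        printf("leaked: %s\n", b);      /* ...old contents and all */
        return 0;
    }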
This can trivially (if just as dumbly!) be implemented in Go. It's equally pointless (the reasons they did that were 100% historical), but possible. And -- assuming they'd have done away with the whole openssl_dothis() and openssl_dothat() crap -- heartbleed would have been trivially prevented in C by just sanely implementing malloc.
> (We eventually plan on supporting this case, just haven't gotten there yet.)
I'm really (and not maliciously!) curious about how this will be done :). You guys are doing some amazing work with Rust! Good luck!
> Please stop perpetuating the myth that security is produced by a programming language.
Security happens by taking care of what you're doing; if a language can eliminate a whole class of bugs then you might as well use it. That's why people keep arguing that some languages can eliminate some kind of bugs, but that absolutely doesn't make programs implemented in these languages bug-free.
Said differently: more secure (relatively) doesn't mean secure (in absolute).
> We only have to look at all the patches for java to see that it hasn't been secure.
We only have to look at all the patches for java to see how much it is analyzed; it doesn't mean java is relatively more or less secure than any other language.
I've seen no patches for this nim interpreter for brainfuck [0], does that mean it's more secure than java? Absolutely not.
You can draw some parallel with crypto schemes: anybody can come up with some cipher, nobody will analyze it unless there is something to gain (that includes fun). When you've reached the state where you're under scrutiny of every crypto analyst and their student, and potential vulnerabilities are found, does that make it a weak scheme? We don't know. Only a real analysis of the vulnerabilities can tell us.
Is 'security' really something that requires faith or belief? It seems that "more secure" (not secure in an absolute sense) can be made tangible, and I think that programming languages can give you "more security", in the sense that they close off certain possibilities or make them much harder to exploit/mess up.
Not that I know much about security, but all you're doing right now is fending off the claim of "more secure" by stating that Java is not secure in an absolute sense - no one has claimed absolute security, only relatively more.
I have been using Hiawatha webserver which uses PolarSSL, and have avoided almost all of the recent bugs (on the servers that weren't running other things). I really like PolarSSL, but they just got bought up by ARM, so I'm a bit scared for the project.
Regarding SSL, I just generally consider it borked and try not to trust it if possible. SSH/VPN are just about the only thing I do trust (and even then with trepidation).
Once again, I think the many eyes theory only works when the code is sufficiently simple/terse. I bet if you did a study on lines of code in proportion to security vulnerabilities you would find a correlation (obviously not necessarily a causation).
PolarSSL has had its own fair share of nasties. Only a few months after Heartbleed they had a remote code execution in the ASN.1 parser. They fixed it quickly, however, and keep a current CVE list on their home page. It just goes to show that all the popular SSL options are more or less bad (and the same goes for IPsec).