Alpine Edge has switched to libressl (alpinelinux.org)
236 points by g0xA52A2A on Oct 12, 2016 | 81 comments



I'm currently evaluating LibreSSL for use in data protection software I licensed to a large company.

The optional libtls API bundled with LibreSSL is a really simple wrapper API that is secure by default. It was also a breeze to build on Windows because they use CMake (just download the released bundle rather than building from git, to avoid problems). A couple of the optional libtls functions don't work on Windows (tls_load_file), but 100% of the OpenSSL-1.0.1+ API functions I've tried so far worked fine.
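
To give a sense of how small that surface is, here's a minimal libtls client sketch (my own illustration, error handling abbreviated; build with something like cc client.c -ltls):

    #include <stdio.h>
    #include <string.h>
    #include <tls.h>

    int main(void)
    {
        struct tls_config *cfg;
        struct tls *ctx;
        char buf[512];
        ssize_t n;

        /* Defaults are already secure: modern protocols and ciphers
           only, certificate verification on. */
        if (tls_init() == -1 || (cfg = tls_config_new()) == NULL ||
            (ctx = tls_client()) == NULL)
            return 1;

        if (tls_configure(ctx, cfg) == -1 ||
            tls_connect(ctx, "www.example.com", "443") == -1 ||
            tls_handshake(ctx) == -1) {
            fprintf(stderr, "tls: %s\n", tls_error(ctx));
            return 1;
        }

        const char *req = "GET / HTTP/1.0\r\nHost: www.example.com\r\n\r\n";
        tls_write(ctx, req, strlen(req));
        while ((n = tls_read(ctx, buf, sizeof(buf))) > 0)
            fwrite(buf, 1, (size_t)n, stdout);

        tls_close(ctx);
        tls_free(ctx);
        tls_config_free(cfg);
        return 0;
    }

No method tables, BIOs or option flags to get wrong.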

For me, the biggest downside is LibreSSL doesn't support X25519 yet, while BoringSSL and OpenSSL both support it. And BoringSSL is starting to get easier to use with other software like nginx without messy patches.

Hopefully, X25519 will be added as a beta feature during LibreSSL 2.5.x and released as stable in 2.6.

If you have time, take a look at https://github.com/libressl-portable/openbsd

And email patches to: tech@openbsd.org

Edited: tls_load_file instead of tls_read


> LibreSSL doesn't support X25519 yet

ouch, that's pretty core. (We use it for storage crypto in the Userify on-prem [ssh key/sudo management] servers, although the managed nodes themselves just use 'regular' TLS/https for communication.) That means we can't switch to Alpine anytime soon, which I was contemplating for our AWS Marketplace instances in order to move to a distro with a smaller footprint. (X25519 is also the default between Chrome and web servers: https://www.chromestatus.com/feature/5682529109540864 )

Any idea on when X25519 will be added?


I too would definitely be interested if there's any sort of milestone planned for X25519. There are a couple of open enhancement requests on it in both openbsd and portable [1,2] dating to before the ratification of RFC 7748 (or Chrome 50), but no assignees yet. I also don't see any discussion in the libressl public mailing list archives at marc.info, though I'm not sure those are complete. It seems like it may become an even bigger breaking issue for a lot of folks as support becomes more widespread following standardization, inclusion in OpenSSL 1.1.0, and increasing major browser/OS support.

1. https://github.com/libressl-portable/portable/issues/114

2. https://github.com/libressl-portable/openbsd/issues/58


No idea when LibreSSL will add X25519 or X448, as defined in RFC 7748 - Elliptic Curves for Security.

Here's RFC 7748 https://tools.ietf.org/html/rfc7748

Here's the errata for RFC 7748 https://www.rfc-editor.org/errata_search.php?rfc=7748


  > (X25519 is also the default between Chrome and
  > web servers: https://www.chromestatus.com/feature/5682529109540864 )
I thought chacha20/poly1305 was the default. That's what I get when I connect to https://www.google.com with Chrome (chrome-53), at any rate.


TLS cipher suites have five parts:

  1. key exchange:
    PSK - for embedded only
    RSA - obsolete because it doesn't provide PFS
    DHE - secure only if 2048 bits and up
    ECDHE - usually using P-256, secure
    ECDHE with Curve25519, called "X25519" - secure
    CECPQ1 - Google experiment in post quantum crypto
  2. authentication:
    PSK - for embedded only
    RSA encryption/decryption - obsolete because it doesn't provide PFS
    RSA signing and verification - secure if keys are 2048 bits and up
    ECDSA signing and verification - usually over P-256, secure
    EdDSA signing and verification - draft standard, uses Curve25519 and Curve448, secure
  3. cipher (for confidentiality):
    RC4 - disallowed
    3DES - obsolete because of sweet32
    AES-128 - good, requires AES hardware to be both fast and secure
    AES-256 - same as AES-128 but is required for post-quantum and against parallel attacks on many keys
    CHACHA20 - good, is fast on generic hardware
  4. MAC (to protect against tampering which usually breaks confidentiality):
    HMAC-MD5 - obsolete
    HMAC-SHA1 - ok
    HMAC-SHA256 and HMAC-SHA384 - no more secure than SHA1 for this use case
    GCM - faster than HMAC, requires CLMUL CPU instruction to be fast
    Poly1305 - fast and secure on generic hardware
  5. KDF used to generate symmetric keys:
    MD5+SHA1 - obsolete, probably ok
    HMAC-SHA1 - ok
    HMAC-SHA256 and HMAC-SHA384 - no more secure than SHA1 for this use case
Originally 5 was the same as 4 and was not specified separately. Also, many details omitted.

But anyway, chacha20-poly1305 is actually one of these [1]:

   TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
   TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256
   TLS_DHE_RSA_WITH_CHACHA20_POLY1305_SHA256
   TLS_PSK_WITH_CHACHA20_POLY1305_SHA256
   TLS_ECDHE_PSK_WITH_CHACHA20_POLY1305_SHA256
   TLS_DHE_PSK_WITH_CHACHA20_POLY1305_SHA256
   TLS_RSA_PSK_WITH_CHACHA20_POLY1305_SHA256
and in practice only the first two from the list are used. The "ECDHE" part can be regular ECDHE with P-256, or X25519.
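
If you want to check what your own build offers, or pin a context to just those two suites, the classic cipher-string interface is enough. A sketch against the OpenSSL 1.0.x-style API that LibreSSL also exposes (the strings are OpenSSL's spellings of the first two IANA names above):

    #include <stdio.h>
    #include <openssl/ssl.h>

    int main(void)
    {
        SSL_library_init();

        SSL_CTX *ctx = SSL_CTX_new(SSLv23_method());
        if (ctx == NULL)
            return 1;

        /* OpenSSL names for TLS_ECDHE_{RSA,ECDSA}_WITH_CHACHA20_POLY1305_SHA256.
           Fails if the library was built without ChaCha20-Poly1305. */
        if (SSL_CTX_set_cipher_list(ctx,
            "ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-CHACHA20-POLY1305") != 1) {
            fprintf(stderr, "no ChaCha20-Poly1305 suites in this build\n");
            return 1;
        }

        SSL_CTX_free(ctx);
        return 0;
    }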

1 - https://tools.ietf.org/html/rfc7905#section-2


This is a really good list. Those long cryptic string constants make a lot more sense now.


ChaCha20-Poly1305 is a symmetric cipher, used for encrypting your traffic. X25519 is a key exchange protocol using asymmetric cryptography, and is used during connection setup to establish keys for the symmetric cipher.

X25519 would appear under "key exchange" and ChaCha20-Poly1305 under "cipher".

With Chromium I just get ECDHE_RSA and AES_128_GCM, anyway.


I'm glad that VoidLinux is no longer the only distro in town that has switched to LibreSSL. And with Docker defaulting to Alpine, more OpenSSL/LibreSSL compatibility fixes will trickle into upstream projects. This is good news.

EDIT: Next, I predict a linux distro will, after having switched to musl, also support llvm/compiler-rt/libunwind/libc++ as base toolchain instead of gcc/libgcc/libstdc++


> EDIT: Next, I predict a linux distro will, after having switched to musl, also support llvm/compiler-rt/libunwind/libc++ as base toolchain instead of gcc/libgcc/libstdc++

You're probably right, but I hate that the work free software developers contribute to LLVM & clang will end up being taken proprietary by the likes of Apple. With GCC one knows that the users of one's code will always have the ability to run, read, modify & share that code, rather than having their rights stripped away.


Yes, some people won't give back. With a project like llvm/clang, however, most people do give back, because sharing is cool and because it makes their job easier: they don't have to re-patch for every new version of the compiler. Sony's team developing the PlayStation 4 toolchain wished at times that they could use newer versions of llvm/clang for their software, but couldn't because their version carried so many patches [1]. Because of this, after the PS4 was released, Sony developers sent a bunch of patches upstream, and they now work pretty closely with the rest of the developers. Some bits and pieces may not get merged upstream, but that's the same for any project, GPL or not.

[1] http://llvm.org/devmtg/2013-11/slides/Robinson-PS4Toolchain....


On the flip side, code from permissively-licensed projects (like LLVM/clang) can be incorporated into free software with GPL-incompatible licenses. I personally consider that gain to be more than worth the risk of proprietary forks.

Also, the licenses of neither GCC nor LLVM apply to programs compiled by either project. I don't think that's what you meant by the copyleft sentiment, but the wording of that bit is weird and it's worth clarifying just in case.


Please note that no company has the right to strip your copyright or license away. But they may choose to extend your software without giving you access to the extension, if the license permits that.


"after having switched to Musl" - Musl is not GPL either.

The BSDs have been used in proprietary software, but the open source projects continue as the centre of development.


NetBSD has shipped two installs, one with the gcc toolchain and one with the llvm toolchain, for some years now. Unwind compatibility was a problem but is getting better, I think. For the base distro it makes less difference perhaps, although more C++ code is coming in.


What is "unwind compatibility"?


The clang and gcc unwind libraries have slightly different interfaces.


thanks


It is quite unlikely that generic Linux distributions will switch to musl, as it lacks support for locales and NSS.


Right, that's why I wrote "_a_ linux distro", not "some linux distros". I'm sorry for not making that clearer.


Can Clang/LLVM compile Linux itself?


A not-yet-fully-upstreamed branch maintained by the Linux Foundation can, yes. But just as FreeBSD has to make exceptions to build certain packages with gcc, the kernel can only be built with gcc for the time being. If you're not using musl libc, I guess glibc might also require gcc, but that's an unsurprising coupling, likely due to the GCC C extensions it uses.

The important points are that such a distro would quickly improve clang and libc++ compatibility across the board and exercise the alternative toolchain a lot more. In fact, there isn't much incompatible code out there, and what existed has mostly been fixed thanks to the efforts of Debian, FreeBSD, and of course Apple, Bitrig and Gentoo. FreeBSD is the leading force behind making lld a viable alternative to binutils ld, and so far ThinLTO (which works with binutils, to be clear) is exclusive to llvm, so there's that.

https://aur.archlinux.org/packages/llvmlinux-git/

http://llvm.linuxfoundation.org/index.php/Main_Page


Can you remind me why LLVM/Clang can't compile the Linux kernel? Is it dependence on GCC extensions?


TL;DR: Yeah, GCC-specific compiler extensions.

GCC has some platform-specific symbols that are really just functions wrapping assembly. Most of this is related to crypto and AES-NI (built-in x86 AES operations). This breaks some crypto, which breaks some wifi/filesystem stuff.

The Xen driver uses a GCC-specific asm macro for counting NUMA nodes and CPU cores.

Dynamic compiling within the kernel, for say the BPF (Berkeley Packet Filter) JIT, requires that LLVM's compiler-rt have a custom wrapper. There is also some libgcc-specific header information.
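
For a concrete taste of the problem (an illustrative sketch, not actual kernel code): the kernel's static-key/jump-label machinery is built on GCC's "asm goto" extension, which clang currently rejects outright, so everything that touches it fails to build:

    #include <stdbool.h>

    /* "asm goto" lets inline assembly branch to C labels; the kernel
       uses it for runtime-patchable branches (static keys). GCC
       compiles this; clang errors out on the construct itself. */
    static inline bool feature_enabled(void)
    {
        asm goto("nop"      /* patched into a jmp at runtime */
                 :          /* no outputs permitted with asm goto */
                 :          /* no inputs */
                 :          /* no clobbers */
                 : enabled);
        return false;
    enabled:
        return true;
    }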


I think those are mostly fixed for C code, but assembly still has issues. Otherwise, dependence on exact gcc behaviour is an issue.


thanks



Musl isn't going to replace glibc anytime soon.


I recently built LibreSSL to replace OpenSSL on my laptop, which runs Arch Linux. After installing, pretty much everything works seamlessly so far. I had to rebuild Python because apparently the ssl module looks for RAND_egd (or something of that vein that LibreSSL has removed, and I didn't compile it with a shim). Other than that, dig is broken on my system ("ENGINE_by_id failed"), although I haven't bothered to fix it since drill works fine.
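
(If anyone else hits the Python build failure: the kind of shim I mean is tiny. A hypothetical stub matching OpenSSL's RAND_egd signature, which just reports that no entropy-gathering daemon is available:)

    /* RAND_egd() returns the number of bytes read from the EGD
       socket, or -1 on failure. LibreSSL dropped EGD support
       entirely, so a compatibility stub can only fail cleanly. */
    int RAND_egd(const char *path)
    {
        (void)path;
        return -1;
    }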

It's nice to see LibreSSL being picked up by Linux distributions. I wish other major distros did this (I'm looking at you, Debian). IIRC, Alpine is often used to build Docker images. If that's still the case, I'd say this is good news.


> I'm looking at you Debian

You can read up on the packaging efforts at https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=754513 — there hasn’t been any activity in a while, so if you have something to add, please post there (but please refrain from +1/me too posts).


What's the general etiquette regarding "bumping" Debian bugs?


There is a LibreSSL overlay in Gentoo, which patches various software that would otherwise fail to build with LibreSSL. Currently these are the patched packages:

   dev-qt/qtcore
   dev-qt/qtnetwork
   net-misc/socat
It used to be a bit painful to set up, but things have improved; now I only need this in make.conf:

  USE="... libressl"
  CURL_SSL="libressl"
And this in /etc/portage/package.mask/openssl to mask it:

  dev-libs/openssl
There is still some software that fails to install (with a conflict between openssl and libressl), but none that I currently need.


Worried about OpenSSL, but fine with running drill? ;-) drill segfaults for me on a regular basis.


To be fair, I'm not a heavy user of either dig or drill. I use them mostly to check that my local resolver (unbound) is still responding and that the DHCP-distributed resolver isn't handing out garbage.

My usage probably falls into the well-tested part of the code, I guess.


It does seem like a very practical solution. Hopefully we'll see this in the bigger distros once the smaller distros prove it's doable.


It's worth remembering that OpenSSL has faithfully served the community for many years, most of those years with almost no financial support. Few projects would survive the scrutiny it has undergone. These guys deserve some credit.


As with most security-related projects, it takes very little for the house of cards of trust to tumble.



How has LibreSSL stood up lately to the relatively frequent CVEs in OpenSSL over the past few months? I know the initial months were a frenzy of removing garbage and whole classes of problems (#yadf), which preempted a few CVEs, but I haven't been paying attention to the commit logs closely enough to know whether it was also susceptible to them.


A quick look through the release notes (https://www.libressl.org/releases.html) seems to indicate that they've only been affected by six CVEs since October 2015.

I haven't followed LibreSSL recently, but previously many of the CVEs that affected OpenSSL were for features that LibreSSL had ripped out.



I tried LibreSSL on OS X (it's in Homebrew); the binary is around half the size of the equivalent OpenSSL release. Kudos to the LibreSSL folks for stripping out so much junk - OSs that people don't use and ciphers that people shouldn't be using - and producing a more auditable, sensible codebase.


I would hope that "OSs that people don't use" do not contribute to the binary size on a different OS, since the relevant source files are simply not built for the target OS.

Of course, the operative word in this sentence is "hope".


The easy example: because the memory allocator on some systems is slow, OpenSSL had its own malloc implementation (built on top of the system malloc). There was a configure flag to use the system malloc directly, but because it was seldom used, it didn't actually work. Which meant that because an OS I don't use has a crappy malloc, I was stuck with a bigger OpenSSL binary.

This turned out to be extra important because of Heartbleed. The custom malloc increased the odds that the memory being read (which shouldn't have been) contained something sensitive rather than just an unmapped page. It also bypassed the vulnerability-mitigating strategy of OpenBSD's malloc, which would almost certainly have caused a crash rather than a leak, and a crash would have led to the issue being found and fixed.
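
The pattern, as an illustrative sketch (my simplification, not OpenSSL's actual code): freed buffers go onto a free list and are handed straight back, old contents intact, so an over-read lands on stale secrets instead of an unmapped guard page:

    #include <stdlib.h>

    #define BUF_SZ 4096     /* hypothetical fixed block size */

    struct freebuf { struct freebuf *next; };
    static struct freebuf *freelist;

    /* Hand out BUF_SZ-byte blocks, recycling freed ones LIFO. */
    static void *buf_malloc(void)
    {
        if (freelist != NULL) {
            void *p = freelist;
            freelist = freelist->next;
            return p;       /* previous contents still present! */
        }
        return malloc(BUF_SZ);
    }

    static void buf_free(void *p)
    {
        /* No unmapping, no junk fill: whatever secrets were in the
           buffer survive, right where the next over-read will land. */
        struct freebuf *f = p;
        f->next = freelist;
        freelist = f;
    }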


LibreSSL got the OpenBSD treatment, so the base version is for OpenBSD only, and then there's a portable version for other OSs.

For OpenSSL, I wouldn't know, but fear the worst.


Go see the talk on the reasons for the LibreSSL fork.

Don't just fear the worst, know the worst.


Bob Beck's talk on LibreSSL is really informative, scary and entertaining.

https://www.youtube.com/watch?v=-4psTQ1sX7s


This is the other one given by Bob Beck: https://www.youtube.com/watch?v=GnBbhXBDmwU - a somewhat different talk, but the same slides.


That's the one I was talking about. Sorry, I didn't have the link on me.


Oh, I saw that. I just want to forget it.


The "Big Endian AMD 64 support" slide and discussion should be required viewing in a CompSci ethics class.


Is that the one where they check for endianness changes while the program is running? One of those moments where I was stunned by the stupidity of what I was hearing.


The OSSL outlook is that both the OS and the hardware were designed by untrustworthy paranoid schizophrenics who wanted OSSL to crash and burn. I wouldn't expect intelligence, sanity, rhyme, or reason.


Yep. Hearing their solution to the problem was just beyond reason. It's like they didn't just set some compile-time flags and constants for the architecture, but decided that their code had to calculate it every time the variable was needed. The line about finding out if /dev/null had moved was also pretty scary.


> finding if /dev/null moved

Couldn't find that, do you have a reference somewhere? (I only watched the video that mrweasel posted.)


43:16 into the video I posted

https://www.youtube.com/watch?v=-4psTQ1sX7s


I remember cracking up, then forwarding it to all kinds of people and forums, along the lines of "remember this if you try to justify trusting the OpenSSL codebase for anything".


...Which is why LibreSSL was started in the first place: Trusting OSSL is fundamentally insane.


I've been running nginx linked against LibreSSL on our frontend since February. We've not seen any issues, and LibreSSL has been a perfect drop-in replacement for OpenSSL so far.


This is a really short note. I'm curious, was BoringSSL evaluated at all as an alternate substitute?


LibreSSL aims to be a mostly API-compatible replacement for OpenSSL; BoringSSL does not. It's literally the first text on the BoringSSL web page:

    BoringSSL is a fork of OpenSSL that is designed to meet Google's needs.
    
    Although BoringSSL is an open source project, it is not intended for general
    use, as OpenSSL is. We don't recommend that third parties depend upon it. Doing
    so is likely to be frustrating because there are no guarantees of API or ABI
    stability.


The only thing I miss in LibreSSL is SRP. How would the HN crowd recommend picking an alternative to SRP?


Was the issue that the OpenSSL SRP stuff was wonky, or that the LibreSSL guys hate SRP? I hope that it's the latter, because SRP is a really good protocol.

As an aside, does anyone know if there's been progress with elliptic curve SRP? The last literature review I did was … shaky.


They looked for maintainers [0] and later removed it, once the OpenSSL SRP code was considered bad and turned out to be vulnerable as well [1].

[0]: http://undeadly.org/cgi?action=article&sid=20140429062932

[1]: http://www.openbsd.org/papers/eurobsdcon2014-libressl.html


I hope LibreSSL gets the traction it needs in the coming months and years. There are great wins in using it, and it is worth moving the FOSS stack over to LibreSSL instead of OpenSSL to reduce the number of security bugs we are facing today.


Regardless of how much love it gets, it will never be safe.

https://groups.google.com/forum/m/#!msg/boring-crypto/48qa1k...


I've been hoping for a C/C++ compiler with no/limited undefined behavior for years. Sadly, the people writing compilers have lost touch with the people using them, so this is very unlikely to happen.

But I have to say, if one is going to do this, then C isn't a good target. Especially for crypto, C lacks extremely useful semantics such as rotate-left, rotate-right, add-with-carry and subtract-with-borrow: things that would greatly accelerate libraries that work with 512-bit multiplies, like DJB's Curve25519. And then there are useful math operators, like power-of, that could be added.
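
Today the best one can do is hope the compiler pattern-matches an idiom. For example (a sketch; rotl32 is just an illustrative name), the well-known UB-free rotate:

    #include <stdint.h>

    /* Portable rotate-left that GCC and clang recognize and compile
       to a single rol instruction on x86. The masking keeps both
       shift counts in range, avoiding the undefined behaviour of the
       naive (x << n) | (x >> (32 - n)) form when n is 0. */
    static inline uint32_t rotl32(uint32_t x, unsigned n)
    {
        return (x << (n & 31)) | (x >> (-n & 31));
    }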

You could also put requirements in there like "warn/error on variable-length divide" to catch surprise gotchas such as x86 CPUs taking a data-dependent amount of time to divide. In fact, constant-time execution could be a compile-time check.
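
Until then, that discipline is enforced by hand. A sketch of the usual branch-free comparison used for MAC checks; nothing stops a clever optimizer from undoing it, which is exactly why it ought to be a language-level guarantee:

    #include <stddef.h>

    /* Constant-time comparison: examines every byte regardless of
       where a mismatch occurs, so timing doesn't leak the position
       of the first difference. Returns 0 iff the buffers are equal. */
    static int ct_compare(const unsigned char *a, const unsigned char *b,
                          size_t n)
    {
        unsigned char diff = 0;
        for (size_t i = 0; i < n; i++)
            diff |= a[i] ^ b[i];
        return diff;
    }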

The key would be to keep it "as much C" as possible. Rust and co are going to face barriers by being so incredibly different from C.


The biggest problem Rust has with crypto is that the canonical compiler is built on LLVM, so no matter what a human writes, it'll get ground up by the optimizer and may turn into something quite different. Rust has an advantage in that it doesn't have the undefined-behaviour gotchas of modern C/C++, so LLVM won't optimize away all your sanity checks, but that's not a strong enough guarantee for crypto work—as you point out, you need strong control over assembly instructions to maintain constant-time execution.


That problem is true of practically any language, even C and C++, and the solution is the same (and well accepted) for all of them: write a small set of controlled primitives in raw assembly, and let the "high level" language compose them. Obviously, using assembly forgoes safety for those pieces of code, but being primitives, they're typically fairly small and have straightforward memory access patterns and behaviour.
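
Concretely (a hypothetical sketch; ct_select_u32 stands in for a primitive you'd implement in a separate .S file), the high-level side just declares the primitive and composes it:

    #include <stdint.h>

    /* Implemented in hand-written assembly (e.g. a ct_select.S) so
       no optimizer can reintroduce a branch. Returns a if mask is
       all-ones, b if mask is zero. */
    extern uint32_t ct_select_u32(uint32_t a, uint32_t b, uint32_t mask);

    static uint32_t ct_max(uint32_t a, uint32_t b)
    {
        /* all-ones mask when a > b, built without an explicit branch */
        uint32_t mask = (uint32_t)0 - (uint32_t)(a > b);
        return ct_select_u32(a, b, mask);
    }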


I wish all the djb-like people could gather in a room and give gifts to the world.


Yes, that's a valid argument, but if you think everybody will switch to Oberon any time soon, prepare to be disappointed.

In any case, LSSL can be a heck of a lot safer than OSSL, and, in fact, already is.


What about switching to Rust? https://github.com/ctz/rustls


Impressive, but I'm willing to bet nobody will want to rewrite all of open/libreSSL, one of the most popular cryptographic libraries out there, in Rust.


https://crates.io/crates/ring is BoringSSL being ported over. Eventually it'll be all Rust and asm.

(And it is what rustls uses.)


Well then, maybe. Consider me corrected.


Nice, thanks for pointing it out.


The Oberon ship sailed a long time ago. I still like it, but that's just a nostalgia thing.

It was a great experience in the mid-90s, but a modern language in 2016 requires much more than what it offered, which is why I eventually got disappointed with Go's feature set.

I would rather see Ada, SPARK, Swift, Rust, ATS, Idris, F*, Formal Methods or whatever else that ensures that our code runs on top of solid foundations.


I'm not sure I entirely agree, but fair 'nuff.

In any case, my point still stands: people aren't going to rewrite all that C code. The best we can hope for is for some UB to actually be standardized, so that the sociopaths who write the compilers are reined in a bit.


I very much doubt it, as it goes against the C culture of micro-optimizing each line of code as it is written, without any feedback from a profiler, because "the coder knows best!".

Since learning C (in 1992), I have seen quite a few people write code to go as fast as it can and apply compiler optimization tricks in use cases where it hardly matters.

Also many of the issues regarding UB go back to the early ANSI C days, when it was the wild west of C dialects outside UNIX and no vendor wanted to give up on their own extensions.

As for C and C++ compiler writers' views on UB:

https://www.youtube.com/watch?v=yG1OZ69H_-o


I know perfectly well what their views are. I did describe them as sociopaths, remember? UB, to some degree, might be reasonable. C-style nasal-demon UB, with propagating conditions so that you can't reasonably check for integer overflow, is possible to live with, but insane.

C's culture of hyper-optimization is ridiculous. Unfortunately, while Rust may well eat C++'s lunch, there's very little that is competitive with C at the same level of abstraction, so for some work (kernel development, perhaps; embedded systems, definitely) it will remain significant for some time, if not outright dominant.


Oh good. Finally.

I want to see OSSL burn. Because it sure as heck doesn't deserve to live.



