
Imagine the problem of tracking down all the different versions of a library when an exploit is found. If you have 20, or even 50, different apps that bundle openssl, imagine the hassle of making sure each one is vetted and updated as needed, not to mention the delay in getting all the different packages rebuilt and pushed (which may be a small delay, or may not, depending on the vendor).
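
As a rough illustration of why that hunt is painful: bundled or statically linked copies don't show up in the package manager at all, so you end up scanning binaries yourself. A sketch (the paths are examples, and this only catches binaries that embed the usual OpenSSL version string):

  # scan common binary locations for embedded OpenSSL version strings;
  # vendored/static copies won't appear in rpm/dpkg queries
  for f in /usr/bin/* /usr/sbin/* /opt/*/bin/*; do
    v=$(strings "$f" 2>/dev/null | grep -m1 '^OpenSSL [0-9]')
    [ -n "$v" ] && echo "$f: $v"
  done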



You regularly use 20 or 50 end-user applications that use openssl?

I'm not talking about low-level OS applications here... I'm talking about end-user applications and major exposed services.

For that matter, each of those applications needs to be updated, vetted, and packaged... it's a matter of the level and completeness of the packaging.


What's considered an end-user application? Installed languages (Perl, Python, Ruby, etc.)? Would you consider all of regular userspace one "app", or split it into multiple chunks (dev tools, web tools, etc.)? wget, curl, and Chrome all use openssl.

smtpd? httpd? sqld? sshd? ntpd?

This may be illuminating; it's the list of RPMs that have a requirement containing the string "ssl":

  # for RPM in `rpm -aq --qf '%{NAME}\n'`; do rpm -qR $RPM | grep -iq ssl && echo -n "$RPM "; done
  python-ldap libssh2 mailx Percona-Server-server-55 abrt-addon-ccpp libfprint
  qpid-cpp-client-ssl perl-Crypt-SSLeay ntp httpd-tools openssh openssl-devel
  pam_mysql redhat-lsb-core Percona-Server-client-55 sssd-common perl-IO-Socket-SSL
  qt squid openssl098e elinks ipa-python python-nss compat-openldap
  Percona-Server-shared-55 systemtap-runtime perl-Net-SSLeay python-libs
  Percona-Server-shared-compat openssl systemtap-client qpid-cpp-server-ssl
  certmonger python-urllib3 openssh-server nss_compat_ossl openldap percona-toolkit
  pyOpenSSL git sssd-common-pac wget ntpdate openssh-clients openssl
  systemtap-devel postfix nss-tools perl-DBD-MySQL libcurl curl sssd-proxy libesmtp
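
On an RPM-based system you can also ask rpm directly which installed packages depend on the openssl shared-library capabilities; a sketch (the exact sonames vary by distro and release):

  # packages whose dependencies name the openssl sonames directly
  rpm -q --whatrequires 'libssl.so.10()(64bit)' 'libcrypto.so.10()(64bit)'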

That's just for SSL, which, while it's used in many applications and services, is generally limited to items that communicate externally. What about when it's a core library that everything uses? tzdata updates often, and we want correct time representations, right? Gzip is used by a lot of applications. What about glibc?
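
And it's not just finding the packages: even after an upgrade, long-running processes keep the old, deleted library mapped until they restart. A sketch of a sweep for that (Linux-specific, relies on the /proc layout):

  # list processes still mapping a shared object that was replaced on disk
  for pid in /proc/[0-9]*; do
    if grep -q 'lib.*(deleted)' "$pid/maps" 2>/dev/null; then
      echo "${pid#/proc/}: $(cat "$pid/comm" 2>/dev/null)"
    fi
  done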

It's not an easy problem, but that's why I'm interested in how it turns out.


If you're talking about a server, then the exposed application servers (those listening on ports) would be better served as an abstraction from the core OS (LXC is a decent solution for that, as are BSD jails, or even Solaris containers)...
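
With LXC, standing one of those up is cheap; a sketch using the download template (the container name, release, and service are placeholders):

  # create and start a container for the exposed service
  lxc-create -t download -n webfront -- -d ubuntu -r trusty -a amd64
  lxc-start -n webfront -d
  lxc-attach -n webfront -- apt-get install -y nginx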

As to end-user applications, if you are developing using Perl, Python, etc., then developing against the host is your choice... that said, deploying the result separately might be a better option, and a container is a valid one.

You bring up a great example... OpenSSL has a minor change, you upgrade that package, and everything runs fine, except a Ruby gem you rely on is broken, and your system is effectively down, even if that Ruby app isn't publicly facing and is only used internally. If they were in isolated containers, you could upgrade and test each app separately without affecting the system as a whole. Thank you, you've helped me demonstrate my point.
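
A sketch of what that looks like with Docker (the image and tag names are made up): each app's image carries its own openssl, so the bump is rebuilt and tested per app, and anything broken keeps running its old image untouched:

  # rebuild just the Ruby app's image with the patched openssl
  docker build -t ruby-app:openssl-fix - <<'EOF'
  FROM ubuntu:14.04
  RUN apt-get update && apt-get install -y ruby openssl
  EOF
  # verify it in isolation before cutting anything over
  docker run --rm ruby-app:openssl-fix openssl version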

That said, in general there aren't dozens of applications run on a single system that matter to people... and most of those that are could well be isolated in separate containers and upgraded separately. Via docker-lxc, your Python apps can even use the same base container and not take as much space... same for Ruby, etc., and when you upgrade one, you don't affect the rest.
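
The space sharing falls out of image layering; a sketch (tags made up):

  # two Python apps built FROM the same base share its layers on disk
  docker build -t app-a - <<'EOF'
  FROM python:2.7
  RUN pip install flask
  EOF
  docker build -t app-b - <<'EOF'
  FROM python:2.7
  RUN pip install requests
  EOF
  # the python:2.7 layers are stored once; each app only adds its delta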

I've seen plenty of monolithic apps deployed to a single server, where in practice upgrading an underlying language/system breaks part of it.

Myself, for Node development, I use nvm via my user account (there are equivalents for Perl, Python, and Ruby), which lets me work locally with portions of the app and its tests, but deployment is via a Docker container, with environment variables set so the application can run. It's been much smoother in practice than I'd anticipated.
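
Concretely, the workflow is something like this (the version numbers, ports, and image name are placeholders):

  # local: pick a node version per user, no system packages touched
  nvm install 0.10
  nvm use 0.10
  npm test
  # deploy: the same app in a container, config injected via the environment
  docker run -d -p 3000:3000 -e NODE_ENV=production -e PORT=3000 my-node-app:latest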


I'm not arguing that there's no advantage to containerized packaging, just that it also comes with its own set of problems. I'm not sure what weighs more at this point: the advantages of good encapsulation, or the problems caused by the system itself being harder to query. I'm not sure any level of encapsulation is worth it if it leads to a service being exploited when it otherwise wouldn't be, because just one instance of an underlying library upgrade was missed. But this is a solvable problem, it's just a matter of tooling.
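
Even crude tooling closes a lot of that query gap; a sketch (assumes docker exec is available and the containers ship an openssl binary):

  # ask every running container what openssl it has; flag the ones you can't tell
  for c in $(docker ps -q); do
    printf '%s: ' "$c"
    docker exec "$c" openssl version 2>/dev/null || echo 'no openssl binary found'
  done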

I'm not arguing against more containers, I'm just making the point it's not all rainbows and kittens. There are problems, but if we address them and solve them, we come out well ahead of where we were before.



