
If you think you're safe: it's the same thing with Linux. Yes, good distros sign their blobs and you can probably verify that with builtin tools.

However, consider how distros generate their signed binaries:

1) A packager downloads a random tarball off the internet, often over HTTP and/or unsigned and unverified.

2) The packager uploads the same tarball to the distro build system (you trust them, right?)

3) The packager's script for building the program or library is executed by the build server (you trust all of the packagers, right? they have implicit root access to your machine during pkg install.)

4) The packager's script likely invokes `./configure` or similar. Now even if you trust the packager, the downloaded source has arbitrary code execution. You verified it, right???
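To make point 4 concrete, here is a minimal, entirely hypothetical sketch: a `configure` script is just a shell script, so it runs whatever its author put in it, with the build user's privileges.

    cat > configure <<'EOF'
    #!/bin/sh
    echo "checking for gcc... yes"
    touch /tmp/i-ran-arbitrary-code   # could just as easily fetch and run a payload
    EOF
    chmod +x configure
    ./configure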

(Not trying to advocate for webcrypto. And I'm a Linux user. But I'm also a packager, and I have some awareness as to how one would go about pwning all users of my distro.)




Sure, but you have to trust someone. How do you know the baker you bought some bread from didn't put hallucinogenic drugs into your bread this morning?

The key is to limit the number of people you trust and to remove instances where you mistakenly trust more people than you believe you do. When downloading a .exe over HTTP, you trust an unknown number of people working at each company your packets hop through on the way to the server. You are implicitly trusting each and every one of them with direct root access.

With a Linux distro this is different: you are trusting the distro and any employees/volunteers of that distro. You trust that the distro is actively vetting the people involved - or is at least in a position to publicly name them if they break the trust of users, etc. Ultimately you do still have to trust someone, though.

Debian, at least, has proven to be fairly trustworthy so far. Who has access to ae-5.r23.londen03.uk.bb.gin.ntt.net and what do I do if they MITM my traffic? EDIT: And why can't they spell London correctly?


Londen is the Dutch spelling of London, so could be a network link maintained by some Dutch provider?


ntt.net is Japanese...



> 1) A packager downloads a random tarball off the internet, often over HTTP and/or unsigned and unverified.

Then they're being remiss in their duties.
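A conscientious packager verifies the upstream signature or published checksum before importing anything; something like this (filenames hypothetical):

    gpg --verify foo-1.2.tar.gz.asc foo-1.2.tar.gz
    sha256sum -c foo-1.2.tar.gz.sha256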

> 2) The packager uploads the same tarball to the distro build system (you trust them, right?)

Yes, I do.

> 3) The packager's script for building the program or library is executed by the build server (you trust all of the packagers, right? they have implicit root access to your machine during pkg install.)

There is at least traceability here. There are a large number of packagers for my distro, true - but they are required to personally sign for the packages they upload. If one of them turned out to be malicious, I don't think this would be without consequence.


Honest questions: what do you think the consequences would be, and how do you think they would be enforced?


I think they'd be banned from the project. If it looked to be malicious, I can see a lawsuit happening, though that would probably be a slow process and end in a settlement of some sort. Packager identities are verified against legal identity documents; depending on your threat model that may or may not be an effective barrier - a nation state can probably afford to burn a few identities, but regular criminals not so much.


It might not be malice on the part of the packager. It could be that their machine is deliberately compromised.


It would certainly cause a big fuss.

First, the identity of that person would be stigmatized to the point where it could no longer be used to gain trust with other projects. Their publishing rights would certainly be revoked.

Then all packages published by them would need to be analyzed for further exploits, and discussions would follow on how to avoid similar issues in the future. If possible, a patch/fix would be published by the distribution.


Well... To me there are two very serious issues with typical packages for Linux (and I'm a long-time, die-hard Linux user, so I'm not criticizing Linux here).

One of them being that you typically must be root to install packages. This means that if anyone manages to slip a backdoor into any moderately used package, it probably means "root" on many Linux systems.

Some people have been complaining about that for years. Thankfully we're now beginning to see things like "functional package managers", where packages can not only be installed without admin rights but can also be "reverted" back to exactly the same "pre-package-installation" state if wanted.

The other very serious issue is that most package builds are not deterministic. I think everybody should begin to take security very seriously and realize that deterministic builds are the first (and certainly not the only) step to take towards software which can be trusted a bit more.

There are thankfully quite a few people now taking the deterministic-builds route, and one day we should, at last, be able to create the exact same package on different architectures and cross-check that we've got the same results. This won't help with backdoors already present in the source code, but it's already going to be a huge step forward.
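The cross-check itself could be as simple as this sketch (filenames hypothetical):

    # two independent parties build the same source with the same toolchain and settings
    sha256sum builder-a/foo_1.0_amd64.deb builder-b/foo_1.0_amd64.deb
    # identical hashes mean bit-identical packages; any difference has to be explained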

So, yup, I take it that, of course, as a packager you know how to pwn all your users.

As a user I wish we had a) deterministic builds, b) functional package managers, c) packages which can be installed without being root.

If we had that, there would be fewer ways to pwn all the users of one package at once. I've been a Debian user since forever (and I love the rock-stable Debian distro) and I'm not expecting Debian and other big distros to move to such a scheme anytime soon (it's probably too complicated), but there may be a real opportunity here for newer distros who'd want to focus on security.


> One of them being that you typically must be root to install packages.

  ./configure --prefix=$HOME/opt/$pkgname && make && make install
It's not pretty, I'll admit, and package managers could probably help here, if they're building from source.

> but can also be "reverted" back to exactly the same "pre-package-installation" state if wanted.

For system-wide packages, most package managers do support this. Since they don't support user-only packages, of course reverting an install isn't going to happen.

If you've installed it yourself, `rm -rf $HOME/opt/$pkgname`.

> The other very serious issue is that most package builds are not deterministic.

Deterministic builds are hard.

> be able to create the exact same package on different architectures and cross-check that we've got the same results

Unless you're cross-compiling, different architectures by definition net you different builds. Even within an architecture, differences in feature sets (take advantage of Intel's shiniest instructions?) and compile-time options (use this library?), where to install, etc. cause the number of possible build combinations to multiply quickly. Binary distros like Debian have it a bit easier, as they usually distribute a lowest-common-denominator binary with all features, but some distributions (I'm a Gentoo user) let you tune the system more.
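For example, on Gentoo the same package built with different USE flags is a genuinely different binary (package and flags illustrative):

    USE="alsa -nls" emerge --oneshot media-sound/aumix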

Even if you had all the things you name, you still have to trust whoever is packaging your software. Or build it yourself after reading the entire source. (And then there's the chicken-and-egg problem with the compiler.)


Manual builds work fine for some programs, but when a program has a dozen dependencies that also have to be built manually, and those have their own dependencies...


Just a quick note for those following along at home: I recommend having a look at xstow for managing custom package trees. Basically it's:

    mkdir ~/opt/xstow
    cd /tmp
    # get package (verify signature)
    cd package
    ./configure --prefix=$HOME/opt
    make
    make install prefix=$HOME/opt/xstow/package-version
    cd ~/opt/xstow
    xstow package-version # xstow -D to "uninstall"
I find that helps a lot when you need to install new versions, and don't want to worry about cruft left over -- and also simplifies handling of PATH, MANPATH, LD_LIBRARY_PATH and LD_RUN_PATH.

Some packages are a little harder, but can sometimes be tricked to behave by moving ~/opt/xstow out of the way, doing a make install and then moving ~/opt to ~/opt/xstow/package-version-etc and xstow'ing.
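Since xstow symlinks everything under ~/opt, a single set of exports (paths assuming the layout above) covers every stowed package:

    export PATH="$HOME/opt/bin:$PATH"
    export MANPATH="$HOME/opt/share/man:$MANPATH"
    export LD_LIBRARY_PATH="$HOME/opt/lib:$LD_LIBRARY_PATH"
    export LD_RUN_PATH="$HOME/opt/lib:$LD_RUN_PATH"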



> The other very serious issue is that most package builds are not deterministic.

It's virtually impossible to create deterministic builds of common software. There are random sources of data and variables all over the place. And more importantly, deterministic software != secure software. You could make a perfectly deterministic piece of code, compile it, run it, all the same on all hosts. It can still be rife with security holes.
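A couple of the usual culprits, as a sketch:

    # timestamps and archive metadata are classic sources of non-reproducible output
    echo "#define BUILD_DATE \"$(date)\"" > version.h   # differs on every build
    tar czf dist.tar.gz build/                          # embeds mtimes, owners and file ordering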

I don't know what you mean by 'functional package managers'. There's plenty of 'functional' package management software out there used by millions of people every day. If you just mean easier to use, there's that too.

You can already install software without being root, very easily in fact. It just won't work very well because a lot of software is designed to operate with varying levels of user and group permissions, and varying system capabilities. And again, more importantly, there are plenty of privilege escalation exploits released all the time that could get to root from your user. Malware doesn't even need to be root if all it wants is control over your browser sessions or to siphon off your data. Root-installed software is as big a deal to a single user system as a vuln in your copy of Adobe Flash.


> It's virtually impossible to create deterministic builds of common software.

Nope, it's definitely not. If projects like Tor and Mozilla (they're working on it) can do it, then the 99.99% of packages out there which are less complicated than Tor / Mozilla can do it too.

> It can still be rife with security holes.

You're just rewriting what I wrote: I didn't say it would mean the software would be secure. I wrote that it would already be a huge step forward.

> "I don't know what you mean by 'functional package managers'."

I mean for example this:

nixos.org
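A rough sketch of what that buys you (package name illustrative):

    nix-env -i hello        # installs into the user's own profile, no root needed
    nix-env --rollback      # flips the profile back to its previous generation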

> Root-installed software is as big a deal to a single user system as a vuln in your copy of Adobe Flash.

Definitely not. Especially in a system like Linux where it's easy to have multiple user accounts (including one used just for surfing). I'm a "single user" and I do have several user accounts on a single machine (including a user account which I use only for surfing the Web). No Adobe Flash here btw and no Java applets either (I'm a dev and I do typically target the JVM, but there's no way I'm allowing Java applets in a browser) ^ ^

You say: "deterministic builds cannot be done", "there's no point in having deterministic builds because there could still be security holes", "local exploit is as bad as root exploit on a single-user machine"...

And I disagree with all that. And thankfully there are people disagreeing with you too and working on tomorrow's software packaging/delivery methods.

I thank Tor, for example, for showing us the way. The mindset really needs to change from "it cannot be done, it's too complicated" to "we can do it, let's follow the lead of the software projects showing the way". There's a very real benefit for users.

The "why":

https://blog.torproject.org/blog/deterministic-builds-part-o...

The "how":

https://blog.torproject.org/blog/deterministic-builds-part-t...

Honestly I simply cannot understand why there are still people arguing that deterministic builds aren't a good thing or people arguing that it cannot be done.

It can be done. And it's a good thing.


"A packager downloads a random tarball off the internet, often over HTTP and/or unsigned and unverified."

Unless the packager is on the mailing list for the project, which many are as it helps them keep up to date on changes.

"The packager uploads the same tarball to the distro build system (you trust them, right?)"

Now the package is in one place, so when you say, "SSH on Fedora seems to open a connection to this random server in Ft. Meade!!!" everyone else can check and see if that is what is happening. Now you have thousands of people investigating the bug -- not so bad. Compare this to, "I downloaded something that is supposed to be PuTTY, which I found via a Google search, and it is acting funny!"

The fact that everyone who uses Fedora or Ubuntu is running the same code is pretty helpful. It is not much, but it does help.

"The packager's script for building the program or library is executed by the build server"

In a chroot jail, or an SELinux sandbox, or a VM, or any number of other environments that help to isolate the build process from the rest of the system. In theory, the build server has quite a bit of protection from malicious packagers.
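Fedora's mock and Debian's pbuilder are examples of this; a rebuild there looks roughly like (filenames hypothetical):

    mock -r fedora-21-x86_64 --rebuild foo-1.0-1.fc21.src.rpm   # builds inside a throwaway chroot
    sudo pbuilder build foo_1.0-1.dsc                           # Debian: builds inside a base chroot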

Also worth noting is that packagers' actions are logged and would probably be audited if a user sounded the alarm and nobody could figure out what was happening. It would take a lot to pwn the users of a distro in any meaningful way, because keeping it secret is hard -- your victory would be short-lived.


The whole "package signing" thing can be validated against the source tarball with a bit of due diligence. The key is "source packages". As in, `apt-get source`. See:

http://askubuntu.com/questions/28372/how-do-i-get-the-source...

When you do that you'll get the original tarball that was used to create the package, along with any patches. You can compare that tarball to one you can find on the Internet (usually in more than one location), and they usually have md5/sha1 checksums.
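For example (package name and upstream URL hypothetical):

    apt-get source foo                        # fetches foo's .dsc, .orig tarball and distro patches
    wget http://example.org/foo-1.0.tar.gz    # an independently obtained upstream copy
    sha1sum foo_1.0.orig.tar.gz foo-1.0.tar.gz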

The rabbit hole goes very deep and yet there are people (like me!) who actually do do that sometimes. I suspected that the tuxtyping package may have been modified (due to corrupt .wav files being included) and went through the whole rigamarole of validation. I double and triple-checked the signatures against the binaries, sources, and everything else I could find in the package repositories (and mirrors).

Turns out it was just some filesystem corruption that added some extra bytes to the tail end of those .wav files. They're harmless.


While this is all true, there are a couple of other important considerations:

1) packagers have a history with their packages. They fetch from the same source, verify with the existing GPG key, etc. They aren't fetching from a different random source for each build. As part of the package review process the upstream source should be reviewed and confirmed.

2) Everyone in the distribution is using the same code. Unlike Windows/OSX the users get the same binaries from a single source. This allows problems to be corrected quickly for everyone at the same time.


consider how distros generate their signed binaries

Do you have any actual evidence that Linux distros do packaging this way?

But I'm also a packager, and I have some awareness as to how one would go about pwning all users of my distro.

And do you actually do so? If so, please tell me which distro you package for so I can avoid using it. If not, why do you think other packagers do?


Different distros require different metadata for their packages. But in general, amongst the top 10 distros, almost all metadata in a package is optional. You don't have to specify the URL where it came from or who the author was in order to get a package accepted. Some distros do require a URL, mainly because they build from source to install on a system. But other distros merely accept a source package which bundles the source code; that is built on their build servers and released after being peer-reviewed. But the peer review is a manual process, so it's human-fallible.

As an example, let's compare the way two distros (Fedora and Debian) package an old piece of software: aumix.

Taking a look at this spec file [1] for Fedora, we see two pieces of metadata: a URL to a homepage, and a URL to the software. The homepage URL is not used for packaging at all; it's merely a reference. The URL to the file can be used to download the software, but if the file is found locally, it is not downloaded. And guess what? That source file is provided locally along with the other source files and patches in a source package. So whatever source file we have is what we're building. This file doesn't contain a reference to any hashes of the source code, but the sources file [2] in Fedora's repo does.

With Debian we have a control file [3] that defines most of the metadata for the package. Here you'll find a homepage link, which again isn't used for builds. The path to a download is contained in a 'watch' file [4], which is again not referenced if source is provided, and generally only used to find updated versions of the software. There are no checksums anywhere of the source used.
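A rough sketch of the two kinds of metadata being compared (field names are real, values are illustrative, not the actual aumix files):

    # Fedora spec file: URL is reference-only; Source0 is used only if the tarball isn't already local
    #   URL:      http://example.org/aumix/
    #   Source0:  http://example.org/aumix-%{version}.tar.bz2
    # Debian watch file: used to spot new upstream versions, not consulted during the build
    #   version=3
    #   http://example.org/aumix/ aumix-(.*)\.tar\.bz2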

The source to aumix actually provides its own packaging file [5], provided by the authors. Apparently the URL used here is an FTP mirror, not the HTTP mirror provided by the earlier packagers. Could that be intentional or a mistake? And could they possibly be providing different source code, especially considering the hosts themselves are different?

It's clear that there's a lack of any defined standard of securely downloading the source used in packages, much less a way of determining if the original author's source checksum is the same as the packager's source checksum. There are several points where the source could be modified and nobody would know about it, before the distro signs it as 'official'.

[1] http://pkgs.fedoraproject.org/cgit/aumix.git/tree/aumix.spec... [2] http://pkgs.fedoraproject.org/cgit/aumix.git/tree/sources?h=... [3] http://anonscm.debian.org/viewvc/collab-maint/deb-maint/aumi... [4] http://anonscm.debian.org/viewvc/collab-maint/deb-maint/aumi... [5] http://sources.debian.net/src/aumix/2.9.1-2/packaging/aumix....


These are all good details about how much information various distros give me, the user, about the sources they're using for their builds. I certainly agree that it would be nice for them to give a lot more.

But this is still secondary to the basic point: as a Linux user, I get packages from my distro, not from the upstream source, so I don't have to go searching around the Internet for packages or package updates, wondering whether I've got the right source, wondering why there isn't an https URL for it, etc., which is what Windows users have to do according to the article (and OS X users too, for the most part, though the article doesn't talk about that). The distro does all that, and either I trust them to do it or I don't (in which case I go find another distro). The fact that the distro doesn't make it easy for me, the user, to see how they verify the sources they use, does not mean they aren't verifying the sources they use.

Also, while it's true that the distro verification process is human-fallible, as you say, and it would be nice if every OSS project made it easy for distros to automate the process instead, it's still a lot less human-fallible than having every single user go searching around the Internet for software. Distro packagers at least have some understanding of what they're doing, and they at least know who the authoritative source for a particular package is supposed to be without having to depend on Google's page rank algorithm.


Yes, Linux software is generally less prone to erroneous installs than Windows software, when it is done through your distribution. However, I think a parent commenter was pointing out how much easier it is to hack all of the users with this unified system of installation.

Is searching for, downloading and installing PuTTY actually resulting in users with malware-laden files? It would seem not, as the highest-ranking results for PuTTY are the official ones, and downloading/installing is a breeze once you get to the official page.

For software that's a more likely target for scams (like Firefox) you'll find a lot more user error and potential for abuse. And consider that many users may download and install Firefox by hand instead of using their distro (it's faster and less complicated). And, similar to the attacks on popular Windows end-user software, attacks on Linux server software (often a higher-value target) also result in users unknowingly installing insecure software, as we've seen in[1] many[2] cases[3].

Realistically the only thing keeping Linux more safe is that the user base and culture are different. But it would be naive to assume that somehow distro packagers are a more trustworthy source of files than the ones you could find on your own. It would seem to completely depend on the application and the user.

[1] http://www.darkreading.com/attacks-breaches/open-source-proj... [2] http://arstechnica.com/business/2012/02/malicious-backdoor-i... [3] https://security.stackexchange.com/questions/23334/example-o...



