
That's a pretty common thing. I don't see the problem.



Docker addressed this last week at dockercon, announcing Docker Notary to securely publish and verify content.

https://github.com/docker/notary
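
The way this surfaces to end users is Docker Content Trust, which is built on Notary; roughly (the image name here is just a placeholder):

    # With content trust enabled, pulls of unsigned or tampered images fail
    # instead of silently succeeding.
    export DOCKER_CONTENT_TRUST=1
    docker pull someorg/someimage:latest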


It being a common thing is the problem. It's teaching insecure habits.


OK, I wonder how many people actually bother to check checksums after downloading binaries?


Many do, likely without even realizing it. It's common functionality in Linux package managers.
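
And if you want to do it by hand, it's only a line or two (the file names here are just examples):

    sha256sum meteor-installer.tar.gz    # compare the digest against the one published on the site
    # or, if the project publishes a checksum file:
    sha256sum -c SHA256SUMS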


What do you think about this solution: introduce a layer on top of SSL that just verifies whether the private key of a certain explicitly stated site has signed a file?

In other words, compromising the server wouldn't be enough, because that doesn't give you the SSL key, so it would still fail "curl|is_signed_by site.com|sh", which they can only pass if they compromise the private key?

Better than the current system?


> compromising the server wouldn't be enough, because that doesn't give you the SSL key

But it does. The server needs to have the SSL key to be able to serve requests over HTTPS.

It may be encrypted with a password, but at that point you're severely degrading your integrity assurances (compared to offline executable/archive signing). Might as well do it right with offline signing, right off the bat.


Interesting. What if a script just uses the SSL infrastructure to get the private key associated with a domain name, without actually needing anything at that domain name to come over SSL? Then the private key does not have to be live/online at all, but could be used to verify the shell script. This is getting complicated, but if there is infrastructure, it should be possible to use it.

Personally I think curl of an https URL is not the worst thing in the world.


I think you need to read up a bit more on how asymmetric key cryptography works :) Verification is done using the public key, the private key is used to sign something. That's why it's so useful. This is a good read if you want to learn more: https://www.crypto101.io/

Basically, the separation between 'server serving the downloads' and 'machine signing the release' is intentional, and should be maintained. Consider it an 'airgap' of sorts, although it usually isn't one in the strictest sense of the word.

Making release signing depend in any way on the SSL infrastructure (which is already rather broken in a number of ways) is a bad idea. Verification is a different story, but secure code delivery is a hard problem anyhow.
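
To illustrate the separation, the usual shape is something like this (file names are hypothetical):

    # On the (ideally offline/airgapped) release-signing machine:
    gpg --armor --detach-sign meteor-1.2.tar.gz    # writes meteor-1.2.tar.gz.asc

    # On the user's machine, with the publisher's *public* key already imported:
    gpg --verify meteor-1.2.tar.gz.asc meteor-1.2.tar.gz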


I understand exactly how it works. How do you get a code-signing paradigm down to something as simple as curl | sh though? (Well not as simple, but still a human-readable one-liner that works on nearly all Linux systems.)

I thought maybe a single-line invocation might piggy-back on SSL as follows:

- get a server's public key that is not online or able to answer requests (because if it were it couldn't be airgapped)

- but still use the key to verify the script that's downloaded from the server that is online.

- only pass the code to sh if it was properly signed by the offline server.

Then the offline server could be "https://key.meteor.com" and the private key wouldn't have to be anywhere but an airgapped machine.

I don't know if there is more of the SSL infrastructure that I'm missing though (I'm not an expert) or if this could practically be reduced down to a tamper-evident one-liner (a la curl https://install.meteor.com | sh). It would be a marked improvement over just passing anything from a potentially compromised server straight to bash though!


> How do you get a code-signing paradigm down to something as simple as curl | sh though? (Well not as simple, but still a human-readable one-liner that works on nearly all Linux systems.)

You don't, really. Not currently anyway. Retrieving a binary/archive and doing out-of-band verification are two logically separate steps.

The problem with your suggestion is that SSL is about transport security. It verifies that you are talking to the right server, but does not provide any guarantees beyond that.

It's not really possible to shoehorn release signing into that, without additional infrastructure.

It doesn't matter how you combine things - the server and the signing system are (and should be!) two separate entities, and you cannot rely on the server to tell you who the signing system is (because that'd give you no better security than not having a signing system at all).

> It would be a marked improvement over just passing anything from a potentially compromised server straight to bash though!

It wouldn't, because as far as I can tell, you're still relying on the server to tell you what the correct release signing key is. How else would you obtain it?


Thanks - drop me a line and I'll reply; this thread is getting old and deep. Thanks for your thoughts though, and I hope you do write.


That's exactly what Docker Notary is.


[flagged]


I am downvoting them because shell piping is not relevant to the Meteor 1.2 announcement, this topic has been discussed on multiple occasions, and they offer no alternative.


Sure it's relevant.

They're announcing "hey we have these new features" and I'm saying "hey look, they still haven't fixed this big glaring problem that is very much relevant to how their software is used".


Downvoting because it's a non-issue. If I'm copying and pasting something from the web into my terminal, it's because I trust the source.

It's not any less secure than downloading a tarball and running scripts inside of it. That's equally insecure and people have been doing that since the dawn of the internet.


Hack their webserver, replace the contents of https://install.meteor.com/ with malware, instantly pwn anyone who pipes that to their shell.

Worse: the people who are most likely to curl|sh are DevOps folks with the keys to their company's kingdom.


What do you want them to do? The obvious solution is to change it from "curl|sh" to "curl|{something about whether PGP says this is properly signed by the private key belonging to public key blahblahblahblahbalhMETEOR.COMkey. If yes:}|sh"

But the problem is anyone compromising the site can just change the line from "blahblahblahblahbalhMETEOR.COMkey" to "attackerchangedblahblahblahblahbalhMETEOR.COMkey" right on the web page, and people will copy the one verified against the wrong key. So that doesn't work.
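
Spelled out, the imagined one-liner would look roughly like this (the verify tool and fingerprint are made up):

    curl https://install.meteor.com/ | verify-signed-by 0xABCDEF0123456789 | sh
    # ...but that fingerprint came from the same (potentially compromised) page you
    # copied the command from, so the attacker just swaps in their own.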

Nor do clients have caches of PGP signatures, nor is there some totally obvious third-party that you can verify it with. You can't just go:

curl|{check_if_signed_with_www.this-site.com}|sh (which would pass visual inspection - the attacker would have to change www.this-site.com to something else) because there is no obvious mechanism to do that. Who will tell you whether https://install.meteor.com/ has signed it?

Well, HTTPS will kind of tell you. So "https://install.meteor.com/" is a lot better than nothing...

If you're going to entertain the idea of the HTTPS site being compromised to serve whatever they want, well, there is precious little you can do about it.


> What do you want them to do?

I want them to not use a one-liner. Step-by-step:

1. Download the files

2. Download the public key

2a. Verify the public key if you've never seen it before (publish in the blockchain, have lots of high profile technologists sign it, etc)

3. If the verification matches, then proceed.
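
Roughly, in shell form (the URLs, file names, and key ID here are placeholders, not Meteor's real ones):

    curl -O https://install.meteor.com/install.sh       # 1. the files
    curl -O https://install.meteor.com/install.sh.asc   #    plus a detached signature
    gpg --recv-keys 0x0123456789ABCDEF                   # 2. the public key
    gpg --fingerprint 0x0123456789ABCDEF                 # 2a. compare against out-of-band sources
    gpg --verify install.sh.asc install.sh && sh install.sh   # 3. proceed only if it verifies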

Teaching developers to value "clever one-liner hack" over "secure, dependable solution" will lead to bad habits.


If you're going to include "2a", you can refactor all of your steps into:

1. Google "meteor.com compromised" and decide whether it's currently compromised. If it isn't:

2. Run curl https://install.meteor.com|sh

It saves a few steps and is equally secure - you know, since you're just going to go based on what other people think and include no programmatic check whatsoever (your 2a).


2a can be swapped out for a better PKI system at any time. Relying on whether it's public knowledge that Meteor is compromised or not is not nearly as resilient.


So swap it out for a better PKI system. There is literally nothing in any of your steps that can't be automated, except for the totally nebulous 2a ("publish in the blockchain, have lots of high profile technologists sign it"), which 9/10 people are not qualified to judge.

There is no reason you couldn't automate your whole suggestion, except for that one, which makes it infeasible and open to all manner of social engineering.


Commit a race condition to glibc, musl, or uclibc and fuck up almost every piece of software on the planet.

It's convenient, and that does not mean it's good practice, but I doubt using another method would minimize the risk if meteor.com actually got owned.


> I doubt using another method would minimize the risk if meteor.com actually got owned.

GPG signing, keep the private key offline, publish the public key in the blockchain and have a lot of high profile technologists sign it so it can be independently verified.

See also: PHPUnit. https://phpunit.de/manual/current/en/installation.html#insta...

(They provide an example shell script for quickly downloading and verifying the latest versions of their install)
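
For the "independently verified" part, in GPG terms that's roughly (the key ID is a placeholder):

    gpg --recv-keys 0xFEDCBA9876543210     # fetch the release key from a keyserver
    gpg --fingerprint 0xFEDCBA9876543210   # compare against fingerprints published elsewhere
    gpg --check-sigs 0xFEDCBA9876543210    # see which other keys have signed it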


I completely understand your point.

But in the end it's about people... your example with PHPUnit can be abused like this: https://thejh.net/misc/website-terminal-copy-paste How many people do you think will bother to paste the script into a text editor and check it for evil parts?


Anybody who runs unverified code, through any medium, when the option to run trusted code is available, deserves to get pwned.

https://github.com/paragonie/password_lock/blob/master/run-t...

^- For the record, I keep scripts like this in my Git repositories.



