If you could check for container signing and provenance on all materials, make sure that only a single registry is being used (e.g. only `internal.company.com:443`), and make sure it's not possible to schedule pods with unsigned/untrusted containers, that would be awesome.
I feel like installing a security tool by curling a random script off the internet and piping it into `/bin/bash` is a bit contradictory. Surely there's a better way to install this?
If you're smart enough to realize that there might be something to worry about, you should be smart enough to be able to figure out how to divide the command into three parts instead of one: download the script, inspect the contents, and then run the same inspected (local) script.
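The three-step version of the pattern can be sketched like this (the download URL is hypothetical, and a local stand-in file replaces the actual `curl` so the snippet runs offline):

```shell
# Step 1: download to a file instead of piping straight into bash.
# In real use this would be something like:
#   curl -fsSL https://example.com/install.sh -o install.sh
# Here we create a local stand-in so the demo runs without a network:
printf '#!/bin/sh\necho "hello from install.sh"\n' > install.sh

# Step 2: inspect the exact bytes you are about to execute.
cat install.sh

# Step 3: run the same local file you just read, not a fresh download.
sh install.sh
```

Because step 3 executes the very file inspected in step 2, a server that serves different content to pipes versus downloads gains nothing.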
Every time a project with curl | sh is featured on HN this comes up. At this point we might as well write a bot that scrapes submitted pages for "curl * | * (sh|bash)" and leave this comment for all of them.
> If you're smart enough to realize that there might be something to worry about, you should be smart enough to be able to figure out how to
This is gatekeeping 101.
Some people are just starting out in security/software engineering and things like this might not be obvious to them. It's good that you have suggested what to do but there are different ways to "suggest" things.
> Some people are just starting out in security/software engineering and things like this might not be obvious to them
That's fair enough. But I wish these beginners didn't then make claims like "security tools cannot be installed like this, it's insecure"; we would all be better off.
Either you know what you're talking about and you share your knowledge. Or, you listen and ask questions in order to eventually know what you're talking about.
> What? How?
In many hobby communities, whenever new people ask questions that are obvious to the more experienced, some of those experienced people become hostile, use hostile language, or say things like "you should know this, wtf" etc. A fairly recent example of what I mean, which I saw on Reddit: https://www.reddit.com/r/AdeptusMechanicus/comments/h7s5gw/g...
> That's fair enough. But I wish these beginners didn't then make claims like "security tools cannot be installed like this, it's insecure"; we would all be better off.
I understand where you are coming from but this is an eternal struggle with any profession.
> Either you know what you're talking about and you share your knowledge. Or, you listen and ask questions in order to eventually know what you're talking about.
Well, this is for users who do not want to work hard:)
If you wish, you can clone the project and build it yourself; another option is to download the file from the release URL.
It should be pretty simple to understand from the install.sh script.
Good luck :)
There is also an implementation difference between the two: while kube-bench requires installation within the cluster, Kubescape runs as a CLI from any computer using the Kube API, so it can be added to any CI/CD pipeline very easily. The latest version also lets you scan YAML files before you deploy them, so you know early on whether you are compliant.
CIS is very prescriptive. It gives you a list of very specific checks with very little context. NSA, on the other hand, explains the problems and potential attack vectors allowing you to adjust and extend the checks to your specific needs.
I actually came into the comments to make a joke about that very thing :-)
Even if you read the file first, you have no guarantee that the file bash gets from curl (should you copy/paste the commands in the readme) is the same thing. Yeah, it's way outside what anyone would ever consider "reasonable" to be that paranoid, but the mere mention of certain 3-letter agencies triggers any informed person's built-in "paranoid mode", and rightly so.
(This is in answer to the below question from another user. I’m too lazy to do two threaded responses on mobile. Maybe if mobile wasn’t such a shit platform with tiny ass screens and software keyboards that didn’t duck up everything you typed-ahh help there it goes again I swear to god Steve when I die I’m coming down there to kick your ass you son of a…)
If you are slightly more paranoid, keep in mind that the file you downloaded for manual inspection may not be the file you download automatically for bash-piping. This article from 2016 was making the rounds on HN about three years ago: https://www.idontplaydarts.com/2016/04/detecting-curl-pipe-b...
Of course for a properly, professionally paranoid person, you would download every component manually and store the received artifacts locally (for caching, reproducibility, TOFU principle and general BCP needs). Then build from those only. In fact, in high-trust environments it's common for CI systems to not be able to hit internet at all.
Well, this is exactly why it's open source. You can build everything yourself, check the scripts, etc. At the end of the day, all these tools are intended to help people find issues as early as possible.
The script downloads a binary blob and copies it into your bin folder. No hash check. If somebody can replace the binary blob, there's no security check before I execute it.
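The missing check could be as little as a published checksum file. A runnable sketch (file names are illustrative, and note the caveat: a checksum hosted next to the binary only protects against corruption, not a compromised host, so a real release would publish `SHA256SUMS` out of band or sign it):

```shell
# Stand-in for the downloaded binary blob:
echo 'binary payload' > kubescape

# Publisher side: generate the checksum file shipped with the release.
sha256sum kubescape > SHA256SUMS

# Installer side: verify the download before copying it into bin/.
# Prints "kubescape: OK" on a match, fails non-zero on a mismatch.
sha256sum -c SHA256SUMS
```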
The NSA is much like the FBI: it has multiple divisions with different purposes. The NSA has one group that is specifically focused on the security of US public and private entities; they help set standards that the rest of the US bureaucracy follows to keep their systems secure. On the other hand, the NSA also has a hacking arm that is supposed to go after foreign adversaries, and this is the group that will use zero-day exploits.
> specifically focused on the security of US public and private entities
> this is the group that will use zero-day exploits
So, the initial comment was right: the NSA does keep zero-days to itself, which means other missions are evidently more important to it than the security of US public/private entities. Otherwise it would want those vulnerabilities patched.
There is a vast community that thinks it's bad and unsafe (with examples of how to detect direct piping to bash in order to serve malware), and a few people condoning it. That seems like saying there is currently a debate on the roundness of the planet Earth.
Yes, it's exactly like that, but not in the way you think.
One side is a small-but-vocal minority making silly arguments. "how to detect direct piping to bash in order to serve malware"? An attacker who tries to serve different things to different people will easily be caught, and a simple diff will highlight their exploit. A much more robust attack strategy is to serve the same malware to everyone but obfuscate it. Make the vulnerability look like an innocent bug, and you have plausible deniability. This same attack works for every approach to software distribution, it's not unique to curl|bash.
The vast majority of pragmatic people just don't think there's an issue here, don't find these arguments convincing, and don't care to argue about it. I run the Sandstorm project, which uses curl|bash, and this issue really hasn't impacted adoption. Our users aren't naive, they understand what curl|bash is, but they also recognize that obviously by installing our software they are giving us arbitrary code execution. Users who don't trust us install Sandstorm in a separate VM -- the only reasonable way to run software you are suspicious of.
The problem is that doing this for a specific project you really trust is fine. But what you're actually teaching inexperienced programmers is that this is true in the general case. Would you run a one-liner like this if I sent it to you and it was hosted on GitHub?
The very intelligent people at Docker used to have this pattern at get.docker.io. They changed it.
And as for obfuscating the attack, I think typosquatting in package managers like npm and pip might be a blueprint.
I really don't think anyone is being taught anything misleading here. curl|bash makes very clear that it's running arbitrary code as you, and I think almost everyone who invokes it does in fact understand that.
If anything, I think package managers create a false sense of security, especially ones that allow anyone to publish packages and declare dependencies on anything else. Installing an npm package -- even one from someone you trust -- could very easily transitively install something malicious. Debian packages are arguably better in that not just anyone can push packages to Debian's main package repo, but there's still a very real danger of a hidden backdoor. But I think a lot of people just assume installing from a package manager is not dangerous.
> The very intelligent people at Docker used to have this pattern at get.docker.io. They changed it.
Probably not for security reasons, though. There's a separate, very legitimate argument for distro packages, which is that they implicitly promise to play nicely with your distro's conventions and install themselves in a clean, manageable way. How to update or uninstall a distro package is well-defined. None of this is actually guaranteed, of course -- a package can do whatever it wants in an install hook. But assuming you trust the developer isn't malicious, then you can get some comfort knowing the package is unlikely to screw up your system.
curl|bash provides no such promise, even implicitly. Who the hell knows what it might decide to do. Maybe it has one of those `rm -rf $SOME_VAR_THAT_MIGHT_NOT_BE_DEFINED/*` bugs like Steam did at one point.
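That Steam bug class can be guarded against with POSIX parameter expansion. A sketch (variable name borrowed from the incident; the subshell is there so the demo itself survives the abort):

```shell
# The infamous bug shape: if STEAMROOT is unset or empty, the commented
# command below expands to `rm -rf "/"*`:
#   rm -rf "$STEAMROOT/"*
# The ${var:?message} expansion makes the shell abort with an error
# instead of expanding to an empty string. Run it in a subshell so the
# failure doesn't kill the surrounding script:
unset STEAMROOT
( rm -rf "${STEAMROOT:?refusing to run with STEAMROOT unset}/"* ) 2>/dev/null \
  || echo "aborted safely"
```

The `:?` form is specified by POSIX, so it works in plain `sh`, not just bash.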
Sandstorm is careful to avoid messing up anything, installing itself to a self-contained directory, and providing an uninstall script. But the user doesn't really know that in advance.
The reason to use curl|bash in spite of this is because distro packaging is a lot of work (there are a lot of distros!) and may not be the best use of time for a small project. Docker is definitely big enough to do it, though.
So yeah, there's definitely a cleanliness argument. But the security argument is bogus.
It's still downloading and executing a script, just two steps instead of piping the output straight into bash.
You can't even inspect the code you run when you do curl|bash, and the server is able to detect the difference and serve different code when you curl first and then run bash.
You got me. The last time I installed Docker, I remember doing it using packages, so I assumed that was the replacement. Looking at their comment, it looks like they got tired of people arguing about it, so they changed the instructions to sidestep the argument. I don't think they seriously believe this made a security difference.
> You can't even inspect the code you run when you do curl|bash, and the server is able to detect the difference and serve different code when you curl first and then run bash.
I addressed exactly this argument two comments ago.
> Your argument is: "we do it and we use GitHub, you can trust us and can trust GitHub, you don't need to verify the code you run."
When did I ever say anything about GitHub? This isn't my argument at all.
> And devs learn: "Trust me, you don't need to verify the code you run."
Come on, nobody actually verifies all the code they run.
Ultimately, the only insecure thing about curl-to-bash is if you don't trust the server — the server in this case being Github. I think few people or companies include Github-being-malicious in their threat model; if Github was malicious, any build artifacts produced by even cloning from Github would be potentially suspect, since Github could fairly easily detect automated build agents and serve them different content (especially if the build agents were integrated with Github Actions, which is fairly common these days — or even if they were just using Github webhooks). If you do continuous delivery, or even if you just do continuous integration and then don't have a human decompile and audit every line of the build artifacts after they're created, you de facto trust your repo host; for most people that means Github.
I'm not going to go into the curl|sh debate, but there is actually something wrong about what you described. Trusting the server doesn't apply to Github at all.
Github is technically like a bazaar or mall. It's wrong to assume that if you trust a store in the bazaar you then trust ALL stores in the bazaar. You personally may, but then I strongly question your sense of security. Sure, you trust that Github doesn't add malicious stuff when cloning, but each repo on Github is kinda its own "server", to fit into your analogy.
The link being used in the curl-to-bash example from the original discussion is a direct link to content in the repo itself — so if you trust that repo server for its content, it doesn't matter whether you curl-to-bash or not. If that Github server is malicious you're in trouble.
FWIW though, if Github servers have a vuln that allows attackers to modify repo content being served on a per-request basis, it's pretty likely that many repos (or even all repos) would be vulnerable, not just one. From a security standpoint I'd consider Github repo servers to be a tier of servers that you do or don't trust; saying that you trust one "server" within the tier but don't trust another doesn't make a lot of sense, especially since you don't actually know the topology behind their DNS and the "servers" are virtualized, colocated, and parts of them are ephemeral. What exactly are you trusting? Other than "I sent a request to Github and it gave me a response" there's not much you can say for certain — the request could've been routed to any number of repo servers, that may be hosting dozens of other repos on them as well. The server that responds today is likely a different virtualized server than the one that responds tomorrow, just attached to the same filesystem. But even if that weren't true, curl-to-bash in this example seems ultimately fine for most people.
If I were doing devops work that required Kubescape for an org, I would definitely package it into a deb (or whatever package format the org used internally) and install it that way rather than curl-to-bash. But that's not because of security gains from a deb vs a script, that's because I'd want to lock it to a stable version rather than always installing latest, and I wouldn't want to tie availability of Kubescape to Github repo servers being up (they do go down sometimes). But for getting started playing around with it, curl-to-bash is low friction, can easily support multiple OSes as compared to debs, and seems generally fine as it is used here.
OP here. Of course, if the parties along the way are trusted, it's safe enough. But it is not durable to that trust being broken, e.g. if the user has learned to do this and ends up, through typosquatting, on a malevolent fork of the code.
If it's okay on GH, we train the user to think it's okay on less secure sites.
Considering this practice is in widespread use, I think you might be overestimating the vastness of this "community" (community of what? anti pipe-to-shell community?) "More people are saying A rather than B" is not an especially convincing argument in the first place.
Right. Actually, since it's hard to tell if the Earth is round by observation, I'd say it's more like a debate on whether the Sun goes around the Earth. You can just look up in the sky and see it move; the folks saying otherwise are obviously cranks. There's no need to listen to their arguments.
There is "debate" on whether vaccines give you autism... I mean, debate existing doesn't make it a good idea or not worthy of criticism..
Most people who defend curl|bash as being a fine thing rely quite heavily on the notion that TLS is infallible.
But there's a lot more danger to it than that: half-downloaded scripts, or scripts which check your user agent and change content depending on the method of download.
It's just a stupid foot gun, best to avoid aiming guns at feet.
Every Linux distribution for the past 20+ years has been using public key signatures to verify software before installing it. This has the benefit of a system already having a trusted key to verify against, so packaging with a Linux distro not only makes installation simpler and more reliable (mirrors for redundancy), it also makes verification simpler. However, it is also trivial to do independent verification.
1) Generate a GPG private key.
2) Upload the public key to a public key server.
3) Sign your software with the key and generate a .asc file.
4) Upload your .asc file along with the software you signed to your hoster of choice.
5) Have the user copy+paste a shell one-liner that will download the files, download the public key from the public key server, import the key, verify the files are signed correctly, and install them.
The end result is cryptographic verification of your software before installation (using a 3rd party, or possibly multiple 3rd parties if you use multiple key servers) and the user only had to copy+paste a one-liner.
It's not as cool as piping to bash, but it's more secure, and more reliable (what happens when your download fails halfway through this bash pipe? what happens if bash hangs halfway through? what if you want to install offline?)
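The sign-and-verify steps above, demonstrated entirely locally (key identity and file names are made up, and the keyserver round-trip is replaced by the key already being in a throwaway keyring):

```shell
# Use a throwaway keyring so nothing touches the real one.
export GNUPGHOME="$(mktemp -d)"

# Steps 1 and 3: generate a key, sign the release, produce a .asc file.
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-gen-key 'Demo Dev <dev@example.com>' default default never
echo 'echo hello' > release.sh
gpg --batch --armor --detach-sign --output release.sh.asc release.sh

# Step 5, user side: verify the artifact against the trusted public key.
gpg --verify release.sh.asc release.sh && echo "signature OK"
```

In the real workflow the user would first fetch the public key with `gpg --recv-keys` from a keyserver, which is exactly the third-party anchor the parent comment describes.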
In all of these cases you also outsource the security to someone you trust. How is that different from curl https... | sh? Who says that no one took over GitHub, or the Debian package?
What's the advantage of binary packages through GitHub releases? How do you audit them?
I'm aware of the fact that you can detect curl | bash server-side, and it's a neat trick, but I don't understand the security risk of it. The server is supplying you with arbitrary content that you're not auditing - what does it matter if it supplies you different arbitrary content?
What's the advantage of the GPG approach? Last I checked, the GPG command was capable of signing malicious binaries.
I do agree about the configuration argument. But that's not a security argument.
I think you may be conflating the application owner and the delivery system. If we're installing the application I think we're implicitly trusting the author.
If you copy/paste http instead of https then you've given execution control to every single middlebox along the way.
If the code is hosted on an evil sourceforge, then you've given them execution control.
deb packages will do signature checks, and many authors will list checksums in their releases, which we can use to verify.
All these arguments apply no matter the packaging format. Install scripts can be signed and checksummed too. Only by having your package put into a repository already trusted by the user in advance do you solve these problems, it's not an issue with the packaging format being a shell script versus a deb package.
I don’t think you understand how signed packages work. They give you the ability to be sure you are installing software packaged by an entity you trust, regardless of where you get the package.
I understand. That only moves the problem to where you get the signature from; it doesn't solve it (unless you are using a central repo you already trust, for example). Therefore binary packages shipped on a GitHub releases page alongside a signature file still face the same issue: an attacker simply has to replace both the package and the signature with ones they control. It has nothing to do with the packaging format being a deb file instead of an install script; the issue applies in either case.
Public key servers. Upload your key to 3 different key servers (keyserver.pgp.com, keyserver.ubuntu.com, pgp.mit.edu). Have the user download it from all 3, verify they're the same, verify the package signature with the public key. People already do essentially this (but from one key server) when they add custom PPAs to a distro.
In order to hack that package, someone would have to either A) steal the signing key from the developer's airgapped laptop, or B) hack github.com and 3 independent key servers.
The point is not whether it's a solvable problem. The point is that it's not conventional to go to such extents, no matter what kind of packaging you use. This kind of process could be done with either install scripts or deb/rpm packages with nearly the same level of effort, but nobody conventionally does this kind of thing with either (maybe with the exception of stuff that's in a central repo, where you do get SOME protection by default). So it's wrong to say that install scripts are the cause of the problem. The common pattern of "curl | sh" is just as bad as the common pattern of "download a deb and install it". It's not an improvement to simply shun install scripts without solving the actual trust issue.
Nobody is shunning install scripts. You can still have an install script. Just don't pipe it into bash from curl.
This is not like some kind of "normal" software distribution pattern. Every other modern OS in the world has solved the trust issues by either verifying the software is signed, or requiring you click some button that says "I acknowledge that I am about to totally fuck up my PC with this untrusted software". The Linux distros verify package signatures, Windows verifies exe signatures, Macs do too (afaik?), Android and iOS do.
Curling to a pipe is just devs being lazy. The trust issues were solved a while ago. Not that devs being lazy is anything new. Literally the only reason anyone can use Linux at all without spending 2 weeks setting it up by hand is because somebody other than the software developers did the hard work of packaging it correctly. The curl|bash pattern is just the `./configure && make && make install` of modern devs. (But even then you could still verify the tarball signature before untarring it)
No, you don't get it. Once you have the root of trust, you can download new signed packages in perpetuity and know they came from the developer. They can be delivered over http/smtp/telnet/BitTorrent/ftp/whatever.
You can literally pull it from a compromised machine with an active attacker and it doesn’t matter. It either has integrity and it’s safe or it fails to install.
That’s a huge difference from encouraging people to exec curl statements from random websites loading JavaScript from ad networks, trackers, etc, etc. You’re not only trusting the original dev, you’re now having to trust the web host and all of the other locations the page sources JavaScript from.
This is why any system serious about security uses signed packages. There are way too many systems/parties in between that can knowingly or unknowingly compromise https downloads: Windows updates, Linux updates (for the majority of package managers), iOS updates, Android updates, etc.
The trust model of “curl | sh” is severely worse than signed packages. They are in no way equivalent.
This only solves the problem of trusting updates, not the initial installation. For new software, "curl | sh" is no different from installing a random deb package from GitHub.
It solves initial installation too if you trust particular roots.
There is a reason windows/android/Linux distros/iOS do signed software.
This problem was known about and was solved 20+ years ago with signed updates. “curl | sh” is back in vogue because people don’t understand the problem and think https means secure.
That is what I mean about it only being better in the limited scenario where you are getting the package from an already-trusted central repo. But that is surely not the case in this particular situation for example.
deb packages will not do signature checks. The signature checks for anything deb-based are rooted in the repository: the repository's package list contains the checksums for the packages plus the signature of the repository maintainer. If you fetch a single package from a website and install it, no signature is present; any check would need to happen out of band, by explicitly verifying a signature published elsewhere. RPM packages, however, can embed a signature.
The only thing the installer script is doing is autodetecting your OS, picking the correct binary release, downloading it, making it executable, and copying it to /usr/local/bin. The installation instructions could just as easily tell you to do that. As with most go programs, the entire thing is just one file. No libraries, no config files, no dependencies.
Heck, wrapping it in a deb, rpm, and tarball for pacman would be trivial; the whole installer is a one-liner: install -m755 <program> <prefix>/bin. I guess single-application developers just really don't like package managers.
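For a single static binary, the entire "package" really does reduce to that one command. A runnable sketch (names and paths are illustrative, staging into a local directory rather than the real /usr/local/bin):

```shell
# Stand-in for the compiled single-file binary:
printf '#!/bin/sh\necho mytool-demo\n' > mytool

# The one-line "package": copy with the right mode into a bin directory.
# A real deb/rpm would use its staging root here the same way.
mkdir -p pkgroot/usr/local/bin
install -m755 mytool pkgroot/usr/local/bin/

# The installed copy is executable and self-contained:
pkgroot/usr/local/bin/mytool
```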
Using cosign/sigstore to sign release artifacts (https://github.com/sigstore/cosign), with instructions for validating the signature on download, could be a good option here.
So if I were to make a daft assumption that frankly, no one needs DNS, and then the NSA were to bring out a DNS infra hardening guide then I would also be correct?
Kubernetes is way too overengineered. It should be as simple as docker compose.
And Kubernetes also added StatefulSets for things like databases, but all the Kubernetes gurus recommend against using them in the cloud, in favor of databases provided by the cloud providers :facepalm because, well, it's too complicated...
Any universal platform or toolkit is somewhat (or very) overengineered, but that is the name of the game. Flexibility comes at the price of forcing you to invest in configuration, automation, deployment and maintenance. Then security comes to close the gaps. Every player in this game is honestly trying to be better and help others. Kubernetes is a fact of life, and rightfully so. It needs helper tools in several areas; security is just one of them, and Kubescape is just one step of many...