Not a bash bug (lisp.org)
348 points by informatimago on Sept 27, 2014 | 185 comments



> This feature is documented under the -f option of the export built-in command. The implementation detail of using an environment variable whose value starts with "() {" and which may contain further commands after the function definition is not documented, but could still be considered a feature.

This undocumented implementation detail is also a limitation on the use of regular environment variables, and should be documented. When reading documentation about a mechanism, I expect that special magical strings which change the behaviour of the mechanism are clearly documented. If such documentation had existed, someone might have noticed it and guarded against it.

> Presumably programs like Apache filter out environment variables properly. But unfortunately, they fail to correctly validate input data, because they don't expect that data starting with "() {" will be interpreted by their bash child processes. If there's a bug, it's not in bash, but in Apache and the other internet-facing programs that call bash without properly validating and controlling the data they pass to bash.

It isn't easy to validate and control data against an unknown magical feature in one of many possible shells.

> But on the other hand, it is free software and not difficult to check the source to see, as plain as the nose in the middle of your face, what is done. When reusing a component with missing specifications and lacking documentation, checking the source of the implementation should be standard procedure, but it has clearly not been done by the Apache or DHCP developers.

I think the shell is specified in POSIX/SUS. Checking the source of all possible open-source shells would be a huge job, and I don't know how they could check the source code of closed-source shells. I don't blame them for using environment variables according to the available documentation.

Edit: typo


I agree. This is an interesting idea, so I upvoted the paste. But I don't think this author knows how deeply the bug runs, either; the most recent way to exploit it is to export an environment variable named after a command, say ls, whose value is a bash function definition. [1]

Usually the set of toxic environment variables is considered to be finite: PATH, LD_PRELOAD, etc. If the name of any executable on the PATH is dangerous, then the number of toxic environment variables is effectively unbounded -- are we to scan the entire PATH for each environment variable to make sure it isn't dangerous? What if the CGI script updates PATH?

There is no way to solve this problem with sanity checks. I've yet to peek at the source, but I'm told this feature is vital to implementing things like backtick operators. I think it is too dangerous, however, and I don't want shellshock to become a class of bug rather than an instance of toxic environment variables. We're going to have to rip this feature out and re-implement large portions of functionality.

The author is right that this is a product of bash being written in a more trusting time. This is neither the first nor the last time the 1970s security models will come back to bite us.

edit: forgot reference:

[1] http://seclists.org/oss-sec/2014/q3/741

edited to add:

Also, Apache does have a mechanism to filter out toxic environment variables; headers are added as HTTP_HEADER_NAME (e.g. User-Agent becomes HTTP_USER_AGENT), because it's generally the names of environment variables that make them dangerous, not their content. Executing code as a result of parsing the value of an environment variable with no special meaning is a vulnerability.


> But I don't think this author knows how deeply the bug runs, either; the most recent way to exploit it is to export an environment variable named after a command, say ls, whose value is a bash function definition.

If you can set arbitrary environment variables, you're pwned and have always been pwned. You can set all manner of interesting things, including LD_PRELOAD, to control the execution environment and potentially execute arbitrary code.

EDIT: Putting random data in an environment variable where you pick the name should always be secure, though, which is an assumption that most of *nix makes.


But the problem with Shellshock isn't random environment variables. It's random environment variable VALUES in well-defined environment variable names. It's pretty well known that there are certain dangerous environment variables (like PATH, LD_PRELOAD) that should not be blindly set. But CGI only sets CGI environment variables like PATH_INFO, as well as HTTP_-prefixed ones. That even these can be dangerous, because bash executes code in *any* environment variable's value, is completely unexpected.
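
A minimal sketch of that surprise, assuming an unpatched bash (the harness here is Python; the variable name is arbitrary and harmless, only the value matters):

    import subprocess

    # HTTP_USER_AGENT is just a normal CGI variable; on a pre-patch bash
    # the "() {" prefix in the VALUE is parsed at startup, and the
    # trailing command runs before the intended one.
    env = {"PATH": "/usr/bin:/bin",
           "HTTP_USER_AGENT": "() { :;}; echo vulnerable"}
    subprocess.call(["bash", "-c", "echo innocent"], env=env)
    # a vulnerable bash prints "vulnerable", then "innocent"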


Is this really a loose typing issue? We give Bash data that should be of type "display text" (a sub-type of string, I suppose) and it treats that data as type "executable command" (also a sub-type of string).

Would it be possible to wrap/tag input to bash so that bash only thinks of exec-ing an env variable when a program/script sets it with a string that's typed as "executable"? I guess that removes some of the hackability and would need a major rewrite of bash.

I'm a layman trying to do CS ... what could possibly go wrong!


The issue is that the environment isn't a bash-specific thing. Anything can and regularly does set environment variables, and there's no space in there to set a flag for "this is executable" - if it's in the value, anything can set that flag, and the problem here is triggered by programs setting environment variables from external data.


Plenty of other shells support backticks without the "export -f" magic. They must, as backticks are mandated POSIX behavior; few support "export -f" at all. (And at least one that did, the old Bell Labs post-v7 "Research Unix" shell, used only environment variables with embedded characters which couldn't easily be created by normal means, to avoid the risk of "magic processing" on things like TERM and HTTP_FOO.)


used only environment variables with embedded characters which couldn't easily be created by normal means

    SOMEVAR="`cat some_binary_file`"


No, not like that; something more like this:

    ()SOMEVAR@%=...

You will get a parse error. There is little more than [a-zA-Z0-9_] you can use in identifiers (except bash adds a few more, grrrr). You can probably pull it off with /usr/bin/env, though.


"the most recent way to exploit it is to export an environment variable of, say, ls to a bash function."

Even before the Red Hat patch you would need something to set echo='() { ...', but how will an attacker do that when they can only set something like HTTP_USER_AGENT='() { ...'? See how overriding a builtin is not, and never was, a vulnerability?


I agree with all your points.

I think the real bug is that all this stuff calls out to a shell at all. Sure, it's convenient, but it's basically eval().


There are two things to differentiate, in my opinion.

In most cases, the shell is just used to find programs in the PATH when a C programmer uses system(). And for that case, which is probably 99% of the time when /bin/sh is being invoked, it would make perfect sense to implement this with something that exhibits less attack surface.

Taking the "dhcp-exploit" as an example (set a DHCP option on your server to "(){...}; exploit;"), I think it's less clear: Implementing the functionality of updating configuration files according to the DHCP options sent is a prefecty reasonable place to use a script written in sh/ksh/bash! It's easy to implement by any sysadmin, works very reliably with a little care, and performance-wise it's not critical at all.

And regardless of the language you implement it in: There's some place where user input has to be sanitized. But up to now, it was considered common knowledge that arbitrary data in an environment variable is safe as long as the variable's name adheres to some convention (prefix them all with PROGNAME_...). And bash doesn't respect this convention, by looking at variable CONTENT, even though I'm pretty sure the convention was already established when the bash project started... (see, for example, the handling of "special" variables like LD_xxx in suid programs or the dynamic linker)


> when a C programmer uses system()

I said it in another thread but this is almost always a mistake. The execve family is much less ambiguous about what gets passed to the program. Using it avoids this type of bug by not putting the shell where it doesn't need to be.
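
To illustrate the same contrast in Python terms (shell=True being the moral equivalent of C's system(); the filename is a stand-in for hostile input):

    import subprocess

    filename = "nosuchfile; echo pwned"  # stand-in for hostile user input

    # system()-style: the whole string is re-parsed by a shell, so the
    # injected command after ";" actually runs.
    subprocess.call("ls -l " + filename, shell=True)

    # execve-style: each argument reaches ls verbatim; no shell parses
    # anything, so ls merely complains about an odd filename.
    subprocess.call(["ls", "-l", filename])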


And it's not limited to C. E.g. I would be in favor of removing os.system from Python (in favor of subprocess.call). The `-syntax (backtick syntax) in Ruby is particularly evil. It's so convenient because it is so concise, but I guarantee you that it is the source of a lot of vulnerabilities. It should be removed ASAP. I think that's kind of a theme in Ruby: is it convenient? Then put it in. But I would have expected more from Python.


subprocess.call is also vulnerable to this, though. It calls out to bash.


Vulnerable to what? The environment variable problem? I was talking about program argument parsing. os.system("ls %s" % foo) != subprocess.call(["ls", foo])


Ah, I misunderstood then. I agree with you on that point. I assumed you were talking about "Shellshock".


I believe you would need to explicitly pass shell=True for that though.


Nope, it's not necessary. Test it with a vulnerable CGI app and call:

subprocess.call(["date"])

Or if bash is not your default shell:

subprocess.call(["bash", "-c", "date"])


Did you read the next sentence?

> And for that case, which is probably 99% of the time when /bin/sh is being invoked, it would make perfect sense to implement this with something that exhibits less attack surface.


I did. I did not find it explicit enough; there was no specific recommendation, for example. Moreover, seeing the phrase "when a C programmer uses system()" is pretty jarring. There aren't enough warnings you can add to that to convey how much this gets misused and what a bad idea it usually is.

To me, use of system() is very indicative that you need to find another C programmer. There are few other answers to complete the phrase "when a C programmer uses system()".


Well... that's a pretty drastic reasoning, leaving aside all weighting of facts. Does it also apply to a Haskell programmer running System.Process? ;-)

The fact is: system() and all its relatives (popen comes immediately to mind; there are doubtless 100 others) have been used, and will be used, by 'incompetent' programmers[+], and as long as no other method is as widely established (and even taught in introductory textbooks), we had better provide a workaround that closes most of the holes.

[+] or just programmers weighing the merits of having a parser supporting variable and home-directory expansion, courtesy of /bin/sh -c, built right in, which is completely adequate for many tasks. And yes, I know its limitations, and would not use it myself most of the time.


Yes, it isn't that hard to use exec*() to execute a single program, but it gets rather messy if you want to execute a series of piped commands.

Also another function to worry about is popen().
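
For what it's worth, a pipeline without any shell is wordier but mechanical. A sketch in Python (the same pipe/exec pattern applies in C with pipe(2), fork(2) and execvp(3)):

    import subprocess

    # Equivalent of the shell pipeline `ls -l | grep py | wc -l`:
    # each stage is exec'd directly, and only pipes connect the stages.
    p1 = subprocess.Popen(["ls", "-l"], stdout=subprocess.PIPE)
    p2 = subprocess.Popen(["grep", "py"], stdin=p1.stdout,
                          stdout=subprocess.PIPE)
    p3 = subprocess.Popen(["wc", "-l"], stdin=p2.stdout,
                          stdout=subprocess.PIPE)
    p1.stdout.close()  # so p1 gets SIGPIPE if p2 exits early
    p2.stdout.close()  # likewise for p2
    out, _ = p3.communicate()
    print(out.decode().strip())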


Look what I made you: https://github.com/panzi/pipes


How is it different to set some environment variables and then call out to a shell script, versus to set some environment variables and then call out to a perl script, or a binary compiled from C?


It's not. It's the "calling out" part that is wrong.

You should never call out to anything by passing untrusted user input directly. Any information that came from the outside must be explicitly passed as data through proper serialization mechanisms.

For instance, you don't piece your SQL queries together by concatenating strings. You use an abstraction layer, in which you code the query structure and pass user input as data. There is this extra step of saying "this is data, not code" that strips the external input of executability.
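
Concretely, with a parameterized query the structure is fixed before the input is ever seen. A sketch using Python's sqlite3, purely illustrative:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")

    evil = "x'; DROP TABLE users; --"
    # The ? placeholder keeps the input in the data plane: the statement's
    # structure was set at prepare time, so `evil` is stored as plain text
    # and is never parsed as SQL.
    conn.execute("INSERT INTO users (name) VALUES (?)", (evil,))
    print(conn.execute("SELECT name FROM users").fetchall())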

(for the same reasons, if your templating engine is just concatenating strings and not building the page out of trees, you're doing it wrong, but it's a topic for another day)

It's a problem you get when you believe in "the Unix way" a bit too much. Yes, everything is text, but no, not everything has the same semantics.


> You should never call out to anything by passing untrusted user input directly.

So if I call a CGI script with parameters foo=bar, what data should apache pass to the handler, if not something along the lines of the string "foo=bar"? When I pass the header "User-Agent: baz" and the handler asks for the user-agent, what should it be told if not "baz"?

Environment variables are data, not code. When apache executes a cgi script, whether it's C or perl or shell, it makes the user input available as data in defined locations.

There's a bug in bash which causes some of that data to be executed, but there's no way to protect against that class of bug.

This isn't a case of "you should have protected against SQL injection attacks". It's a case of: there is a bug in your SQL server, such that the query "select * from Users where username='rm -rf /'" will execute "rm -rf /".


The point of the OP is that if a program has chosen bash to be the handler of untrusted user data, then the program has made the wrong choice, because bash is clearly (hindsight! I'm not claiming I wouldn't have made the same choice) not designed for that purpose. A handler for untrusted user data should be a program specifically designed for that purpose, which should receive the data directly.

Similarly, if a Ruby or Perl script decides to call out to bash with untrusted user data, it's their mistake to trust bash with it, not bash's mistake that it wasn't designed for that use case.

It's perfectly possible to protect against this attack: don't call a generic program with untrusted user data.


So ruby and perl are specifically designed to be a handler of untrusted data?

How do I know what other programs are designed for such a task? What's a "generic program"? In this day and age, it is expected that pretty much all software ought to be designed with security in mind (not that it always is), because any piece of "generic software" (or just software) is otherwise going to be exploited. Especially on platforms where double-clicking a file is the expected way to open it.

More importantly, the point we are making is that we're not expecting bash to "handle" anything. It gets some data. It's not supposed to do anything with it on its own. Period.


> So ruby and perl are specifically designed to be a handler of untrusted data?

Perl actually is when used in taint mode. http://perldoc.perl.org/perlsec.html


Yes and no. You can still unintentionally call out to bash if you, say, protect your PATH:

  $ x='() { :;}; echo vulnerable'  perl -t -le'$ENV{PATH}="/bin";print `:;date`'
  vulnerable
  Sat Sep 27 10:51:12 PDT 2014


Yeah, I hope I never have occasion to walk through an undocumented minefield, I mean collection of "features", designed by this person.

I say this without animosity to bash devs. I think some blame can be shared. But putting it all on people you expect to understand under-documented behavior and "implementation details" in every possible version of every possible flavor of /bin/sh is madness.


I believe this just applies to bash, not sh?


On some systems, they are one and the same. /bin/sh is often symlinked to /bin/bash, which is what makes this so exploitable. /bin/sh is invoked by system(), popen(), etc., and referenced in script "shebangs" (#!/bin/sh at top), so I meant that nobody necessarily knows what "flavor" of /bin/sh they're going to get.


There are methods of IPC other than shell variables. The shell is a known insecure environment, which is why there are limits on setuid for shell scripts.

By letting everyone on the Internet set shell variables, Apache and whatever DHCPd (ISC?) did something they could have known would have bad consequences, whether this feature/bug existed or not.

The only data Apache needs to control is Apache's.


From what I understand, Apache doesn't send them to bash. It sends them to whatever binary is configured to handle the request (using CGI), which was then calling bash unbeknownst to Apache (but implicitly passing along the same environment variables).


Lots of functions that start another process actually start a shell instead, whose argument is a command line to be executed, e.g. system or popen. The convenience in that case is that you don't need your own handling of $PATH or wildcards or argument parsing. It's pretty standard on UNIXoid systems.


> The convenience in that case is that you don't need your own handling of $PATH

You don't with execvp or execlp either.

> or wildcards or argument parsing.

IMO this is of dubious value from, say, a C program. Why "parse" the args? Just generate a list...


The original author of bash (a friend of mine, which is why I have this context) has been interviewed by various newspapers today regarding shellshock, and finds the idea that he might have anticipated the number of ways people integrated bash into various systems (such as Apache allowing remote control over environment variables when running software in security domains designed to protect against unknown malicious users) quite humorous. Apparently, it has been an uphill battle to explain that this was all coded so long ago that even by the time he had passed the project on to a new developer (after having maintained it for quite a while himself), the World Wide Web still wasn't a thing, and only maybe gopher (maybe) had been deployed: this was even before the Morris worm happened...

> In an interview Thursday, Mr. Fox, the Bash inventor, joked that his first reaction to the Shellshock discovery was, “Aha, my plan worked.”

http://mobile.nytimes.com/2014/09/26/technology/security-exp...


"Quite humorous"?

* In Unix, shell scripts and shell subprocesses are everywhere, and are supposed to be everywhere.

* Environment variables are passed across subprocesses by default, you need to explicitly filter the environment to prevent that.

Therefore, if you write a shell, the reasonable assumption is that it's going to be integrated into pretty much all "systems"/programs running on a Unix box with this shell, and environment variables will travel from everywhere to everywhere.

Of course the feature isn't a "bash security bug"; it's just one of those endless poorly documented, weird special cases which together make up what we know as "Unix".

And the answer to anyone shooting themselves in the foot, whether it's one person or a billion, is "you should have read the documentation" - and now, apparently, "you should have read the source".

It's a good thing this kind of "design philosophy" is absent outside the field of computer programming. Generally the vendor should be the extremely diligent party, and the user is assumed to be reasonably naive - even if the user is a professional (think power tools, etc.) It is only programmers, for some reason, who think that it's perfectly fine to add whatever features they want to their code without much worrying about consequences, and leaving the worrying to the users.


Since the sub-process runs as the same user as the original process, it wasn't really considered a security problem. The problem is calling a sub-shell without sanitizing the environment first. It's really a bad idea to connect unsanitized user input to a Turing-complete system of any kind.


Ahem. Every web server passes "unsanitized user input" to "Turing-complete systems". You have a right to expect certain things from subsystems regardless of their "Turing completeness".

* A file system should store your bytes, though nothing prevents a file system written in C from executing, say, logged HTTP requests as commands.

* A CGI script should sanitize form data, though nothing prevents a PHP script from blithely shoving unsanitized data into SQL queries.

* And a shell should pass environment variables to subprocesses, though nothing prevents it from interpreting variable values (or names, or a combination of names, values and the time of day) as commands.

As I said - I don't think it's a security bug in bash, just another one of endless misfeatures. It's about as crazy to interpret environment variable values as code because they have a special-cased form as it would be to interpret, say, file names as code, or certain byte sequences passed to the write() system call as code, etc.


The point is that, when bash was written, there were few mechanisms for executing code as another user. There were servers/daemons, but they did not execute user code.

Of course some people would pipe to shell in their .forward file and eventually get pwnt, but it was a freshman mistake, and the damage was isolated.

Once you reach the point of executing a shell with an euid other than your own, it's not the shell's job to sanity check your actions.

The web has changed the execution model thoroughly. And people now do lazy things based on their loose understanding of flexible execution models.

This is neither a bug in bash, nor a bug in Apache, etc. It's an integration bug between two complex systems that were designed with zero-to-poor knowledge of each other.


I'm not quite so sure about that. One of the early examples I've seen of Unix use (I think Ken Thompson was actually in the video, from the early 80's) showed using the various tools to process file data that got downloaded from somewhere else (where that data was supposedly created by another user). So if you had a script that did:

    cat downloaded_file.txt |\
    while read inputline
    do
        #some processing
        #call another shell program
    done

In this case, bash would still be vulnerable even without the Internet involved -- just processing a data file you got from Bob in Accounting.


Only if you not only set a variable to the contents of data from that file, but then also exported that variable to the environment: don't do that (why would you anyway? ;P).


As I said: it is an uphill battle to make people understand that when this was all built, the idea of a webserver was still just a twinkle in someone's eye :/. To make the statement even more general: the idea that someone would even build a process other than "login" that would accept untrusted data from an unknown random user halfway around the world in the first place, much less pass it to a shell, was not something that really made sense: to run the shell with a custom environment, in the 80s, required control of the parent process, running at the same privilege level (as the same user) as the new shell (unless you ran "su" and went through "login", at which point your environment is reset) meaning if you wanted to run a command you could just do so without trickery. Regardless, thank you for demonstrating this conversational problem in a much more visceral way than I could have alone :(.


A conversational problem indeed... I explicitly said, twice, that it's not a security bug. It's just a feature that makes as much sense as write() interpreting special byte sequences as commands. If such a write() call were exploited through Apache's logging of HTTP requests, would you also defend it on the grounds of Unix predating the web?

As to the article you linked to, I recall that it mentions that the feature in question is actually from the early 90s when it might well have become a security bug... though I still think it's beside the point.


You make this sound hypothetical, so let's make it concrete: if the person who designed terminal escape sequences told me that he finds the idea that he might have anticipated that someone would log arbitrary garbage sent by random users halfway around the world, with no way to trace them or hold them to account for their actions, "quite humorous", I would still find it inappropriate to go on a ranty screed somehow trying to lay fault with their arguably-poor design decision, after exclaiming their "quite humorous" back at them as a question, or to use exasperated rhetorical devices like "ahem", while quite explicitly stating that they should have in fact anticipated this because clearly a shell will eventually be in the position of doing these things that, again, were not foreseen at the time.


It is impossible to sanitize your data if you don't know how the in-band signalling works.


It wasn't foreseen that programs would fill (new) environment variables with (unsanitized) user data as a way to pass that data to subprocesses. I would argue that would indeed have been hard to foresee: there are better ways to pass data and better ways to set the requested subprocess configuration.


Did none of the Bash maintainers since then ever notice this feature?

I'm sure rsh sounded fine when it was written, but....


It is interesting that this feature is guarded by a parameter named priviledged_mode, then no_functions, then privmode, and also another parameter named read_but_dont_execute. So I would say that the bash maintainers were aware of this feature in general. But it looks like they never realized that there was a problem there. (Probably they just counted on the guard to disable this feature in sensitive situations (the so-called "privileged mode").)


> If there's a bug, it's not in bash, but in Apache and the other internet-facing programs that call bash without properly validating and controlling the data they pass to bash.

It's absurd to think that a transport layer should be responsible for "validating" all possible contexts in which the data it transports could be used. How is Apache supposed to know the difference between using a magic string and simply mentioning it? How is Apache supposed to know what magic strings apply to all possible subprocesses of sh that inherit environment variables? A program that receives user data has to be responsible for validating that data, and it's not productive to characterize the lack of validation as a "feature" just because it can be used to provide functionality.

This kind of misguided thinking leads to practices like mod_security refusing to allow comments that contain the magic string "1=1" because they might be trying to inject SQL into something.


The point here is that Bash was not designed to receive untrusted user data in environment variables. On the other hand, Apache/dhclient was designed to receive untrusted data, but then handed it over, without validation, to something that did not expect untrusted data.

In other words, if Apache/dhclient wants to put things into environment variables, it absolutely should make sure to do this properly and indeed to take into account all possible contexts.


So if I write a new shell called rash, that executes anything in an env var between, say, backticks, and install it as /bin/sh, it's Apache's job to neutralize backticks in untrusted data? I can agree with some of the things you say, but this seems to be the logical conclusion, and it's crazy.

The most consistent moral I'm deriving here, is that shells are for executing arbitrary commands in a flexible environment, so don't even touch them if that's not what you want. This applies to Apache, CGI wrappers, any other kind of web development.


install it as /bin/sh,

At first glance, I think that's your problem.

Fundamentally, though, you can't always point to "it's this one component that is broken." There is the classic example that was used to criticize Authenticode, which was Microsoft's way of dealing with mobile code back in the 1990's. It said "all the code is signed, so if something breaks, you know who to sue."

Well, imagine two programs:

1. One of them formats your hard drive, and on installation puts itself into your Quick Start list.

2. One of them runs every single program in your Quick Start list at start up.

Which piece of code is responsible for formatting your hard drive?


It does indeed seem wrong to put magical behavior into /bin/sh, but that's exactly what's going on in this bug. bash is often installed as /bin/sh and is doing extra, non-standard, sparsely-documented behavior. I don't know that it really violates POSIX et al, but it definitely goes on the list of factors at fault.

And yes, I agree it's difficult to assign blame.


Because of popen(), it's near impossible to know which binary programs might unwittingly invoke a shell.


popen() is documented as using the shell, so I wouldn't use it if I follow my moral of the story above. Maybe you're referring to the "shebang" (#!/bin/sh), whereby any executable, that you may have invoked without a shell, can signal that a shell should interpret it?

I guess you have to strip down the environment to only the things you need to know. That's unfortunate, because the nice thing about the environment is that it's inherited, so the user can put in configuration that's needed 3 or 4 steps down the line from where it originated.


Apache does not in general know whether or not data will be passed to bash.

And they can't take into account all possible contexts, because that would involve reading the minds of all possible future users of the interfaces they provide, to make sure none of them decides to call out to programs that treat previously inert data as code.


How is this any different from SQL injection? Input from the world must be sanitized. All we're doing is increasing the scope of the word input - ALL input from the world must be sanitized.


In SQL injection, your SQL server behaves as advertised. If you pass it SQL code, it will execute that SQL code, so you need to be careful that you don't pass it SQL code coming from untrusted input.

With this exploit, bash is not behaving as advertised. If you pass it an environment variable with a certain value, it executes code. There is no way to sanitize your input to completely protect against this class of bug.


Exactly, it isn't. And we don't make our web server software try to guess how to sanitize input in order to prevent SQL injection, because it can't: It does not have the context to, e.g. differentiate between SQL injection and someone talking about SQL injection on HN and giving an example.


Apache knows what data Apache passes to its own sub-processes, which is the issue here. Nobody is expecting Apache to sanitise other apps, just itself.


But without knowing how other apps treat that data, Apache cannot know what needs to be done to sanitize the data.

For what Apache knows, that CGI it executes could treat the presence of the letter "x" in any environment variable as "start a nuclear war".

While it may make sense for Apache to sanitize the data against specific known, common problems, until this week this was not a specific known problem. It might have been if the Apache team audited the code of every plausible piece of code that people might use to interpret CGI scripts, but that's not a realistic scenario.


Apache doesn't have to know how apps treat data in order to use a secure mechanism of IPC rather than passing variables set by people on the internet.

Random person on the internet shouldn't be able to set shell variables. Not 'x', not anything else.


Your response sounds like blaming the world's oldest webserver for being the world's oldest webserver. CGI sucks, but telling the 1990s to go home and not come back until it has a secure mechanism of IPC just isn't helpful: this seriously isn't an Apache (or other CGI httpd) problem.

As someone else said, traditionally only the names of shell variables have mattered, not the content. Apache exports most of its envars named HTTP_* as an attempt to somewhat de-fang them.

That some crusty CGI app spawned a bash process which then chose to do something outrageous with the content of HTTP_COOKIE really isn't Apache's fault. Seriously.


Environment variables are not supposed to be "evaling options in transit". They are a decent method for passing small amounts of data to a program you start.


Unix domain sockets existed in the 90s and would have done the job adequately without evaling anything in transit.

Apache transmitting data created by random people to CGI processes using environment variables is most definitely Apache's fault. It was a dumb idea in the 90s and it's a dumb idea now.


There's a lot of Apache hate here, but I have to wonder if you've actually used it. Anyone stuck running CGI stuff with Apache invariably goes to FastCGI or mod_fcgid for performance reasons, which already uses a Unix domain socket.

That shellshock exploits will still be possible in this configuration is once again, not Apache's problem.


Pointing out that people from the internet shouldn't be able to set shell variables isn't 'hate' - it's basic security and knowledge that alternative, more secure transport systems exist.

Saying that Apache (and other apps) passing data from random people on the internet to a known insecure environment like a shell, that was known to be insecure in the 90s, is 'not Apache's problem' doesn't actually absolve Apache of responsibility for its own programming.


1) Since the mid-90s, Apache and similar HTTP servers have offered FastCGI as a way of communicating with processes via sockets. So you can understand why hearing you single out Apache to "use a more secure IPC like sockets" detracts from the main argument, which I'd otherwise agree with.

2) I'd wager that the people who came up with CGI initially expected that the process Apache spawns would be anything but a shell.

3) Non-shell CGI processes happily deal with all manner of binary, back-ticks, dollar signs etc. in their HTTP_ envars all day and have done so for nearly 20 years. It's not a huge leap in logic then that these envars should be considered capable of holding arbitrary data without exploding.

4) You make it sound like nobody has considered environment variables a problem before, but sanitizing the environment before spawning a process from the CGI was already a well-established best practice way before this bug came along. That process looks like: whitelist of envar names, remove all others; check PATH/LD_LIBRARY_PATH/etc. sanity; command string parameters use specialized token substitution (eg. "echo %{integer}%" where some bespoke code throws an error if %{integer}% is interpolated to anything other than an integer), etc. (see the sketch after this list).

5) Despite the insanity of running CGI stuff which I would generally agree isn't a great idea, I'm pretty sure I'm allowed to be surprised that the mere content of an environment variable, whose job it is is to contain arbitrary data to be passed on to the CGI app should cause things to explode.
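
A minimal sketch of that name-whitelisting step, in Python for brevity (ALLOWED and spawn_cgi are illustrative names, not anything Apache actually ships):

    import subprocess

    ALLOWED = {"LANG", "REQUEST_METHOD", "QUERY_STRING", "HTTP_USER_AGENT"}

    def spawn_cgi(handler, request_env):
        # Keep only whitelisted names and force a sane PATH; every other
        # inherited variable is dropped before the child starts.
        clean = {k: v for k, v in request_env.items() if k in ALLOWED}
        clean["PATH"] = "/usr/bin:/bin"
        return subprocess.call([handler], env=clean)

Which is exactly why point 5 stands: this established hygiene is name-based, and shellshock hides the hostile part in the value.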


Re 4, I'm actually saying nearly everyone on Unix considered environment variables to be a problem before. Not sure why you got the opposite impression.

Generally agreed though. I'd love to see the shell actually separate data from instructions too, but it would break a lot of things.


Where in the CGI spec does it say anybody has to eval anything?


Nowhere. Apache should have used a socket rather than the shell (which effectively evaluates data as instructions). People knew shell was insecure in the 90s, and there were malicious users back then too.

I think environment variables were used either due to naivety or an ultimately mistaken concept of simplicity.

Summing up the entire thread: Apache should have used a socket, and should have known they needed to.


> Apache should have used a socket rather than the shell (which effectively evaluates data as instructions).

But the shell shouldn't evaluate data as instructions. This is the bug!

Apache could have used a socket and I could write a buggy endpoint which evaluates data read from that socket as instructions. Tada, same problem, same bug.


Unix shells don't really separate data from instructions. Your PS1, for example, can contain commands in backticks and subshells; it's expected to be able to do so.

Shells are insecure. You could replace them with something similar but secure, but it wouldn't be a POSIX shell anymore. Hence the well-known setuid blocking on shell scripts; hence not letting random people on the internet set shell variables.


Well, it's well known that being able to freely set certain environment variables is a disaster. That any variable regardless of its name should have such an effect is, however, news.

You can talk all you want about unrelated gotchas in shells & security, and that's missing the point.


> Apache should have used a socket, and should have known they needed to.

Well, there's nothing inherently dangerous about setting environment values. Yes, some have special meanings in special contexts but so could data piped over a socket.

So your argument boils down to: Apache should have seen the special treatment of data in this context, and used another context instead.

That's fine, but it doesn't address the real problem, which is that the shell was not designed to be executed on behalf of other users. There's no spec to say that data received from the "other" channel will not be interpreted or used unexpectedly.

Until you have that guarantee, you're just rearranging the problem space. There's no systemic improvement.


> So your argument boils down to: Apache should have seen the special treatment of data in this context, and used another context instead.

Yes; the behaviour of all Unix shells and their poor separation of data from instructions was well known in the 90s.

> There's no spec to say that data received from the "other" channel will not be interpreted or used unexpectedly.

That is true. However something specifically designed as a communications channel, such as a socket or FIFO, is generally better suited than something designed as a shell.


CGI originated with NCSA. It was already a de-facto standard by the time Apache was released a couple years later.

Absolutely nothing about the CGI attack vector is unique to Apache. It could occur with any webserver that supports CGI which, up until nginx, was pretty much all of them.


Apache doesn't have the option of using a socket for implementing the CGI spec. The spec specifies environment variables.

Apache could refuse to provide a mod_cgi, in which case it would never have gained the position it did, and some other server with support for CGI would have.


> The spec specifies environment variables.

If that's true, then the spec has been obviously, fundamentally broken for its entire existence.


You realize that this isn't just an Apache problem. It's a problem with any network-accessible program, directly or indirectly.


If you can't put data into variables then that makes what you can do with shell scripts quite limited.

Your advice is really: don't use shell scripts anywhere near untrusted data. Tracking which data is trusted and which isn't, across different processes in different languages across different systems, is not a trivial task. So really the advice is: don't use shell scripts. That's sound advice, but not something Apache can or should enforce.


Yes. Don't use the shell for data transmission. Use a socket. Apache had that choice, knew bash was insecure (as everyone did in the 90s; that's why shell scripts can't be setuid), and didn't exercise the right option.


>Yes. Don't use shell for data transmission.

That isn't sufficient. Your argument means you can't use the shell for anything. If you can't trust it not to execute the contents of a variable, then it should never be used other than on isolated systems where the data it processes comes from completely controlled sources. Using the shell becomes the equivalent of using the gets() function in C.

That means a complete redesign of all Linux distros, for a start. You are going to need some better justification for throwing away an entire operating-system ecosystem just to preserve a behaviour in bash that basically nobody uses.


Granted, this export -f feature of bash needs a different, safe, implementation.

But otherwise, in the context of the internet of today, yes, we'd need a completely different operating system (perhaps something based on capabilities). Unix indeed doesn't seem to be good enough for a safe internet.

If we could say that there are N bugs, and that with the OpenSSL bug and this bash bug we only need to correct N-2 bugs and we'll be fine, then perhaps we could keep unix (and similar systems).

But it just looks like it's more of a systemic problem (indeed not specifically an Apache bug, or an X or Y bug, but bugs emerging from the interaction between two or more components in the unix ecosystem). Therefore, if we don't change the fundamentals, we cannot exclude that we will keep introducing and discovering this kind of bug again and again.


Shell executes the contents of variables by design, and users use it every day - ever had a PS1 with backticks or a subshell? If you don't wish for it to do so (which is reasonable), fine, but you will break compatibility and throw away that entire ecosystem of tools (which may also be reasonable).


> "Don't use shell for data transmission."

Shell != Environmental variables.


That's correct. Environment variables are implanted by the shell. Don't use environment variables for data transmission.


Apache will pass data via more secure means unless you explicitly use mod_cgi, which happens to have an interface that requires that the data is passed via the environment.

If you don't want a random person on the internet to be able to set shell variables, either don't enable mod_cgi, or don't use CGIs written as shell scripts.


AFAIK Apache itself isn't vulnerable, because it probably doesn't invoke a shell. CGI apps it invokes might, which is how the vulnerability is triggered.


I think it depends on the situation; at least, that is what I've gleaned from my reading of it.

If you run a .php file and you have mod_php, or the webserver has an understanding of the concept of PHP and calls the binary directly, all nice and good.

If you have a something.randomext, or something without an extension at all, then... whelp.

Luckily there is a program dedicated to working out how to run random executable files, the shell (and it uses the shell-bang, or shbang for short), so in that situation the call will be done through the shell.


My understanding is that Apache starts CGI apps in environments with variables for things like HTTP_USER_AGENT that are set by random people on the internet through Apache.


Yes, because that is what the CGI spec demands.

There is nothing inherently insecure in that: The environment is just a bunch of strings.

Whether or not it is insecure depends 100% on the CGI that gets executed. Which program that is, is 100% down to the person configuring the website.

Apache does not even have a theoretical way to ensure those applications do not do anything stupid with the data, no matter the method used to pass it.

If you are concerned about environment variable passing, you have the simple solution of not trusting mod_cgi. Most people have not used it for years anyway, because of the performance impact.


Yes, the CGI spec is obviously broken. Clearing the environment a la Postfix, or using a socket, would have been the obvious fix in the 90s.


> because it probably doesn't invoke a shell

Some people write CGI scripts in bash.

Yeah.


Yeah, I meant Apache itself doesn't invoke a shell.


> if Apache/dhclient wants to put things into environment variables... it absolutely should ... take into account all possible contexts

Sorry if I'm picking on you too much, but I noticed this and thought of an even better counterexample. By this logic, it's Apache's job to prevent SQL injections. It should know that a single quote, in one possible context, can terminate an SQL string, popping the SQL parser into a state that allows for arbitrary statement injection.

It also needs to know about any other database language that could ever possibly be invented, now and for eternity, because we can't know how long any given version of Apache will remain in use.


> Sorry if I'm picking on you too much

Don’t worry.

> By this logic, it's Apache's job to prevent SQL injections.

Curiously, I’d call this an example for my position, not yours: It is not the job of the SQL server to prevent SQL injections, quite the contrary, it is the job of the calling application to ensure that the things it tells the SQL server to do are actually safe. Similarly, it is the job of Apache to ensure that the things it tells its child processes to do are safe.

Actually, you’re probably right – if I’m guessing correctly that all the "CGI" standard does is to say "These user-supplied values go into these environment variables", then it is not the job of Apache to sanitise them. It merely means that Bash is an unsuitable choice for an environment with untrusted environment variables, i.e. as a CGI script or even for use as /bin/sh if you cannot trust environment variables.

It gets a bit messier with dhclient, because that explicitly calls a Bash script and hence definitely should know about the issues Bash might have with environment variables.


There's no need to guess. The relevant RFC sections are easily located.

http://tools.ietf.org/html/rfc3875#section-3.4

http://tools.ietf.org/html/rfc3875#section-4.1.18

http://tools.ietf.org/html/rfc3875#section-7.2

http://tools.ietf.org/html/rfc3875#section-9

HTTP header values have some limited formatting requirements[0], but are otherwise arbitrary. There is no general way for the server to know which values are safe and which are not. That would require the server to know how the script would respond to any particular input. If that were the case, the script would be superfluous, and the server could simply respond to the client with the result it already knows the script would return!

[0] https://tools.ietf.org/html/rfc2616#section-4.2


> It is not the job of the SQL server to prevent SQL injections

True. But perfectly valid SQL shouldn't cause the SQL server to be exploitable. And the application has no idea which SQL statements are exploitable.

> Similarly, it is the job of Apache to ensure that the things it tells its child processes to do are safe.

True. But the unsaid assumption is "safe from the point of Apache". Not "safe from the point of some unknown exploit".

So it's reasonable for an application to sanitize what it passes to the shell, by dealing with known and documented issues. e.g. Don't pass "$FOO" in a string to system() and expect it to be treated as the literal "$FOO" string.

It's unreasonable to expect that the application know everything about the attack surface of everything it uses. Maybe there's a bug in an SQL library which causes the application to crash when a SELECT statement contains the text "crashme". This text is valid SQL, and it is entirely unreasonable to expect that the application "sanitize" such text.

In the same way, environment variables are strings. It's a bit surprising to discover that bash will execute code contained in random environment variables.


I see your point with the first bit. System A passes safely to System B, B passes safely to C, etc.

The problem, as you note, is that in order to correctly implement the CGI standard, Apache must pass the problematic data. Even if it took care to see "I'm about to pass this to bash, which might do something stupid with it, so I'll fix that", it can't know that a non-bash CGI executable is going to pass it to bash somewhere down the line.

I agree with your other conclusions. People will definitely be re-evaluating shells, CGI, and environment variables for at least a few months or so while this is fresh.


The analogy is very weak: SQL injection should be prevented in your application. Does Apache need to set global environment variables? It seems pretty extreme that it just takes all the user input and makes it available to the underlying shell; there will be a whole class of bad stuff that comes from this!


> Does Apache need to set global environment variables

Apache? No. mod_cgi? Yeah, because that's its fucking job because that's how CGI works http://tools.ietf.org/html/rfc3875


Er... Well it's one way of it working, from the spec:

  'meta-variable'
  A named parameter which carries information from the server to the
  script.  It is not necessarily a variable in the operating
  system's environment, although that is the most common
  implementation.

I suppose the authors of bash know that most of the implementations of CGI do this, so they should be preventing bugs like this. Thanks for your clarification. I still maintain that this sounds like a very insecure mechanism for passing information from Apache to your application.


Keep reading. Use of environment variables is specified for Unix. (And, actually, for all other systems which the RFC provides a specification, differing only in minor detail.)

http://tools.ietf.org/html/rfc3875#section-7.2

   For UNIX compatible operating systems, the following are defined:

   Meta-Variables
      Meta-variables are passed to the script in identically named
      environment variables.  These are accessed by the C library
      routine getenv() or variable environ.

This is how CGI works. This is how CGI has always worked. NCSA HTTPd was built on and for Unix, and was the origin of what we now call CGI. This is the interface to which CGI-compliant scripts and webservers have always adhered.

> I suppose the authors of bash know that most of the implementations of CGI do this

1) I don't know why you would suppose the authors of Bash would know how CGI works. Nothing about writing a shell implies knowledge of the intricacies of web technologies.

Certainly many of the people on HN so eagerly looking to blame either Bash or Apache don't seem to know how it works. If they did, they might have realized what quesera so astutely observed earlier today[0]:

"This is neither a bug in bash, nor a bug in Apache, etc. It's an integration bug between two complex systems that were designed with zero-to-poor knowledge of each other."

2) I don't know when this particular Bash feature originated, but Bash as a whole pre-dates CGI by about five years.

> I still maintain that this sounds like a very insecure mechanism

I don't know of anyone who has looked at CGI and thought the use of environment variables was a good idea in the face of the 21st century's security landscape.

This mess was more than two decades in the making. It is the collision of two entirely unrelated courses plotted by two entirely unrelated parties, neither of whom could have been expected to know what the other was doing, and neither of whom could have foreseen how yet more unrelated parties would (in some cases unknowingly!) combine their works into a time bomb.

[0] https://news.ycombinator.com/item?id=8376914


> "This is neither a bug in bash, nor a bug in Apache, etc. It's an integration bug between two complex systems that were designed with zero-to-poor knowledge of each other."

Which makes the OP's comparison to Ariane 5, a rocket with an infamous bug that could be described as an "integration bug" (between older and newer parts of the software), even funnier. Hopefully the development teams of a rocket were a bit more in sync than those of Apache, bash, distribution maintainers (some of whom have made /bin/sh link to bash), and CGI programmers.


You say dhclient should take into account all possible contexts before passing data to bash, but it's not easy to know what data shells will parse as code and what they won't[1]. If we have to sanitise data before passing it to shells then the only sane conclusion is to never pass untrusted data to shell scripts. I certainly don't trust myself to know the full range of features in all shells that might suddenly turn a plain text string into executed code. I can't think of any other people that I would trust to know that either.

Is that really what you are advocating here? Don't ever use shell scripts with untrusted data? That has been my philosophy for a long time, but up till now that position has been viewed as extreme by most people I've shared it with.

[1] there are still exploitable bash parser issues to be revealed http://seclists.org/oss-sec/2014/q3/777


Unfortunately the specs for system() and popen() require the use of shells. It was known that you had to escape arguments and control the names of environment variables, but it's entirely bizarre to expect people to know before shellshock that the values of otherwise completely safe and sanitized variables might still be interpreted by bash as code.

We shouldn't have to give up popen() just because bash was designed insecurely; we should fix or replace bash.


Apache has no control over what application code it is that receives the evironment variables via mod_cgi, and most of the time it will not be bash, but indeed a special purpose application or script interpreter that is designed to receive untrusted user data in environment variables since that is what the CGI spec dictates.

That sometimes people opted to use bash as the script interpreter, and/or that people sometimes shell out to bash from within other environments without sanitizing the environment they pass along, is not Apache's fault.


Why is bash the only program which has special permission to trust its input?


It has permission to trust the input the user enters into it, because that is its job. 20 years ago nobody assumed environment variables were an issue, but this should have been fixed years ago.

I am also pissed that they patched the parser; the feature is still in there, when nobody uses it and it has already proven to be a security vuln once.


It's used by bash itself for communicating with subprocesses.

Input you type at the terminal is not the same as an environment variable; hackers set environment variables much more often than legitimate users.


There are many other ways Bash could do that, though, which would make it much harder for further exploits. Such as opening a pipe and writing the definitions to the sub-process that way, so they go "out of band" relative to the regular environment variables. Or, if it has to be in environment variables: accept a special argument in the argument list that is a key it uses to sign the variables. It does not need to remain secret - just make sure that merely inheriting environment variables with potential attempted exploits is insufficient for anything to get executed.

And/or change the way these definitions are handled - I get that it's tantalizingly simple to just pass this through the usual parser, since then you get the function definition parsing for free, but at the very least this parsing of the environment variables shouldn't go through a code path that even potentially executes anything.
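
A purely hypothetical sketch of that signing idea (nothing like this exists in bash; the scheme and names are invented for illustration, with parent and child shown in one place for brevity):

    import hmac, hashlib, os

    # Parent side: sign the exported function body with a per-invocation
    # key that is handed to the child out of band (e.g. via argv).
    key = os.urandom(16)
    body = "() { echo hi; }"
    sig = hmac.new(key, body.encode(), hashlib.sha256).hexdigest()
    env_value = sig + ":" + body

    # Child side: refuse to define the function unless the signature
    # verifies, so merely inheriting a hostile variable does nothing.
    s, b = env_value.split(":", 1)
    ok = hmac.compare_digest(
        s, hmac.new(key, b.encode(), hashlib.sha256).hexdigest())
    print("accept" if ok else "reject")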


> There are many other ways Bash could do that, though

KSH and ZSH will load functions from files they find via $FPATH.


I guess as a POSIX-compliant shell, Bash can be expected to trust its inputs on stdin and in special environment variables like PATH.

But the problem emerges from trusting 100% of inputs, not just the inputs that are intended to be trusted.


Basically what they're saying is that CGI should never have happened.


It goes far beyond CGI. Linux systems use shell scripts all over the place where the data involved may come from untrusted sources. DHCP is one example, but there are many others. If shells can't be trusted not to suddenly execute the data inside variables, then shells can't be trusted to do anything.


While that might have been great, the alternative is: People should not have used shells as CGI handlers, and people should (irrespective of CGI) sanitize any environments passed on when spawning other programs from their applications.

This latter is really something we should do irrespective of where the input is coming from. Even if the content comes from trusted sources, passing the environment unchanged to sub-processes means we risk all kinds of unintended consequences if an application developer decides to spawn a sub-process that exec's into some process that makes decisions based on various variables.

It's a bit like letting our applications call functions with random junk in the argument list.


One thing I've learnt over the years: anything which relies on "people should..." as a basis is doomed.


> The problem is that 5 years later, new software was developed (apache, dhcp, etc), that uses bash in child processes

I don't think this history is correct.

At the time, most systems used sh as the default non-interactive shell. The other shells such as csh, ksh and bash were considered to add interactive niceties, but programmers and sysadmins expected scripts to use sh.

One of the reasons sh doesn't have a lot of whiz-bang features is that it needs to stay compatible and portable across all the different Unixes so that all the scripts, past and present, will work. And sysadmins liked this conservative approach.

It wasn't until the rise of Linux, and the aliasing of /bin/sh to point to bash, that sysadmins started becoming comfortable with running bash as the default non-interactive shell. And that was only because bash promised to be fully compatible with sh. And bash was compatible but it also added new scripting features. But most developers and sysadmins rejected using these new features because you would lose portability to sh systems if you did so. That is until now, with the dominance of Linux and portability becoming less of a concern.

So really this bug is the fault of bash's embrace-and-extend strategy.


It's an "internet tech" bug. Every developer should know how hard it is to parse textual data, versus well-defined binary, in a secure and foolproof way. Yet every damn piece of internet infrastructure is based on handling textual data: mashing it up, passing it around, escaping and unescaping it in hundreds of stupid formats. No wonder most of the security troubles surfacing over the years are some form of abuse of this crazy design flaw: buffer overruns, SQL injection, the OpenSSL bug a few months back, and now this. Let's go back to sanity and use well-defined binary protocols where there is no damn way to send a command by text, only very explicit semantics, and stop the Unix way of thinking that text should be more than a human interface. Text should never be used as a command language in between complex programs. Period.


I really don't think making protocols less understandable by humans will solve anything.

This has always been and always will be a hard problem. Consider this quote which I found in The Shellcoder's Handbook:

"Wherever terms have a shifting meaning, independent sets of considerations are liable to become complicated together, and reasonings and results are frequently falsified." -- Ada Lovelace

We've known about this since literally the beginning; we'll be cursing ourselves over it until the very end. Vulnerabilities are going nowhere.


It's not about making protocols less understandable by humans; it's about recognising that we are programming computers, not humans. There should certainly be a way for humans to interact with the program at some point, but we should not force the same kind of interactions on the programs themselves. It's much harder to make text and text-based command parsing and handling secure than it is to use binary protocols in the first place.


I just disagree. You seem to be saying we should use binary protocols and load them directly into memory, Cap'n Proto style. But what if I'm little-endian and you're big-endian? Parsing happens. Text-based protocols fit well into humans' heads, and it's the humans that have to do the debugging. I think it would only make the process of finding bugs slower and more complex, and give an advantage to the attackers.


Please describe specifically how that would have helped.

This has nothing to do with parsing text. The problem here is that Apache et al. send untrusted data to a process that treats it as code. It wouldn't matter if HTTP were a binary protocol and if bash read a well-defined bytecode instead. I mean, look at shellcodes.


You are describing the problem exactly: it's all too tempting to pass text around from user input to command-line arguments without any way to validate the text data, and to assume it's OK because it's easy. It's exactly the same argument that goes on between static and dynamic typing in programming languages: static typing ensures some sort of semantics is respected. If you pass text around, because it's easy and fast, most of the time you will never validate the data, and you have no way to ensure that you are not actually handling a bomb. If the protocol were binary, there is no way in hell you would be tempted to pass its data without validation to an external program, because you'd have to respect the API and because there would be no way to just send a bunch of commands. The same goes for SQL injections, URL buffer overflows, etc. Free-form text should only be used for actual human textual data and should NEVER be the interface in between programs. It's way too fuzzily defined to serve as a protocol.


If the protocol was binary there is no way in hell you would be tempted to pass it's data without validation to an external program because you'd have to respect the API and because there would be not way to just send a bunch of commands

I assume you're talking about Apache - but Apache had no way of validating the data. The protocol just said "this is a blob from the client", which any binary protocol for the task must be able to handle. Apache had no business validating it, any more than it should validate any other content - how should it know what makes it valid?

Bash, on the other hand, just received that blob and treated it as an executable. It wouldn't matter if the protocol between the server and bash was binary, since it was a valid value as far as the protocol was concerned.

The problem here is the hidden channel between Apache and bash, which never actually talk directly to each other (it's through the CGI binary) but still pass data. It has nothing to do with text protocols.


no, the problem is that you can treat any kind of text data as an executable. You can try to fix this by adding mountains of complexity and excuses, but it would still be true: as soon as text enters the equation you need to escape/encode/decode and parse. Every time you do that you add more complexity than is needed, and you also add many ways to abuse the programs and create "interesting bugs".


I can craft malicious binary data just as easily to execute a function if you execute binaries that begin with a few magic bytes when you're reading input into a buffer.

You seem to be relying on some assumption that you have about human psychology for your security gain. Somehow people would never do that with a binary protocol, and text protocols make them more comfortable and trusting. At least they can read text protocols directly; binary protocols involve me trusting a bunch of middleware I'm using to read them, too, or writing my own (always great for security.)


no, I rely on the fact that any version of an "eval" function should just not exist, and that any text-based protocol encourages the existence of such functions that can execute whatever is thrown at them, just because it sounds so easy and a quick shortcut in API design.


If the protocol were binary, exporting variables to subprocesses and exporting functions to subprocesses would go in different places, and Apache would know to send the one but not the other.


How, if the protocol in question - the environment variables - has no concept of functions?

The matter is that Apache and the protocols (HTTP and environment vars) are just being used as a tunnel between the attacker and bash. They can't pass functions via another channel because they don't know what functions are. All they know is they're passing blobs of data - which any protocol would do, binary or not.

Bash happens to recognize a text value as functions, but it could just as easily recognize the magic value of an ELF binary and execute that, or any other binary format used to encode functions.


The problem is that Bash is using the same channel for two quite different things - values and functions. It's doing that because the channel is a string; if there were a proper protocol for passing environment to subprocesses, that protocol would make a distinction between the two.


if there were a proper protocol for passing environment to subprocesses, that protocol would make a distinction between the two.

TCP is a binary protocol, how does it distinguish between executable and plain text formats? Answer: it doesn't, because TCP doesn't know or care about that, that's left to the layers above to handle.

Likewise, environment variables don't know or care about "functions", that's a concept that doesn't enter into the protocol, since it's not a shell specific protocol. All it transmits are keys and values, which are generic blobs of data. That bash uses the protocol to transmit code mixed up with data is no more the protocol's fault than the fact that TCP was used to transmit those same functions on HTTP requests.


In principle, a distinguishing protocol could be embedded within the undistinguished one. If the actual environment variables were preceded with a sequence indicating the type of the contents in all cases, this would not be an issue.


You're getting close, but you missed the essence. It's not about text vs. binary. It's also not about "well-documented" vs. "ad-hoc". It's about preserving semantics.

"The Unix Way" means throwing away all semantic data - passing plain strings with no context, which are then parsed and re-parsed in an completely ad-hoc manner, usually with regexp-based shotgun parsers.

Note how SQL injections or XSS attacks are prevented - people stopped stitching strings together and started generating proper instructions through code. User input is sanitized and driven through a process that converts it from an untrusted string into a trusted data structure. Typing SQL queries in a semantics-aware system looks almost the same as stitching strings (thanks to SQL being flat), but now you can't possibly SQL-inject yourself.

So in general: stick to text formats or not, but whatever you do, never glue data structures together using tools that work at the data-medium layer and are not aware of the structure and meaning of the data they are operating on. E.g. never glue strings to build SQL queries or HTML code.
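The same distinction can be sketched in shell: pasting untrusted text into code re-parses it as instructions, while passing it along as data keeps its meaning fixed.

    input='hello; rm -rf /tmp/victim'
    bash -c "echo $input"     # stitched into code: the ; is parsed, rm runs
    printf '%s\n' "$input"    # kept as data: the ; is just a character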


It's not about text vs binary but about well-specified, documented and understood, versus ad-hoc, convention-based, "infinitely extensible", and taught-through-blogs.


All very well said, but until tools like Protocol Buffers became popular, the tooling for working with custom binary protocols was pretty dire.


This seems like contrarian nonsense. There is no sane reason for a variable definition to result in code execution. There is no sane reason for a function definition to result in (immediate) code execution. Part of those things' very purpose is not to result in execution, because we have explicit syntax for asking for execution, precisely because the purpose of these constructs is not to execute. If anyone ever intended this, then they intended what is still best described as a bug... It is not as if intention is proof against being a bug! I have written bugs with full intent.

Disclaimer: I may be crabby from a busy week of mitigation. Still, I don't hold any particular ill will for the original bug. But a bug it is.


It is an amazingly powerful bug. In what system can you plug a magic value ("() {") into a variable which everyone expects to hold static data, and have the value of the variable interpreted as code and executed spontaneously on invocation?

And: The magic value is undocumented.

And: Any instance in the entire tree of calls will do this.

Some of this is strictly on bash. But some of it also has to do with the environment feature of Unix, which is basically a god object with all the power and temptation ("look what we could get for free if we let it be a function!") that entails.


> Assumedly programs like apache filter out environment variables properly. But unfortunately, in the validation of input data, they fails to validate correctly input data because they don't expect that data starting with "() {" will be interpreted by their bash child processes. If there's a bug, it's not in bash, but in apache and the other internet facing programs that call bash without properly validating and controlling the data they pass to bash.

Bullshit, this may well be perfectly valid data and web servers are not in the business of shielding shells against their own misfeatures.

And if they were, where would they stop? It would require that webservers do arbitrary context-sensitive analysis of everything going through, in case this is a malformed JSON string triggering a bug in GSON, while that is data injected unescaped into an SQL variable, and the other one is too big for an underlying C buffer.

You can only end up with a webserver refusing to do anything, because some idiotic application somewhere may misuse or misunderstand anything it lets through.


Your comparison doesn't make any sense. It's an obvious requirement for a JSON parser that it be able to parse input from arbitrary sources, including malicious ones. It's not so obvious that a shell should have to deal with malicious environment variables, due to the reasons outlined in the original post.


'It's obvious' isn't an argument. It certainly didn't seem obvious about YAML for some people [ http://www.kalzumeus.com/2013/01/31/what-the-rails-security-... ] to exactly the same effect.

edit:

"A brief description: Ruby on Rails makes extensive use of a serialization format called YAML, most commonly (you might think) for reading e.g. configuration files on the server. The core insight behind the recent spat of Rails issues is that YAML deserialization is extraordinarily dangerous. YAML has a documented and 'obvious' feature to deserialize into arbitrary objects. Security researchers became aware in late December that just initializing well-crafted objects from well-chosen classes can cause arbitrary code to be executed, without requiring any particular cooperation from the victim application."

So what was 'obvious' then is the opposite of what is 'obvious' now.


Except the JSON parser may be using specially-formatted input[0] as parsing directives[1], the XML library has various custom directives and is probably sensitive to billion-laughs attacks anyway, the SQL library doesn't correctly handle part of the DB's dialect, etc… in the same way, you've got a shell somebody decided should smuggle code through env vars, and who didn't realise it was implicitly executed OOTB as the cherry on the cake.

> It's not so obvious that a shell should have to deal with malicious environment variables, due to the reasons outlined in the original post.

Which I don't care for, my point is that the webserver can not wipe the ass of every bug or misfeature implemented by the shit put behind it. It's just not possible.

[0] e.g. "magic" object keys or keysets, most libraries expose ways to hook into object deserialisation to do exactly that but they could do it by default as well, and I'm sure there are some which do

[1] which is exactly what bash does here


It's the requirement for the JSON parser, not for apache. It's like saying apache should quote SQL strings automatically so that SQL injections can't happen. This is not apache's job!


It's the job of the CGI program, which is the same code that would have responsibility for sanitizing environment variables before calling bash.


There's nothing for the CGI caller to sanitise; how could it, and why should it, know how arbitrary programs are going to arbitrarily misinterpret what it forwards? The CGI script could be Python eval()'ing it, or Ruby interpreting it as a local file to display or delete, and it's no business of the CGI caller that they do.

All the CGI caller can and should do is forward correct data as defined by RFC 3875, the rest is not its job.

> sanitizing environment variables before calling bash.

The CGI caller may not even be calling bash - then what? Should it remove anything which looks like valid PHP code because it's calling a PHP CGI? Oh, but now the PHP CGI uses system(), which creates a subshell which is still holed, and we end up back where we started: if it becomes the CGI caller's job to clean up data which could be misinterpreted by application code, the only thing it can do is stop working entirely.

Now if you want a mod_bash_is_retarded prefilter feel free to implement one, but it most definitely is not mod_cgi's job to fix that crap, mod_cgi's job is to correctly implement RFC 3875, and the number of times bash is mentioned in RFC 3875 is 0.


I think you're in violent agreement with the comment you responded to in this case (especially judging with what he's written elsewhere on this thread).

He's saying if Apache passes a request to mod_cgi, which spawns "someapp", it is not Apache, but "someapp" that should sanitize the environment before it calls bash.

(and of course if the developer/admin has chosen to write their script to be run by bash, that's their mistake)


A shell certainly is not supposed to evaluate random environment variables.


Exactly. It's impossible for bash to know the difference between wanted variables and possibly-malicious ones. According to Postel's Law, it's Apache that should be more careful with its output, not bash that should be suspicious of its input.


Still sounds like a bug in bash to me.

What if I have a printer called "() { :;}; echo lol", so I manually export PRINTER="() { :;}; echo lol" (see http://www.tldp.org/HOWTO/Printing-Usage-HOWTO-4.html)

I don't think this was the intended behavior even 25 years ago. :)


The saner way to implement this bash feature would be to have a single environment variable containing inherited shell functions. A special prefix is an alternative, but a bit harder to filter.
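For reference, the post-Shellshock hardening patches went the prefix route (a sketch; the exact encoding varies between versions and distros):

    $ foo() { echo hi; }; export -f foo
    $ env | grep -A1 BASH_FUNC    # patched bash namespaces the variable
    BASH_FUNC_foo%%=() {  echo hi
    }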


I personally feel bad for Chet about this whole thing. For those of you (probably most of you) who do not know, Chet has been maintaining Bash for free in his spare time for the last 25 years. He began working on it because he was not satisfied with the shells available at that time:

    In 1989 or so, I was doing network services and server support for
    [Case Western Reserve] University (CWRU), and was not satisfied with
    the shells I had available for that work. [1]
I had the privilege of hearing Chet speak about his experiences maintaining Bash.[2] From my perspective he has done a really great job over the years making software that many people love to use and abuse.

So while this is a really bad network security situation for the internet at large, I think it is dubious to hold Chet or even Bash at particular fault. Rather, we are all at fault. We have been writing software that just shells out to Bash or sh or z-shell for years because it is convenient. We could easily have written our subprocess code in better ways, but it was easy to use shells and we used them, even when we didn't really understand them.

[1] http://www.computerworld.com.au/article/222764/a-z_programmi...

[2] The venue was Link State a student run conference here at CWRU


This is completely wrong in numerous ways.

Firstly, the "security concern" is definitely a bug.

It is related to bugs in a feature; nobody is saying that that feature itself is a bug, though many are saying it's a "misfeature". So the idea that the feature itself is a bug isn't something that requires opposition.

The idea that it's the Apache people's fault somehow, because they didn't inspect the implementation of something that they rely on, is wrong.

> When reusing a component with missing specifications and lacking documentation, checking the source of the implementation should be standard procedure, but it has clearly not been done by Apache or DHCP developers.

Firstly, the CGI mechanism doesn't rely on Bash; it relies on the passage of environment variables.

Secondly, for the shell language, there is a specification: POSIX. Someone calling a shell implementation should be able to rely on the interface contract. Nowhere in POSIX is it documented that code from environment variables is to be executed by the shell.

Apache can run on systems that don't have Bash. A shell other than bash can be used for running a CGI shell script. You don't get the source code, necessarily; how can the Apache developers inspect the source code of a proprietary shell on a proprietary Unix?

Even if the Apache or DHCP developers were to (insanely) take responsibility for this flaw, the workarounds in their code would be Bash-specific hacks: basically they would have to parse anything that goes into an environment variable and validate that it doesn't have the syntax which exploits the Bash issue. That would clearly indicate that it's a Bash problem.

What also indicates that it's a Bash problem is the way it is being handled: fixes have been issued against Bash, not against other programs. The fire is where the smoke is, generally, and that's where you pour the water.

Lastly, please don't post diatribes to a code paste hosting service; it is not your soap box. Thanks!


This was my initial opinion of the bug, as well. The parent processes are in control of the environment and should be validating input.

On the other hand, after thinking about it, there are a number of reasons why I decided that this is at best a misfeature of Bash.

It is incredibly undocumented. I've been a Unix guy for over 25 years, and I've been using Bash for most of that time. (Sorry, David Korn.) I've used Bash a lot. But I've never heard of this thing.

It violates some ill-defined, personal, un-thought-about assumptions about environment variables. An environment variable with executable code? That's as terrifying as LD_LIBRARY_PATH, and that is very well known. One reason I've probably missed this feature is that it is something I would never consider using.

In my opinion, it's almost impossible to secure this on the parent process' side. Sure, the parent can look for magic Bash strings, but.... This isn't just Apache, it's potentially every other network accessible program that calls a shell, and that is a very common thing to do in Unix.

Finally, consider some of the special behavior of execlp and execvp:

"If the header of a file isn't recognized (the attempted execve(2) failed with the error ENOEXEC), these functions will execute the shell (/bin/sh) with the path of the file as its first argument. (If this attempt fails, no further searching is done.)"

You could end up starting a shell without knowing.
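It's easy to see from an interactive shell, which applies the same fallback (a sketch):

    $ echo 'echo surprise: a shell ran me' > noshebang   # no #! line
    $ chmod +x noshebang
    $ ./noshebang        # execve fails with ENOEXEC; a shell runs it anyway
    surprise: a shell ran me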


"Security in internet software and protocols were often just not considered at all in that time..." I find that a rather perplexing issue with many, especially software based products and research. How do you, in early, usually time, effort, support, and resource constrained stages of a project or research discover and identify all dependencies and requirements to the best of your knowledge so that there is not a type of runaway train effect where momentum is gained and speed is accumulated but it is frequently overlooked that there are all kinds of things like security, anonymity, etc. that are not being considered even though they will invariably become monumentally important.

Take the internet today in general as a huge example of that issue; it was never developed with anonymity or privacy or security in mind, and here we are, horrified by even just the tip of the iceberg that was revealed through Manning and Snowden. If the early researchers and engineers had built the early technologies with the most basic, fundamental human considerations in mind we might not be looking down the barrel of a dystopian dawn.

So my question is whether anyone is aware of methods, procedures, techniques, etc. for planning around such a paradox.


If you look at early RFCs, you will see that if they mention security considerations at all, it's often just to mention that they haven't been addressed at all.

Specifically, the literal string "Security issues are not discussed in this memo." is found 568 times in the first 3000 RFCs (6 times in RFCs 3000 to 5887).

When researchers were inventing the internet, they just put security considerations aside. In a way, security was enforced at the boundary, by universities controlling which of their teachers and students could use it.

When the internet became a public network, where anybody can send packets, security considerations of course became a priority, but the protocols weren't designed for security. As with IPv6, we'd need to design a new set of protocols for this public internet, taking security considerations into account as a priority.

But given the speed with which IPv6 is being adopted, you can guess the readiness with which a new set of secure protocols would be adopted (you'd also have to be able to trust them - that no NSA or other backdoor is hardwired into those new protocols).

In short, this is not a simple situation and there is no simple solution.


The implementation detail of using an environment variable whose value starts with "() {" and which may contain further commands after the function definition is not documented, but could still be considered a feature.

If this is considered an "implementation detail", then I'm even more convinced that the whole idea of hiding implementation details, and thus these surprising (mis-)features, is fundamentally flawed.

But on the other hand, it is free software and not difficult to check the source to see as the nose in the middle of the face, what is done.

There is absolutely nothing in the official bash manual that mentions the special behaviour of environment variables with values starting with '() {', not even in the "differences from POSIX/Bourne Shell" list, so the natural expectation is that any sequence of bytes not containing the 0 byte (since this is a C interface) can be put into the contents of an environment variable. On the other hand, it does have an extensive list of reserved variable names which have special meaning.

To quote the POSIX spec on environment variables (emphasis mine) - http://pubs.opengroup.org/onlinepubs/009695399/basedefs/xbd_... :

The values that the environment variables may be assigned are NOT RESTRICTED except that they are considered to end with a null byte and the total space used to store the environment and the arguments to the process is limited to {ARG_MAX} bytes.

Thus the reasonable expectation is that Bash behaves according to the POSIX spec; it's even mentioned in http://www.gnu.org/software/bash/manual/bash.html#Major-Diff... that "Bash is POSIX-conformant."

The fact that the function used to evaluate imported function definitions was named parse_and_execute(), and is basically the same function that executes regular commands at the prompt, was what stood out to me the most upon hearing of this behaviour - although in retrospect, it wasn't all that surprising.

This is a bug that, by any other name, would be just as disturbing.


It took a lot of discussions with pjb on IRC for me to decide he is not actually a troll. He is just very literal. The part where assigning a particular value to any environment variable causes arbitrary code execution is a bug.

The part where you can define functions by setting environment variables is a feature. It would be far better if the variable names needed a prefix (e.g. BASH_FN_foo='() {...}'), but even without that change, fixing this bug still preserves the property that allowing arbitrary data into the values of a whitelist of environment variables is safe.

In general, allowing modifications to a subset of the environment (and by this I mean the entire system environment, not environment variables) needs to be safe. Allowing programs to upload arbitrary data to /tmp/uploads is safe, allowing programs to upload to e.g. ~/.profile is clearly not safe.

In PJB's world it is not possible to set any environment variable to any value without reading the source code of every program that might possibly be called by any of your children. This is clearly not tenable.


If an environment variable is like a file, this bug is like automatically executing code stored anywhere in the filesystem whenever a file happens to contain a magic number. Using a prefix (if documented) is like looking in a certain directory. It should have been obvious that scanning ALL environment variables is a bad idea, if not for security then for correctness, because interpreting data belonging to another program is likely to cause unpredictable behavior.


agreed


So an ENV is a dict of string => string; with this feature we have a dict that can point to functions. Basically it is an object. It could even have been used for passing structured data with its own functional compiler, or objects with stateless (lambda) functions/methods usable for parallel computing.

Oh fuck, this feature is a wonderful feature, in a controlled environment, for passing objects/code over a simple octet stream, with a safe computing paradigm.

We could have done RPC easily with xinetd and shell scripts. With PAM we could even have been able to use Kerberos to control the security...

I could have done lots of things... I still can...

Bash, I still hate you for not documenting this, and even more YOU, Advanced Bash-Scripting Guide, for being so awesome and missing that. ABS, you failed me. http://www.tldp.org/LDP/abs/html/functions.html


I agree with this. Whenever something is interacting directly with a shell (or any other program that can dynamically execute passed-in code, for that matter), it should sanitize the input so as to be certain nothing gets executed later, unless it wants something to be executed later.

Whether that be Apache (if it's passing data or commands containing input from the external environment directly to a shell) or a CGI script it is calling, whatever happens to be interacting directly with the shell should be sanitizing its inputs.


And what sanitization should Apache be doing to all of the environment variables? A blacklist against "() {"? That syntax is specific to bash, bash's support for it is undocumented, and it can be in any environment variable. It's more than a bit arrogant of bash to have an undocumented claim on all environment variables.


I think you cannot really say who owns the bug in this case, from this point of view. The problem is that in order to blame one component (e.g. Apache) over another like bash, you would have to have a "rigorous" spec of the entire stack -- from network drivers to user-facing programs -- in order to say either that a component is doing something unspecified or that it is flat-out wrong. This may be possible, but it also goes somewhat against the UNIX philosophy with which all of this was built in the first place.


> I would argue that the bash security concern is not a bug. It is clearly a feature. Admittedly, a misguided and misimplemented feature, but still a feature .... The problem is that it was designed 25 years ago. Apache didn't exist yet for five years!

The linked article's premise requires that the bug/feature was present when Bash was first written, and is not the result of more recent changes made at a time when the risks were obvious. I can't show this, but I doubt it.


Yes, this part of the code is basically unchanged in all versions from the oldest I could get (1.14.7).

What made me look for it was a suspicion that it was some backdoor added more recently. But apparently no: it's a feature that has always existed in bash (granted, an ill-advised and ill-implemented undocumented feature, but still).


False premise: "Unintentional failure is not a failure." Lack of intention behind a failure certainly weakens the ethical culpability of the action, but it doesn't weaken the technical or pragmatic severity of the failure. What's the point in saying the bash failure isn't a bug? It's an unwanted behavior. What else is a bug?


My understanding was that using this "feature" would sometimes lead to a crash of the executable. That seems to me to be clearly just a bug.

Even absent that, and contrary to the article, I think I'd still call this a bug in bash - and the crash (presuming I'm recalling correctly) makes the article's position absurd rather than just (IMO) incorrect.


I don't understand this situation at all. Are people using bash as a secure jail and expecting it not to be able to arbitrarily execute code and access the user's complete environment?

I think the maintainer is right. Bash was never marketed as a secure sandbox and anyone who uses it as such is taking that risk on themselves.


Even though bash is open source, the authors of dhcp or apache can't spare much time to read bash's source code. It's not a bash problem, and not a dhcp problem: it's just that all code has some hidden bugs.


I agree with the OP. Bash is a scripting language. If you use it, you must be aware that you can damage the system. If you allow your script to get input from untrusted sources, it's your job to sanitize it.


> If you allow your script to get input from the untrusted sources

This wasn't the problem. The problem wasn't with scripts getting untrusted input. The problem was with bash getting untrusted input -- input that isn't supposed to be evalled in the first place.


Undocumented feature = bug.


Using Nginx, how could one filter/drop/reject HTTP requests where an HTTP header starts with "() {"?

I know patching `bash` is the most important thing (and I have). But it would give some extra comfort.
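One partial option, sketched as nginx config (stock nginx can only test headers it names explicitly, so the header list here is illustrative; something like the Lua module would be needed to scan all headers):

    # inside a server or location block
    if ($http_user_agent ~ "\(\)\s*\{") { return 403; }
    if ($http_cookie ~ "\(\)\s*\{") { return 403; }
    if ($http_referer ~ "\(\)\s*\{") { return 403; }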


> The problem is that it was designed 25 years ago.

It's just my imagination, but I keep picturing someone who did point out the possible security risks but was then dismissed as being too paranoid.


"I would argue that the bash security concern is not a bug. It is clearly a feature. (...)" The default excuse of all programmers. :)


I don't agree. bash is meant to be an sh replacement, compatible with sh. So any correct use of sh has to give the exact same result under bash; bash should only extend sh in ways where the bash script would be an error under sh.

bash might not be intended to be that way, but it is assumed by developers to be that way, which means it has to be that way (or it can never be used as the system shell).


Of course it's not a bug, it's a security vulnerability. Where he's wrong is that having an undocumented, unnamed, unknown function with eval abilities is plain idiotic; who cares if it's a bug?


All the programs at risk here should really be using rbash.


The bug may be bash's, but it's the entire chain that makes it a serious one: http://senko.net/shellshock


So basically "this code has no bugs, just undocumented features".



