An attacker can execute any Ruby code they want, including system("unix command"). This affects every Rails version from the last six years. I've written POCs for Rails 3.x and Rails 2.x on Ruby 1.9.3, Ruby 1.9.2, and Ruby 1.8.7, and there is no reason to believe this wouldn't work on any Ruby/Rails combination since the bug was introduced. The exploit does not depend on code the user has written and will work against a new Rails application without any controllers.
I don't speak Ruby. Can you or someone else be more precise about where that introduces the vulnerability? (Surely it isn't that YAML::load(content) can run arbitrary shell code?)
Calling YAML::load on attacker-controlled content in a Ruby app of any complexity is very bad news. As Ben and 'judofyr said: this is remote code execution.
Is this because YAML doesn't whitelist the classes for the objects that may be instantiated? They are allocated and then instance_variable_set'd, so I'd be very interested to learn how this poses a risk.
If I implied that I doubted them, then I failed to communicate my point effectively -- I am very curious about how to turn a class allocate + instance_variable_set into remote code exec. I see how you can create the arel objects for sqli, but not arbitrary ruby.
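For anyone reading along later: the short answer is that YAML tags like !ruby/object:SomeClass make the loader allocate an instance of whatever class is named and populate its instance variables from attacker-controlled data, and some classes do dangerous things with attacker-chosen ivars. A minimal sketch of why unrestricted loading is the core problem, using Psych's safe_load (added in later Psych releases, so an anachronism for this thread):

```ruby
require 'yaml'

# A tag like !ruby/object:SomeClass makes the YAML loader allocate an
# instance of SomeClass and set its instance variables from the
# document -- with no whitelist, any class in the process is fair game.
payload = "--- !ruby/object:OpenStruct {}"

# Psych.safe_load refuses such tags unless the class is explicitly
# permitted, instead of allocating the object:
begin
  Psych.safe_load(payload)
rescue Psych::DisallowedClass => e
  puts "rejected: #{e.message}"
end

# Plain scalars, arrays, and hashes still load fine:
p Psych.safe_load("--- {a: 1, b: [2, 3]}")
```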
If you wait a couple of weeks you are much more likely to get an answer. Since all rails apps were vulnerable, most people who know how to execute arbitrary code are keeping silent for now.
In much the same way that letting attackers control the parameters to fork() would be a bad idea for a C program or letting attackers control the parameters to Runtime.exec() would be a bad idea for a Java program.
This is a Rails vulnerability, not a Ruby vulnerability.
I think it's better not to discuss this openly for a few days. The exploit isn't obvious (as you've noticed), so hopefully users will be able to upgrade before the script kiddies discover this.
As there have been many exploits / issues recently realized in parameter parsing, why isn't there more of a focus on security here? Specifically, this is where users/hackers can put ANY DARN THING THEY WANT and your server has to deal with it.
As a simple solution, one could pass a signed auth-hash of the fields generated by form_for, and the server could re-hash the fields submitted to ensure the form data you asked for is what you get (this solves the primary issue with attr_accessible). I feel getting this right is crucial to Rails' future.
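A minimal plain-Ruby sketch of that signed-fields idea (the helper names are hypothetical, not a real Rails API; a real implementation would also bind the signature to the form/session and use a constant-time comparison):

```ruby
require 'openssl'

SECRET = 'app-secret-key' # stand-in for the app's real secret token

# Hypothetical helpers: form_for would embed a signature over the
# sorted field names, and the server would recompute it to check that
# the submitted params contain exactly the fields it rendered.
def sign_fields(fields)
  OpenSSL::HMAC.hexdigest(OpenSSL::Digest.new('SHA256'), SECRET, fields.sort.join(','))
end

def verify_fields(submitted_fields, signature)
  # (a real implementation would use a constant-time comparison here)
  sign_fields(submitted_fields) == signature
end

sig = sign_fields(%w[name email])
p verify_fields(%w[email name], sig)          # same fields, any order: true
p verify_fields(%w[name email is_admin], sig) # smuggled extra field: false
```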
> why isn't there more of a focus on security here?
More compared to what, exactly? This vulnerability was responded to pretty damn quickly after it was reported, given that almost nobody is even paid to work on Rails. If you saw Aaron tweeting about "working over the weekend" a few days ago, well, now you know.
That said, you mention attr_accessible in your post: that's gone as of the next release of Rails. Basically everyone agrees that strong_parameters is a better approach, which is why a gem was released that works with 3.2: you can use that better approach now.
As was mentioned below, security is a process, not a result. Nobody wants these kinds of issues to happen, but they will happen, and do to every single framework that's used by lots of people.
Part of being a good developer is understanding your framework and making sure that your app has the level of security you need. The framework cannot protect you from everything.
The team responded really quickly. Aaron is a really talented developer and a nice guy. We should all be thanking him. If you don't feel that enough emphasis is being put on security in Rails, beat away at it, find holes, and then get involved in developing fixes for them. That is the beauty of open source.
Matz developed Ruby for adults. You get a lot of power, but also a lot of rope to hang yourself with. This is why testing in rails applications is so important.
Here are some good and experienced Rails developers who apparently had no idea that Rails would automagically suck in XML and YAML and turn it into symbols instead of strings.
Clearly they aren't the only ones who didn't "understand the framework" or we wouldn't have gone a week with the impression that CVE-2012-5664 was only exploitable in specific circumstances.
I made the "understand the framework" statement in response to some mistakes I have seen introduced by sloppy devs on some Rails projects I have worked on... That obviously wasn't the case here.
If security is important you should periodically try to hack the system and consider incorporating automated penetration testing software against your application. I haven't had a need to go this far with my own recent apps so I cannot speak authoritatively on this, but I think there are some tools that can help with doing penetration testing etc..
In a past life I had to work on some pretty secure systems and did some crazy testing on things. I saw a lot of good developers introduce pretty big security holes....my favorite was when our /etc/password was served up by an application...and this was a well known team of craftsman that did this on a fairly large project. None of these have been limited to Ruby projects....they have included Java, C etc..
In my view security is a moving target. If the cost of an attack warrants the effort to protect against it, then you do. If not, you don't. Even if the developers of the framework concentrate on security, there will always be ways to get around it. Safes are rated on the amount of time it takes to break into them; if someone wants in badly enough, they will get in. The same is true with software.
Should I be more aware of the security of my apps? Probably. Should we as a community be better about it? Yup.
But unless I've taken the time to really dig into it, offer constructive feedback and be willing to jump in to fix it I have no business criticizing the state of things.
I'm just saying Ruby is good at enabling DWIM and Rails has taken it and run with it to the point of magical thinking.
It's very hard for an app developer to test for vulnerabilities such as this one which seem to involve a combination of contributory factors. When magical stuff is constantly willing to help the app developer out in the background, it's very difficult to get a handle on what our true attack surface is.
Rails has definitely started to get complex, which is why many have chosen to go towards the simplicity of Sinatra, Rack etc.. Any time a framework gets complex this happens.
Look at big data and AI. There are loads of permutations. How do you prevent stuff from just happening? How do you make sure that things are correct etc.? Trust me I would love to do TDD on an AI / big data analysis project. But the reality is that no-one has figured out how to reliably test things....so TDD is not the right tool at this point. But there is also the potential to introduce vulnerabilities. That doesn't mean I am not going to use AI or do big data.
It's always a balance between tighter security/less complexity, or more complexity and less security etc...obviously there are other factors as well, but my point is to choose the right tool for the job.....sometimes it is Rails, and sometimes not......
And beyond that give the Rails team kudos for a taking care of things like this when they do find them.
Let us all join in giving the Rails security team many well-deserved thanks!
We on the sidelines primarily criticise as a stage of our shock and grief. That the patches keep flowing is unsettling in the short term but reassuring in the long term.
Thanks again everyone who dropped everything and worked so hard to get this set of fixes out!
I really hope this isn't the sentiment of the RoR community. There has to be a place for critique when it's warranted. At this scale it's not a joke any more: you ask for what you need, and otherwise you do less.
The more magic, unexpected behavior you have when parsing untrusted input, the more likely you are to have security holes.
Instead of building up some complex object based on untrusted input, the author of the application should specify the values and types expected, and the parser should parse those and nothing more. This would lead to much simpler code paths, as the user never has an object that has unexpected keys, values, behaviors, etc. Don't parse the object using complex, general purpose code, then hand the user an object that they have to treat specially; if their form only expects 5 values of given types, then parse only those values and those types.
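A toy sketch of that declare-what-you-expect approach (a hypothetical helper, not the strong_parameters API): only whitelisted keys survive, and values are coerced to the declared types.

```ruby
# Toy whitelist parser (hypothetical, for illustration only): keys not
# named in the schema are dropped before the app ever sees them, and
# values are coerced to the declared types rather than deserialized
# into arbitrary objects.
def extract_params(raw, schema)
  schema.each_with_object({}) do |(key, type), out|
    next unless raw.key?(key)
    out[key] = case type
               when :string  then String(raw[key])
               when :integer then Integer(raw[key])
               end
  end
end

raw = { "name" => "alice", "age" => "30", "is_admin" => "true" }
p extract_params(raw, "name" => :string, "age" => :integer)
# "is_admin" never makes it into the app: {"name"=>"alice", "age"=>30}
```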
The problem is, all of this kind of magic is at the very heart of what Rails is. I don't know if you could eliminate it all and still have Rails be Rails.
There are specific things that could be said about the bug in question, like not being secure by default, but this doesn't fix the underlying problem. The development team should recognize that security is an important part of the project and act accordingly.
Whether money is involved or not, for a framework, developers should either be committed to their products or not.
If this was a different sort of product then that limitation of not getting paid might carry some weight, but when you encourage people to develop on top of your platform and when it is shown to have egregious flaws there is no excuse. You either get to work fixing them or you tell the world to stop using your framework because it's broken and not going to be fixed.
Fortunately, the Rails devs are seriously committed to their product, which is why these fixes came out so quickly.
Edit: The Rails team is certainly deserving of many thanks, but they don't get a pass on problems just 'cause they work for free. Similarly, if someone gives me a free car I will thank them, but if that car starts a fire in the garage and burns down my house I will curse them too.
Wow, that actually gives me more confidence because we have people doing it because they feel passionate about it. Thanks for your hard work....
I've been busy with some other small OS projects (and stuff that pays the bills) but personally feel like I need to try to carve out some time this year to do something to contribute back to Rails.....
It was opinionated to use attr_accessible until a better approach came along. Beginning in 4.0 it will be opinionated to use strong_parameters, but they can't just take attr_accessible away because a lot of people are upgrading apps.
There are always going to be security holes in anything we make. We can be a bank and focus two feet ahead on making sure everything is as secure as possible, or stay aware of security (and not do anything stupid) while moving fast enough that any flaws are irrelevant/fixed when exposed.
It also highly depends on how much risk you're willing to accept. For the average rails app, absolute security is not as important as moving fast. Be an adult and make adult decisions about your tools and processes to suit your circumstances.
>why isn't there more of a focus on security here?
Because this is Ruby we're talking about. A "fun" language that has 100,000 ways to do the same thing, so newbs find it fun and easy. You can almost guess how the language works and almost always be right. That's cool, great for learning, makes you feel like a superstar when you're just getting started with programming... but it's really not such a good thing when it comes to maintainability and security.
This breeds a community of people who aren't very mindful of anything but having fun coding. (Not always a bad thing, but certainly not conducive to good security.)
The second you talk about "Multi-platform" or "security" to your average ruby user, their eyes glaze over. They just want to make cool stuff on their Mac, not worry about Security and best practices.
You could say the same thing about a lot of interpreted languages, but Ruby is especially bad.
You misinterpreted my comment. I agree, the language is largely inconsequential. It's the culture around the language that is the problem. Ruby's is particularly bad. All I'm saying is the "1000 ways to do the same thing" nature of Ruby has the unfortunate consequence of attracting newbs and making 'best practices' hard to nail down.
Other languages tend to have more support in corporate/educational areas, tend to have more money backing them, tend to have more 'best practices', tend to have more rigorous testing and review. Ruby is the hipster hacker's language.... and the quality of code shows this. (in the core of the language, and by the individuals who use it)
Other languages absolutely have these problems too, but it has been my experience that Ruby is particularly bad. You are welcome to disagree with that part.
But IMO it makes sense. The quick-and-dirty 'million ways to do the same thing' nature of Ruby breeds this kind of culture. I certainly didn't mean to single out interpreted languages though, or imply other languages don't have security problems, or unique issues with their cultures.
PHP is probably about just as bad. I'd put Ruby and PHP high up on the 'fun to program in' list, and low on the 'secure, quality languages' list.
I haven't seen many seasoned developers claiming PHP is a 'fun' language to program in. It was my first web development language and it was fun back then. But the honeymoon period gets over quickly once you realize the limitations of the language and see what other languages like Ruby have to offer.
The distinction is rather academic with ruby and rails. 90% of the answers to "how do I do this in ruby" on forums are actually "how to do this with rails" answers, but they never mention that little detail, because who'd ever write a ruby program without rails, right? Trying to find straight ruby answers is annoying as hell.
I've had frustrations with certain "Ruby" libraries requiring methods like "blank?" (IIRC) that are provided only by Rails, not Ruby. It made developing on a machine that had Ruby but not Rails rather annoying.
> As a simple solution, one could pass a signed auth-hash of the fields generated by form_for, and the server could re-hash the fields submitted to ensure the form data you asked for is what you get (this solves the primary issue with attr_accessible).
It does not solve the issue of JavaScript-generated forms.
Sure, but as with all things, it could be turned off. The more I think about it, the more I like this idea. I may as well try it out and mock up a pull request.
If you turn it off then you're back to square one security-wise. Apps that have neither APIs nor JS are an increasingly small share these days. Also consider what is possible to sign. In most cases there will be some non-enumerable data in the field, leaving you with only being able to verify the field names, but there could be nested data and it seems like a 50/50 shot that whatever unforeseen vulnerability would not need to change the top-level params anyway. I don't think this would afford much of a security guarantee.
The only way to fix this with "more of a focus on security" would have been not to do clever things with parameters in the first place, but the clever things provide a lot of value, so the next best thing is security auditing and being on top of patching any vulnerabilities.
That's an interesting suggestion but it has nothing to do with the problem at hand.
The problem here was that a feature somewhat haphazardly added to Rails for ActiveResource was turned on by default, enabling features that should only be active for interactions with trusted clients (i.e. authenticated services running in your own infrastructure).
Your suggestion is not without merit, but this is a case of having to learn to walk before you learn to run. There are clearly much more egregious parameter parsing vulnerabilities which need to be solved before the things you're describing would ever make it into rails-core.
Hey, everyone! Leave a note (and, if you want, a hug picture) to say thanks to Aaron for his hard work. This isn't an easy responsibility.
http://www.hugboard.com/e5f69b274a/contribute
Getting continuous errors on deploy during the bundle stage like so:
/usr/lib/ruby/1.9.1/rubygems/remote_fetcher.rb:215:in `fetch_http': bad response Not Found 404 (http://bb-m.rubygems.org/quick/Marshal.4.8/activesupport-3.2.11.gemspec.rz)
This is a mirror, so that's why it's probably out of date...can you try just production.cf.rubygems.org ? I have heard zero reports of downtime or other issues. :/
Hi - thanks for responding. My deploy's gone through now, but I'm curious how I ended up contacting a mirror - I was running the default bundler capistrano task, so (approx):
$ cd <deploy-path> && bundle install --gemfile Gemfile --path <shared-path>/bundle --without development test
I haven't deliberately pointed anything at a mirror - do you have any idea how might my install have ended up doing so? This is with rubygems 1.8.11, bundler 1.0.21.
For the time being I've rolled back the gem changes and applied the suggested hotfix (removing XML from the default params parsers). I've been trying for a solid hour to get a deploy out with the updated rails version, and it's just not having any of it.
Edit: finally got it out. This deploy model is completely screwed, though. It just shouldn't be normal to have a service like rubygems.org in the daily deploy loop. This is absolutely not a knock on the fantastic volunteers that run it - they simply shouldn't be dealing with this sort of load spike.
Agreed. Watching this develop from behind the scenes (at a distance), I'm not sure any core team could have done better managing the deployment and announcement in a more timely and careful fashion. Well played.
Could you confirm that upgrading to the fixed Rails versions means I don't have to add the line changing `ActionDispatch::ParamsParser::DEFAULT_PARSERS`? Or do I have to do both?
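For reference, the workarounds from the advisory (intended for apps that cannot upgrade immediately) look roughly like the following in an initializer -- quoted from memory, so verify against the advisory text itself:

```ruby
# If you don't need XML params at all, remove the parser entirely:
ActionDispatch::ParamsParser::DEFAULT_PARSERS.delete(Mime::XML)

# Or, if you do need XML params, strip the dangerous type conversions:
ActiveSupport::XmlMini::PARSING.delete("symbol")
ActiveSupport::XmlMini::PARSING.delete("yaml")
```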
This vulnerability is also present in other Ruby libraries. I would advise anyone to do bundle install --deployment in their development environment, then grep -r "YAML::load" and grep -r "YAML.load" in the vendor/bundle directory. If you have YAML::load(user_controlled_value) or YAML.load(user_controlled_value), then you might be vulnerable to remote code execution. There are some other Ruby libraries that are vulnerable to this attack, but I don't want to post about them until their authors have fixed them.
I don't use Rails, and read up on the vulnerabilities. Here's a quick summary:
1. This class of problems is not unique to Ruby.
2. Similar problems have been identified in Struts, and python's pickle.
3. Specifically in this case, YAML.load() can deserialize unintended object types. In the case of Struts the problem was the expression library used can also deserialize unintended object types (like File), plus setting properties on these types can have side effects (such as dropping files into your system).
4. I took a look at Microsoft's WCF. The DataContractSerializer states that it only is allowed to load types that are specified by a contract. http://msdn.microsoft.com/en-us/library/vstudio/ms733135(v=v... This should be the gold standard. In addition, it warns that even loading XML documents can be dangerous if we then load remote DTDs for validation.
1. All deserializers should be viewed with suspicion.
2. A deserializer which does not implement a whitelist of types that it can deserialize to is not suited for handling arbitrary data.
3. For example, it is capable of creating untainted/trusted objects in application servers, which some time later may be used for XSS or execution in SQL. In the Struts case, the standard Java libraries have constructors and methods such that deserializing is enough to result in an arbitrary file being dropped on the remote file system.
Very similar vulnerabilities have definitely been discovered in other platforms.
This vulnerability is most similar to the object loader vulnerabilities found in Spring a few years back. It is the kind of vulnerability that is occasionally found in Java web stacks.
It is a simpler vulnerability. This is a double edged sword. On the one hand, it is easier to fix (and to be sure we've fixed) than the objectloader-type stuff. On the other hand, it's so easy to reason about and work with that the exploit is straightforward. It was very difficult to find ways to talk about the general pattern of weakness in this code without immediately disclosing the exploit.
The vulnerability is similar in spirit to Python's Pickle, which is also unsafe for untrusted data. A difference between Rails and Django, though: while specific Django apps have had Pickle exposures, I'm not sure Django itself ever did.
PHP has vulnerabilities that are similar in impact to this vulnerability. But there's a big difference between this flaw (and the Python issues) and PHP: PHP grappled for years and years with a publicly known bug class (remote file inclusion) that coughed up code execution. It's not impossible that more RCE flaws will be found in Rails, but it's unlikely to become a class of bug that every Rails developer will need to adopt best practices to stop.
No mainstream web platform has ever survived long deployment in popular applications without some horrible finding. Nobody's hands are clean. It is very difficult to get security right in every single component that a full-featured web framework needs to offer. It only takes one mistake.
You are dead right about deserializers in general.
This is bad, bad, bad, bad! SQL injections, remote code execution, DoS. Pretty much everything is possible with this exploit. You don't even need the secret key which was required in the previous vulnerability.
Posting the gory details this early on is not a nice thing to do. It's probably best to hold off for a while until everyone has had a reasonable chance to upgrade.
At this stage, with the vulnerability publicly and widely reported, demonstrating an attack vector that involves seemingly harmless code is perfectly acceptable. Not everyone understands the magic involved or would be able to spot exploitable code.
A harmless payload can be absolutely trivially turned into a malicious payload.
I intend to share some details about this later on, but not so soon after the vulnerability is announced. There has to be a reasonable amount of time allowed for people to patch their servers.
If anybody thinks we're solving vulnerability full disclosure once and for all on an HN thread about a Rails vulnerability, that person is pretty naive.
We've now officially captured both sides of the argument and can safely move on.
If you want to get into silly analogies, compare the US to Australia. Tight firearms restrictions in AU makes it significantly harder for criminals to obtain guns.
Hmm, looking around I am thinking that if you don't want random object instantiation, this monkey-patch:
module YAML; @@tagged_classes.delete('tag:ruby.yaml.org,2002:object'); end
makes user-supplied YAML a lot less dangerous. I am going to poke this into a production application and see if anything breaks - it really really shouldn't <g>
It breaks YAML deserialization in other places. You could enable and disable it on demand in the XML parser, but a more sensible solution is just to get YAML the hell out of the XML processor. Trying to make YAML safer is probably not the right approach.
It's meant to partially break YAML deserialization :) My apps do care about YAML, so I've an interest in cleaning this up. Is there some unintended consequence? You can still instantiate some Ruby classes (Regexp, Symbol etc.) in the YAML loader, or you can go through @@tagged_classes and pick out any other types you don't want.
But by taking out Object, YAML is only left with a whitelist of types that are safe, anything else will get turned into a YAML::DomainType.
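For what it's worth, later Psych releases formalized exactly this whitelist idea as safe_load with a permitted_classes option; a sketch against modern Psych (an assumption here -- this API did not exist at the time of this thread):

```ruby
require 'yaml'

# Only explicitly permitted classes may be instantiated; everything
# else raises Psych::DisallowedClass instead of allocating an object.
p Psych.safe_load("--- :admin", permitted_classes: [Symbol])

begin
  Psych.safe_load("--- !ruby/regexp /a+/")
rescue Psych::DisallowedClass
  puts "Regexp rejected unless whitelisted"
end

# Opting particular classes back in, as suggested above:
p Psych.safe_load("--- !ruby/regexp /a+/", permitted_classes: [Regexp])
```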
What I mean by that is, this workaround breaks application code that depends on other portions of Rails that use XmlMini. In exchange, it allows you to potentially expose YAML to HTTP requests, which is still an extremely bad idea.
I don't see why YAML is a dangerous serialization format - the other type deserializations in the code seem sane and limited enough. (I wouldn't use YAML over e.g. JSON these days but I'm fixing up quite old projects)
For all its reputation as overly strict and pedantic, one of the good things Java did was make sure that if something looked like a string it was going to be a string...
I think we're all saying the same thing. But this particular vulnerability described in the OP allows SQL injection via a different means than the one I had linked to (from 5 days ago). But yes, it's all SQL injection (and more, in this case).
It's (apparently) a remote code execution bug. You can also use it for SQL injection, in the sense of simply executing arbitrary SQL. There's no need to bootstrap or trampoline; the doors are swinging open already.
> There are multiple weaknesses in the parameter parsing code for Ruby on Rails which allows attackers to bypass authentication systems, inject arbitrary SQL, inject and execute arbitrary code, or perform a DoS attack on a Rails application. This vulnerability has been assigned the CVE identifier CVE-2013-0156.
> Due to the critical nature of this vulnerability, and the fact that portions
> of it have been disclosed publicly, all users running an affected release
> should either upgrade or use one of the work arounds *immediately*.
I'm the author of the "Rails SQL injection vulnerability: here are the facts" blog post last week. This vulnerability is a different and unrelated one, and is very serious. Upgrade immediately.
But that's not how the security community works. Once this was posted, literally every security team jumped on writing a PoC exploit and distributing it to customers. There will be point-and-click modules in Metasploit, IMPACT, and CANVAS by the end of the week at the latest.
This is the key bit for me: Rubygems is literally straining with everyone being frantic to upgrade. Giving it a few days means that everyone can patch their apps.
I don't believe that everyone will listen to little old me, of course, but that doesn't mean I can't tell them I don't think it's a great thing to do.
You completely misunderstand my point. I don't think that this is the only person who knows this; that'd be idiotic. They are, however, the only person who posted it in this thread, giving it more publicity. I don't think that extra publicity is appropriate.
Even now you still think it's useful to hide information from the "general public" and avoid "extra publicity"?!
The cat is out of the bag. You can no longer negotiate with this reality.
Publicly disclosing a bug is like birthing a baby. Once it's sticking halfway out, just get it all the way out because it's counterproductive to try to hold parts of it back in.
The post does not include any directly usable exploit code and does not describe command-execution vectors. Furthermore, information about the bug was published on Twitter almost a week ago. But I probably will not convince you of the advantages of Full Disclosure :)
No, I don't see what this adds, I only see how this can cause harm. Anyone who wants to learn more can wait until everyone's had a chance to patch their apps; they can also figure it out themselves.
All this does is allow people who want to do harm to not have to figure it out themselves.
This hits one of my basic complaints about Rails: it activates too many features _by default_. Even if your app does not parse XML params, the parser is active. I know it's convenient, but hey - is this worth the price of exposing _everyone_?
That's not a fair critique here. The problem isn't that Rails exposes XML by default. Everyone knew it did, and just processing XML isn't the issue.
The problem is that the XML code used in the untrusted request path was also used by code that handled trusted messages elsewhere, and those trusted messages had requirements that weren't appropriate for request-path messages.
Because JSON is so much more popular than XML in Rails apps now, a reasonable workaround for this problem is to just turn off XML if you're not using it. More importantly, it's a workaround that (a) does more to reduce the attack surface given how XmlMini works, and (b) was a workaround that disclosed less of the vulnerability last week. But don't let that confuse you about the nature of this bug.
Yes, it is a valid critique. You are right that just deactivating XML parsing is a reasonable workaround - and in my opinion so reasonable that it should never be activated by default in the first place.
A lot of people get bitten by a component they never consciously used and activated in the first place. While the second part is true for almost every part of a framework, the first one is problematic. ("XML? Why do I have a vulnerability through XML and YAML in a JSON-only app?")
That should be pretty easy to scale up. The past couple of weeks have established a decent precedent. Just have an endpoint called vuln_present or similar.
Well, not quite (when the messages are signed and the key is not stored on rails.org). However, as was pointed out, said attacker could indeed collect the ip-addresses of the polling servers - hence the idea to use twitter for the broadcast (a few comments down).
Of course Twitter is not exactly the most reliable platform but the likelihood of a twitter-downtime to coincide with a critical vulnerability seems relatively low.
Just playing devil's advocate here: a truly evil attacker could use the access logs from all the apps phoning home to build a list of vulnerable targets! :)
Well, you are right, the idea wasn't thought out very well. I was in a bit of a bad mood during patching up various rails deployments around here...
However, perhaps they could just promise to post a signed message, in a specified format, on a dedicated twitter account, if such a thing happens again. This would seem like a relatively low-tech approach, about adequate for such a rare event (just keep that secret key secret!).
The community can then roll their own gems to watch said twitter-account and act according to any user preference. Perhaps one of these gems would even make it into rails-core after sufficient review.
Obviously one can always argue whether such a rare case deserves dedicated infrastructure. But on the other hand we have yet to see how many rails deployments will be bitten by this incident in the long term. It's not uncommon to see years of exploitation for a vulnerability in a popular piece of server software.
Your entire thread of comments here make me want to gouge my eyes out. A signed message on twitter? Low tech? What in the hell are you talking about?
I'm going to be the asshole here, because it is vitally important that no one responsible for security ever listen to what you're saying. You're advocating some Orwellian kill-switch mechanism based on unspecified "signed messages" over a third-party social messaging service (limited to 140 chars, no less), and throwing in meaningless phrases like "low tech". What about this problem leads you to believe we all need something low tech?
I am not qualified to design such a system. You are negatively qualified to even comment on such a system. Please stop.
Since you apparently neither understand what was discussed (an optional rather than an "Orwellian" kill-switch), nor the implementation options (a signed message via any broadcast mechanism), nor why using twitter as the transport would be feasible and "low-tech" versus most alternatives, you should perhaps refrain from commenting on this thread at all. - And especially not in that tone.
I'm guessing that might not work great, considering that last time I checked almost no one was using the Debian packages due to antipathy between the Debian maintainers and the rubygems folks. Anyone know of any progress on that front?
I'm saying for Debian packages in general; I don't think anyone uses the Ruby packages in Debian/Ubuntu. It's a bit sad that people got in such a tizzy over it, because the Ruby people could learn a lot from Debian about packaging stuff and managing it over time.
Who'd have thought it? Hugely dynamic language turns out to be difficult to audit or analyse for security issues.
It was never about Java(C, C++) vs. Ruby despite what fanboys on either side made out. It was about conservative vs. devil-may-care. All that "convenience" and "it's so clean" came at the price of a whole load of code executed behind the scenes. You didn't write it, and the Gods of TDD preached that you don't need to test it because that's Someone Else's Problem. Auditing it is nearly impossible not least because it's a moving target, so you either get stuck in a backwater of obsolete versions that once got audited and approved or you live on the bleeding edge constantly (and DevOps hate you forever).
FWIW we (large satnav company) prototyped the last major service in Rails (actually a one-man hack proof of concept) then implemented it in production in Scala (and that was a bridge too far for a lot of people) because no-one wanted to run Rails in production.
I'm happy you work in the kind of place that audits all of its software, though. I'm sure you've all read through all of Hibernate, Spring and not to mention all the .NET framework code.
That is a straw man. Nobody claims there are not issues in other software.
The claim here is that people in dynamic languages tend to misuse that and write all sorts of magic that are pure gold for 10-line snippets but open up a vast attack surface, like building completely arbitrary objects from string input.
> Could you show me how I could possibly create such a hole in a language like ocaml or haskell?
First you'd have to write the equivalent of Rails in Haskell (I'm not talking about an MVC framework, but something as large, complex, and featureful.)
No, I am asking how you could actually create this vulnerability in haskell at all. No framework required, just actively, intentionally trying to create this hole.
> The value of a parameter id was reflected to the HTTP response
The link you posted illustrates that it is unfortunate that Java supports reflection, and even more unfortunate that various "enterprise" software stacks abuse reflection in ever more creative ways. Stay away from Java reflection and/or use C/C++, and you'll avoid this kind of vulnerability.
This isn't about languages; what the language gives the language can take away. This is about diligence, responsiveness, and transparency.
Rails will always have zero-day security issues; I'd hazard that all web apps of any notable size will. What matters most to me is how the core team and community respond on those three criteria above.
Lately the Rails team has been performing exceptionally.
All software has bugs, and the subset of bugs which turn out to be security vulnerabilities is borderline asymptotic.
That's not to say all frameworks are created equally secure, but the differences have more to do with the culture around the project than any technical decisions (minus some very specific language-related issues).
Rails is a big, public project; with many years now of being used by a sizable number of people. There will be more security vulnerabilities discovered, hopefully they'll be addressed quickly and communicated well (which this one seems to be).
That's a long-winded way of saying to be cautious about picking some other framework just because it has fewer reported security vulnerabilities; that almost certainly has no bearing on whether it is actually more secure.
Security is a process; what matters is how people respond to new vulnerabilities. I'm naturally biased pro-Rails, but so far I don't feel uncomfortable with how it has been handled.
I can't comment on how on-the-ball the Rails security team is, but I can say it's really easy to update your apps.
It's also relative to your alternatives. It's way safer than not using a framework. Is it safer than Django? That's kind of unknowable; maybe, maybe not.
I've worked with other vendors. The rails security team is the best I've worked with. The major positives:
* Quick turnaround. I have another vendor where it takes up to 3 months to get stuff fixed. :(
* They give you a patch to review before releasing publicly. This is very important and gives researchers a chance to fix any problems with the patch. With another vendor their fix missed a really obvious attack vector and anyone who diffed the code would have been given a free zero day vulnerability. :(
As an average joe web dev I also found the security team very easy to work with when I discovered a vulnerability. In that case I worked directly with them to create the patch that was released as Rails 2.3.5. It was something like 48 hours from the time I discovered it to the release.
Please know that Rails is definitely very secure, and it has gone through many years of testing and review. However, no framework is immune. We should be grateful that the bug was found, patched, and notified, instead of being silently exploited by some black-hat who discovered it first.
The following is speculation, but keep in mind that this bug may have been found because the Rails team has been looking for security holes similar to the exploit that was found a few days ago.
I agree completely, it's just that this error seems so obvious and dangerous that it's odd it hasn't been caught for over 7 years.
ps: I love Tekpub! Are there any plans for some new Rails videos? I purchased the Rails 3 series but it's a bit outdated, given how fast Rails moves. Would purchase a series on Rails 4 day-one. :)
I'm in a similar boat. I've just begun with Rails in the last six months, and I've really been enjoying it. These vulnerabilities are unnerving, but I am comforted by the visibility of the announcements. The community seems to get the message out very quickly when an exploit is found/patched.
My entire Twitter feed is filled with Rails developers telling others to upgrade immediately, and HN has multiple new posts about it.
If you read the insinuator.net article posted elsewhere on this thread, you'll see that Java (Struts), PHP, and Python have all had their own remote-code-execution vulnerabilities over the years.
I hate to say it, but if you're running a public facing Rails application, it's near imperative that you know what Action Pack is and why it's significant.
Hmm, this may explain why the vulnerability patched in 3.2.10 was more dangerous than it seemed, eh?
The 3.2.10 announcement provided an example of `Model.find_by_id(params[:id])` as an exploit, but nobody could figure out how you could get a hash with a _symbol_ key into `params[:id]`, which is what it would take for that to be an exploit. So people were confused.
But the pre-3.2.11 exploit, apparently, possibly provides ways to do just that, eh?
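To make the connection concrete: the missing piece was a way to get a symbol-keyed hash into params, and YAML deserialization provides exactly that. A minimal sketch in plain Ruby (the hypothetical `:select` payload here is only illustrative; `unsafe_load` is used because modern Psych makes `YAML.load` safe by default, whereas the vulnerable parser used the fully permissive loader):

```ruby
require "yaml"

# A YAML mapping whose key starts with a colon deserializes to a Ruby
# Symbol key. Under the vulnerable XML/YAML parameter parsing, a payload
# like this could land inside params -- yielding exactly the symbol-keyed
# hash the 3.2.10 advisory's Model.find_by_id(params[:id]) example needed.
doc = "---\n:select: \"* FROM users\"\n"

# Psych 4 (Ruby 3.1+) makes YAML.load safe by default, so fall back to
# unsafe_load where available to reproduce the old permissive behaviour.
hash = YAML.respond_to?(:unsafe_load) ? YAML.unsafe_load(doc) : YAML.load(doc)

hash.keys.first       # a Symbol, not a String
```

So a request body the attacker fully controls could produce the symbol-keyed hash that was previously thought unreachable.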
That's how this vulnerability came to light. After finding out about the last vulnerability, there was a huge amount of interest in seeing if parameters could be exploited, leading to a number of people simultaneously discovering this flaw.
I heard through the grapevine that YC affiliated companies were tipped off to this exploit/patch before it was made public (really; a YC affiliate asked me today about the vuln before it was disclosed). Could anyone comment on that?
The existence of this vulnerability (without details) was disclosed publicly last week. So far as I know, nobody was told any details about the vulnerability itself; just that a patch was coming, and it was very important to apply it.
So I've applied the workaround, which is great, but how do I test that the workaround is indeed working?
I realize that providing an in-depth answer is tantamount to publishing an exploit how-to, but some reasonable way to privately test this would be very useful.
Maybe a "simple" URL tester hosted by a trusted Rails source (e.g. rubyonrails.org)? Ok, has the obvious issue of showing the world who they should target, but maybe you can riff on that theme?
Auditing and stuff you know. For some reason people in charge get really upset when all our base are belong to the bad guys.
Why doesn't Ruby (and Python and all other languages) have Perl's tainting built in and always running?
I'm not advocating it as the only security mechanism, but rather as another barrier to be overcome, just like address-space randomisation, data-execution prevention and all the rest...
(Hasn't Google recently shared a valgrind-lite runtime bounds checker which is being incorporated into GCC etc.? Might lead the way on how this can be done with a minimum of runtime cost.)
Because tainting is an inherently flawed way to do security. Blacklisting capabilities/methods/data always leaves holes behind, and it's nearly impossible to secure a system using tainting alone. Even the Perl folks say it shouldn't be used as a security mechanism...it should be used to help thin out security issues during development and testing.
If you want to secure a system...whitelist, don't blacklist.
My understanding of the taint flag as implemented in Perl is that it is very much a whitelist. All user input is born tainted and must be verified clean before the flag is removed. It's possible to screw this up by verifying too much, but that's an overly-expansive whitelist problem, not a blacklist that isn't restrictive enough.
You mean JRuby and Rubinius, but yes...it's as flawed as every other blacklisting security mechanism. We mostly don't implement it because, well, "here, add these security checks and tainting propagation to EVERY METHOD IN THE SYSTEM and if you don't do it right, you're totally effed." Sounds great.
You add tainting propagation to every method in the system that handles tainted user input. Hopefully, this will encourage you to untaint the user input ASAP and build native objects out of it. Now, of course, if you take untainted strings and feed them into reflection/eval you are in a world of hurt, but perhaps you should stop using reflection/eval.
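To illustrate the "born tainted, explicitly whitelist-untainted" model being discussed, here is a hypothetical minimal wrapper in plain Ruby. (Ruby did ship a built-in taint flag for years, but it was rarely used and has since been removed, so this `TaintedString` class is purely illustrative, not a real mechanism.)

```ruby
# A hypothetical, minimal sketch of Perl-style tainting in plain Ruby.
class TaintedString
  def initialize(raw)
    @raw = raw.dup.freeze
  end

  # Whitelist-style untainting: the value only escapes the wrapper if it
  # matches a pattern the developer explicitly declares safe.
  def untaint_with(pattern)
    m = pattern.match(@raw)
    raise SecurityError, "input failed validation" unless m
    m[0]
  end

  # Any attempt to use the raw value directly fails loudly.
  def to_s
    raise SecurityError, "tainted value used without validation"
  end
end

# Validated input passes through as a plain String:
id = TaintedString.new("42").untaint_with(/\A\d+\z/)

# Unvalidated input blows up the moment it is interpolated:
begin
  "SELECT * FROM users WHERE id = #{TaintedString.new('42; rm -rf /')}"
rescue SecurityError => e
  warn e.message
end
```

The key property is the one described above: nothing is blacklisted; the only way out of the taint wrapper is through an explicit validation the developer wrote.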
Can anyone with a more intimate knowledge of the inner workings of Ruby on Rails speak to how detrimental this exploit is in practice? I seem to recall a fair number of people feeling the SQL injection exploit from a few days ago was being blown out of proportion and I was wondering how this particular exploit stacks up against it.
This one is not blown out of proportion. Lots of people have working proof-of-concept exploits for this. The vulnerability has no app dependencies. You don't need a session secret. You don't need a login. There are vectors for the vulnerability that will work against applications that don't even have exposed controllers.
Author of the SQL injection exploit blog post last week here. This vulnerability is definitely not blown out of proportion: it is extremely critical and can be exploited without any preconditions. Everyone should upgrade immediately.
I'm not going to say "told you so" because I said nothing and I'm just a layman in this...but when people were pointing out last week that the bug was "overblown" I had wondered if they were underestimating the tendency for such vulnerable patterns to propagate. The mechanisms that let even an edge case in are not always isolated.
Oh I'm saying "told you so". Since years and years.
The real problem is the very mentality of the people who downplay security issues, always saying "this is not a serious issue" (or, worse, saying "but language xxx / framework yyy" suffers from issues too, it's how the world works).
That mentality is the reason why such exploits do exist in the first place. Security is nearly always an afterthought.
The most braindead argument being: "My goal is to sell xxx, not to have an unbreakable server".
Once you read that one, you know you have reached the lowest of the low.
I was curious about why Rails parses YAML nested inside XML to begin with. Turns out it was put in way back when so that ActiveRecord's from_xml/to_xml work as expected when a model contains serialized (ie. yaml) attributes.
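A rough sketch of what that typecasting looked like, in plain Ruby rather than actual Rails code: `from_xml` honoured a `type` attribute on each element, and `type="yaml"` handed the element's text straight to the YAML loader, which is what let serialized attributes round-trip. (The `typecast` helper below is my own illustrative stand-in; `unsafe_load` is used only to reproduce the old permissive loader on modern Psych.)

```ruby
require "rexml/document"
require "yaml"

# Illustrative (not actual Rails) sketch of from_xml's typecasting.
def typecast(node)
  case node.attributes["type"]
  when "integer" then Integer(node.text)
  when "yaml"
    # The vulnerable code used the fully permissive loader here, which is
    # what allowed arbitrary-object construction from a request body.
    YAML.respond_to?(:unsafe_load) ? YAML.unsafe_load(node.text) : YAML.load(node.text)
  else node.text
  end
end

# A serialized `preferences` attribute, roughly as to_xml would emit it:
xml = '<user>' \
      '<id type="integer">1</id>' \
      "<preferences type=\"yaml\">---\ntheme: dark\n</preferences>" \
      '</user>'

doc   = REXML::Document.new(xml)
prefs = typecast(doc.root.elements["preferences"])
# prefs is now a real Ruby Hash rather than a string of YAML text
```

Benign for serialized attributes; catastrophic once the same path is reachable from attacker-controlled request bodies.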
Thinking aloud, do we need some kind of auto-update feature for rails apps? This kind of exploit suddenly exposes the multitude of Rails apps out there to remote code execution. I know it wouldn't be a trivial thing to make, but we already have yum auto update for linux and auto updates for Windows, OS X etc, it should definitely be feasible. Scope could be severely limited, so for example, a monkey patch for big vulnerabilities like this, while sending a notification email to the app maker.
The last Rails SQLI vulnerability was mitigated by the way ActionPack parsed request parameters, so lots of people dove into that code to see if the mitigation could be evaded with JSON or XML. That gave people incentive to review Rails' XML parser wrapper class. The problem with that class is pretty obvious.
> Considering it affects all versions, what are the odds of multiple people pointing this out at the same time?
My understanding is that while investigating the SQL issue a week or so back, it gave several people ideas on how to make this exploit happen, and they all reported it.
You could also deduce from the previous vulnerability disclosure or comments from rails developers who knew about the vulnerability that there was a way of generating symbols. This is how I found it. But there is still a big step from knowing about loading YAML to creating an exploit.
Github.com (built on Rails) is currently having issues. If I had a tin foil hat, I'd put it on. Hopefully their issues are not related to this vulnerability.
More likely the result of millions of Rails site owners crying out in terror... pulling and then committing updates. I fear something terrible has happened.
As someone stuck maintaining an older rails app with no hope of upgrading anytime soon, any information on patching rails 2.1.0 against this vulnerability?
You make it sound as if "Every software suffers from security issues" was brought up as a reason not to put effort into security. It was not.
It is very valid to reason within constraints of reality. Like knowing that a car "which will never ever have an accident. ever" is a lie. We know that driving a car brings a risk of an accident. That is realism. Some turn that reality into dangerous behaviour. Saying things like "Statistics tell me I will have an accident no matter what. So I can just as well finish this bottle of whiskey before driving at 150Km/h home". You are making it sound as if the Rails developers follow that logic.
They don't. There simply is a certain realism that, no matter how much effort you put into security, there will be security issues. But nothing more. Or less.
This problem is due to deserialization creating object (sub)graphs which are unintentionally too powerful. Statically typed languages (especially without dependent types) can do this too, even when the root object(s) matches the type(s) expected by the caller. The cure is http://en.wikipedia.org/wiki/Capability-based_security: write out what the caller is currently allowed to do, rather than blindly granting dangerous privileges and relying on the code's design never to use them. Even tainting, a very crude manual form, seems like it could have caught this.
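The whitelisting approach argued for here did eventually become standard practice in Ruby's own YAML library: Psych's `safe_load` only constructs classes you explicitly permit and rejects arbitrary `!ruby/object` tags outright. A short sketch (the `permitted_classes:` keyword is the modern Psych spelling; older Psych used a positional whitelist argument):

```ruby
require "yaml"
require "date"

# Arbitrary-object tags are refused outright...
begin
  YAML.safe_load("--- !ruby/object:Gem::Requirement {}")
rescue Psych::DisallowedClass => e
  warn "rejected: #{e.message}"
end

# ...benign data still parses...
YAML.safe_load("--- {theme: dark}")

# ...and specific classes can be explicitly opted in:
YAML.safe_load("--- 2013-01-08", permitted_classes: [Date])
```

This is exactly the capability-style inversion the comment describes: the caller writes out what deserialization is allowed to construct, instead of relying on the code never to use the power it was blindly granted.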
Update your Gemfile to pin the version you want. In my case:

    gem 'rails', '3.2.10'

Locally, run `bundle update rails`, which will update your Gemfile.lock. Then check in and deploy your code. If you are using Capistrano, the default 'deploy' task should handle everything for you; otherwise, run `bundle install` on your production server.
Which is in fact why it's probably wiser to list `gem 'rails', '~> 3.2.10'` (or 3.2.0 or anything) instead; then `bundle update rails` will update you to the latest 3.2.x (but never 3.3.x), in this case 3.2.11, instead of only to the exact version you specified (3.2.10, which is not the patched release).
The advisory also provides several workarounds that don't require you to update Rails, all pretty simple ("drop a file into config/initializers and reload"), which also work.
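For the curious, the initializer-based workaround for Rails 3 is along these lines (check the advisory itself for the exact code for your Rails version, since the Rails 2.3 spelling differs); the idea is to strip the dangerous typecasts out of XML parameter parsing entirely:

```ruby
# config/initializers/disable_xml_yaml_parsing.rb
# Roughly the shape of the advisory's workaround -- consult the actual
# advisory for the exact code for your Rails version.
ActiveSupport::XmlMini::PARSING.delete("symbol")
ActiveSupport::XmlMini::PARSING.delete("yaml")
```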
Here is the commit where it was introduced: https://github.com/rails/rails/commit/27ba5edef1c4264a8d1c0e...