My sympathies are with the Drupal community; it may never fully recover from a crisis of this magnitude, no matter what it does.
At times like this, I really wish there were more best practices to mitigate these types of vulnerabilities.
One idea is to use the remote-execution vulnerability itself to patch the affected servers remotely, passing the antidote as the payload. I am not sure whether this would be legal.
The other is something like what CloudFlare already does, but at the hosting-provider or ISP layer. It need not be as proactive as CloudFlare, which can inspect HTTP requests for known exploits and block them. It could be something as simple as running scanners and routinely informing customers that their websites are vulnerable. Or, better still, temporarily disabling the account or putting it into read-only mode, thereby forcing users to take action.
> At times like this, I really wish there were more best practices to mitigate these types of vulnerabilities.
There are many best practices:
- Use version control.
- Have remote backups for the past day, week, month, etc.
- Apply security patches immediately (and have test infrastructure so you can make sure they don't break anything).
- Be able to rebuild a server from scratch w/ backups (configuration management really helps here).
- Subscribe to security mailing lists/RSS feeds/Twitter accounts.
This vulnerability is pretty bad, of course, but there have been many like it before, and there will be many like it in years to come.
If nothing else, use this situation to prepare for the worst on your own site; how would you respond if your site was hacked, your database in an unknown state, and your codebase potentially backdoored?
Configuration management is an enormous pain for systems like Drupal that are essentially built ad hoc, though. It's not something that's practically baked into something like Chef unless you're willing to spend a ton--a ton--of time baking your assumptions into code. And that (along with stuff like a test infrastructure) assumes the capability of even doing it; Bob's Bait Shop does not have the technical staff necessary to act according to best practices but definitely needs a website. ("Use SquareSpace" is rarely an answer, though depending on the project it can be; anything remotely clever is going to need to go outside of the usual SaaS providers, and that's why there are so many rickety Drupal sites out there. I used to write them, I know.)
Once again, software is harder than it needs to be. Gary Bernhardt has been going into this in depth on Twitter lately, and I agree with him more and more as I pay more attention to the shitshow around us.
Things have improved greatly! With Drupal 7, there are two main solutions: the Configuration Management module and Features. With Drupal 8 (likely coming out within the next year, currently in beta), YAML-based exportable configuration management is baked into core and will be much more accessible for custom development and contributed modules.
Sure. Now they need to grapple with the fundamental differences between install-and-tweak and design-through-code to actually make this meaningful in a modern (Chef/Puppet/Docker) environment. I won't be holding my breath.
A modern, well-architected site build works wonderfully with the aforementioned tools. In fact, I have a demonstration VM (Vagrant + Ansible) that you can use to bootstrap a Drupal site with a given install profile/configuration in a few minutes using a Drush makefile: https://github.com/geerlingguy/drupal-dev-vm
Sadly, most Drupal developers and development shops either don't know about or don't care to take the time to build sites in this manner (instead of schlepping databases and file dumps all over the place, export everything to code)... but if you do, team-based/large project Drupal development becomes so much more sane.
I'm aware of drush and its makefiles. In the overwhelming majority of cases, that's antithetical to how working on Drupal actually goes, because of the amount of exploration and tweaking involved. It's not a sufficiently simple product that you can just eyeball a makefile and go.
To be honest, it's not actually as dire as it may seem, in my opinion. Consider: if you're on shared hosting, which a lot of Drupal sites will be, then the server itself isn't your problem; only the site is. Copy the site off and have your host reinitialise a clean account, and that's taken care of. In terms of the site, you're looking for code backdoors, and there are a relatively limited number of places these can live. You can completely overwrite Drupal core + contrib modules with good clean versions, which really limits the places where bad code can live to the files directory and the themes directory. Files for the most part shouldn't contain any executable scripts, so locating backdoors there is relatively easy.
The theme will contain PHP files, but their number is relatively limited and they are primarily templating-related; it should be possible to find bad code here if you know what you're looking for. After all, the theme doesn't handle authentication, so an attacker can't exactly introduce subtle one-character tweaks here to make the site vulnerable.
The database is more complicated, unfortunately. Besides simple privilege escalation and permission alteration, which are relatively easy to find, Drupal often allows PHP to be stored and executed from within the DB. If an attacker is particularly sophisticated, I can see a backdoor in there being harder to find, but I imagine there will be people working on tools to hunt that kind of thing in the near future.
Or, auto-apply security patches without human intervention. This stuff isn't that hard to figure out. The Drupal community needs to stop pretending that it's still 1996 and start taking security seriously. I think the CMS world needs a Microsoft circa 1999 moment.
That is an absolutely horrible idea for frameworks like Drupal. It essentially implies that someone has arbitrary write access to your servers and is free to modify things as they wish.
Many web hosts do this. At <redacted UK-based LAMP host> we do both the WAF stuff (which Cloudflare don't actually do, unless you pay them), and your latter suggestion of read-only'ing accounts on discovery of vulnerabilities.
The unfortunate reality of the latter is that it will probably break their site, since we don't know what they run, and you can't just read-only an entire account blindly without breaking something. We could do better there, but it will never be perfect in this environment.
IMO a very important practice is designing to block escalation of privileges. For example, if your web server never stores a list of all e-mail accounts but instead queries a different service for an authenticated user's e-mail, then even a complete compromise of the web server would not allow an attacker to get a list of all users' e-mails. Perhaps the canonical example is the handling of passwords in Unix/Linux. Most competent developers know how to handle passwords, but it seems they don't necessarily absorb the underlying lessons/principles. Even if you manage to gain root privileges, you cannot retrieve users' passwords. Another way of thinking about this is defense in layers while assuming the exterior layers can be completely broken.
I like the idea of deploying an antidote, but that won't always work and I can see people not liking it. CloudFlare would be an example of defense in layers.
Another observation is about how the vulnerability was disclosed. I don't think this was handled properly here. Ideally you want a disclosure that allows users to take action without actually revealing the specific vulnerability. An extreme example would simply be to tell all Drupal users to take their sites down and give them enough time to do this before disclosing anything. Possibly a less drastic action could be taken while still hiding the specific exploit. To release a patch revealing the exploit while many users are still open to it is extremely problematic.
It's a matter of controlling that window and minimizing the number of affected sites.
Scenario #1: Release the patch with no advance notice. It takes time for people to take notice and apply it. In that window, lots of sites (12 million?!) get hacked. That doesn't sound like a very happy outcome.
Scenario #2: Announce in advance that a patch will be released. Get the word out. Give people time to hear it. You're not giving any specifics other than that there is some vulnerability, which doesn't help any hacker; every product has some vulnerability. There'll probably be a lot fewer than 12M sites hacked. Also consider closing the vulnerability in ways that don't directly touch the affected code, to make it more difficult to reverse engineer.
Obviously if everyone already knows of the exploit you don't have the time, but often the way hackers hear about the exploit is through your announcement...
[EDIT: Thanks to aryx above I learnt there was a 5-day window in this specific case (an Oct 10th announcement of an Oct 15th security update). I don't think that announcement made it clear enough what the consequences of not applying the patch immediately would be. I think this is something that needs more careful consideration in the future.]
You can announce the timing of the patch availability in advance. You can tell everyone to take their site down between the release of the patch and the application. Depending on the vulnerability there may be a way of blocking it without specifically advertising the vulnerability (e.g. block it in unrelated code).
At any rate, the worst thing is simply announcing [EDIT: i.e. releasing] a patch that is not going to be applied immediately by a large percentage of your users, which allows all attackers to attack those sites. I can't quite think of a worse approach; even never patching at all might be preferable to that.
> You can tell everyone to take their site down between the release of the patch and the application.
Do you honestly believe businesses relying on Drupal for core parts of their business will do this? You are seriously naive.
> Depending on the vulnerability there may be a way of blocking it without specifically advertising the vulnerability (e.g. block it in unrelated code).
If there's a patch it's trivial to find out what it affects.
> I can't quite think of a worse approach; even never patching at all might be preferable to that.
No, that's just stupid talk, because it assumes that no one else knew about the vulnerability before the announcement.
"In its "highly critical" announcement, Drupal's security team said anyone who did not take action within seven hours of the bug being discovered on 15 October should "should proceed under the assumption" that their site was compromised."
This quote is not about the people who knew about the vulnerability before the patch was released on Oct 15th, right? I'm sorry, but I still think that if the release of the patch results in 12M sites hacked, that isn't a good outcome. I wasn't aware of the 5-day window, but maybe a bigger announcement should have been made. I never heard of the issue (and it wasn't on HN) prior to the release of the patch. The Oct 10th announcement wasn't proportional to the size of the issue.
It's trivial to find out what a patch affects but it's not necessarily easy to find out what the issue is unless the patch actually addresses the specific bug. I'm talking more generally than this specific exploit. Let's say there's an OpenSSL bug in the AES/128 implementation. You could release a patch to the AES/128 code, or you could release a patch disabling AES/128. The former is an open invitation to hackers. The latter still leaves a potentially large task of figuring out what exactly in the AES/128 code is the issue. I'm not saying this is always possible, but I'm pointing it out as an option which should be explored (and which I've never seen utilized). By all means, feel free to just give the hackers the detailed attack vector, as a lot of the latest releases/patches have done.
> I never heard of the issue (and it wasn't on HN) prior to the release of the patch.
What? HN is suddenly the be-all and end-all of security announcements? I heard of the issue way before the release of the patch. Anyone that subscribes to announcements from Drupal has heard of the issue.
You say that 5 days isn't enough? What period of time is? A week? A month? A year? You can always find people who somehow miss the announcement.
> It's trivial to find out what a patch affects but it's not necessarily easy to find out what the issue is unless the patch actually addresses the specific bug.
It is trivial. Take the recent POODLE attack, for example. The rumors floating around pointed to it being an issue in SSL 3 and not TLS 1.0. That contained enough information for someone to preempt the actual announcement with the exact attack.
> Nov 1st: Story about scale of hacks hits mainstream media.
Who the hell cares about the mainstream media when monitoring issues related to software you administer? That's just negligence.
OK. Let's blame the users. We have 12 million negligent administrators. Problem solved. This is a typical engineering attitude: blame your users.
You're completely missing my points. HN is not the be-all and end-all; it's a proxy for the visibility some specific announcement gets. "You must update or you will get hacked" would have gotten noticed. A mild message about some upcoming unknown security patch, not so much. And yes, by drawing more attention you increase the risk of getting attention from attackers, but in this case it doesn't seem like the right trade-off was made.
Given the specific scenario there are certain variables under your control. There's the timing and "volume" of the announcements. There's the timing and content of the patch. You are trying to set those variables to minimize the number of affected people. If you think this case (12 MILLION) was anywhere close to the minimum I think you're wrong. The period that is long enough is the one that minimizes the number of sites hacked, in this case 5 days from this non-announcement was obviously not enough. I don't use Drupal and I've no personal connection to this issue whatsoever I just judge it by the end result.
It's also absolutely clear there are degrees of disclosure for the specific vulnerability. Having a clear description of the vulnerability makes it easier for someone to take advantage of it. Your sample of one counter-example doesn't make any difference. I'm not saying you can always avoid someone taking advantage; I'm just saying that if there's a choice between making it easy and making it a little less easy, you should choose the second. It's just like having a lock on your bicycle: it doesn't make it impossible to steal, but it may cause the thief to move on to an easier target.
One of the nice things about recent major releases of WordPress is that it will auto-patch minor releases. I believe this is a reasonable approach for most open-source CMSes, even if it might break some functionality. You can also fairly easily add auto-update for plugins and themes by modifying wp-config.php [1].
I suggest that themes/plugins with 100% compatibility ratings should be auto-patched too. Auto-patching themes can be problematic because updates overwrite changes you've made to the theme files. So my other suggestion would be to automatically create a child theme for every installed theme, so that devs can easily update the parent theme and keep the changes made to it.
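For reference, a minimal sketch of the settings involved. The constant and filter names below are the standard WordPress ones as I recall them; double-check the current docs before relying on this.
<?php
// wp-config.php: opt in to automatic background updates for core.
define('WP_AUTO_UPDATE_CORE', true);
// In a small must-use plugin (e.g. wp-content/mu-plugins/auto-updates.php),
// opt plugins and themes into background updates as well.
add_filter('auto_update_plugin', '__return_true');
add_filter('auto_update_theme', '__return_true');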
Said operating systems typically require you to use an account that is inaccessible to almost all automated processes. This would not be the case, the two are not analogous.
I really don't think this is such a bad idea; I would actually like to see it. In my hosting environment I have a single location for the core files of all my Drupal websites, and I just update that.
Why does Drupal use string manipulation to generate SQL statements instead of bind parameters? It is 2014, hasn't this been best practice since at least the late 1990s? Wouldn't all security audits, including automated ones, pick this up?
And before someone makes a "joke" about PHP: they're using the PDO framework, which supports bindParam(). Seems like woeful incompetence in the Drupal codebase to me.
They're using bind params, but instead of using "?" as the placeholder they were using "named" placeholders, and the placeholder name was constructed from the associative array key (which is externally received).
The mistake was using an externally provided value instead of using "?" or generating independent names for the placeholders.
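A simplified sketch of the pattern (illustrative only, not the actual Drupal source; the function names here are made up), showing how a user-controlled array key can leak into the placeholder name and therefore into the SQL text:
<?php
// $values may come straight from $_POST, so the attacker controls its KEYS
// (e.g. name[0; INSERT INTO ...] = 'x'), not just its values.
function expand_placeholders_unsafe($name, array $values) {
  $placeholders = array();
  foreach ($values as $i => $value) {
    // BUG: the user-controlled key $i is concatenated into the placeholder
    // name, and the placeholder name ends up inside the query string itself.
    $placeholders[] = ':' . $name . '_' . $i;
  }
  return '(' . implode(', ', $placeholders) . ')';
}
// Safer variant: generate the suffixes yourself so only 0, 1, 2, ... appear.
function expand_placeholders_safe($name, array $values) {
  $placeholders = array();
  foreach (array_values($values) as $i => $value) {
    $placeholders[] = ':' . $name . '_' . $i;
  }
  return '(' . implode(', ', $placeholders) . ')';
}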
The query in question uses an IN clause, which takes a tuple of values.
I don't think bindParam will allow you to bind an array into a tuple; the only way to pass the tuple is by forming it as a string and adding that to the query string.
It's awful how poorly IN clauses are supported (in lots of places, not just PHP), considering how often they're used and how useful they are. I mean basic stuff like being able to use them securely, or making programmers write their own checks to prevent SQL errors on empty lists.
Not many people here will care, but SQL Server with .NET does support binding table-valued parameters in low-level System.Data (SqlCommand).
I'm using a Micro ORM (Insight Database, https://github.com/jonwagner/Insight.Database) that takes advantage of this. It maps a parameter of type IEnumerable<SomeType> automatically to a TVP. This ORM works brilliantly when you want to get everything out of SQL.
No, it doesn't. PHP can handle it fine, and we've undergone multiple attacks and security audits [both daily automated ones and manual ones by professionals]. :/
The problem here was a mistake someone made, not a fundamental support problem with the language.
This is precisely why people mock PHP developers. :/ So many don't even understand how the language f'n works.
I never said it was a problem with the language. I said it was poorly supported in a lot of platforms including PHP and I stand by that.
It has nothing to do with passing security audits or withstanding attacks. There isn't a security flaw in the way PHP handles this, because PHP, or specifically the PDO framework, relies on the user to implement this themselves. There obviously can't be a security flaw in something which does not exist.
A quick Google search suggests it is not at all obvious to many how to do a parametrised query with an IN-clause using PDO. The highest ranking answer is this SO post: http://stackoverflow.com/a/1586650
Having to iterate the array yourself adding the right amount of placeholders and binding individual values is secure but a lot of boilerplate. Escaping values in PHP and concatting them in the old fashioned way ought to be safe but everyone switched to parametrised queries for a reason: in practice it's often fucked up which leads to security vulnerabilities. The last one, the find_in_set trick, is a clever kludge but a kludge nonetheless.
People shouldn't need to roll their own way to do this because that's where unnecessary mistakes get made.
Just as I have no sympathy for programmers forgetting to always call mysql_real_escape_string and setting their encodings right in the old MySQL driver, it's not difficult to get right, but tons of people didn't and it made the web a worse place for everyone.
Plus, they might be able to figure out how to iterate and count an array but they might also figure out how to use implode instead which is less code and programmers tend to be lazy. And suddenly they've opened their app up to SQL injection because they forgot or are unaware they now need to do escaping despite using prepared statements.
And since their app might contain my data, I care about this and not just think "those idiots brought it upon themselves".
You can write a function for it using ?'s generated inside your code only. It's perfectly safe to do it that way, and it allows you to bind parameters.
The only explanation for not doing that is ignorance and/or incompetence. Mistakes happen, but to claim it's a language problem is incorrect.
<?php
/* Execute a prepared statement by binding PHP variables */
$calories = 150;
$colour = 'red';
$sth = $dbh->prepare('SELECT name, colour, calories
FROM fruit
WHERE calories < ? AND colour = ?');
$sth->bindParam(1, $calories, PDO::PARAM_INT);
$sth->bindParam(2, $colour, PDO::PARAM_STR, 12);
$sth->execute();
?>
So what you do is write a function to convert the array to a series of ?'s for the IN clause/tuple and then iterate through the array to bind the parameters.
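A minimal sketch of such a helper (the function name is made up for illustration; it assumes a PDO connection in $dbh and the fruit table from the example above):
<?php
// Build "... WHERE name IN (?, ?, ...)" with placeholders generated only by
// our own code; user data is passed exclusively as bound values.
function fetch_fruit_by_names(PDO $dbh, array $names) {
  if (empty($names)) {
    return array();  // sidestep the invalid "IN ()" case entirely
  }
  $placeholders = implode(', ', array_fill(0, count($names), '?'));
  $sth = $dbh->prepare("SELECT name, colour, calories FROM fruit WHERE name IN ($placeholders)");
  $sth->execute(array_values($names));  // one value bound per ?
  return $sth->fetchAll(PDO::FETCH_ASSOC);
}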
> You can write a function for it using ?'s generated inside your code only. It's perfectly safe to do it that way, and it allows you to bind parameters.
Of course you can. Everyone here is saying you shouldn't have to. This is the perfect type of thing to move into the db library. That makes it much safer because you don't have to hand-roll the same stupid code in 50,000 places (and possibly fat-finger it once).
I don't think that would work in this case, because there is no way to bind a PHP array to an SQL tuple. The available param types include string, int, and bool, but no array or tuple type.
You can write "SELECT calories FROM fruit WHERE name IN (?, ?, ?)" and then bind the parameters as strings, but this will only work in cases where the length of the tuple is known and fixed. If you need to allow for a variable-length tuple then you will need to concatenate the query string yourself.
Then what do you do if the passed tuple has zero length?
Dynamically rewriting the SQL isn't going to be tractable in all cases - doing `IN (NULL)` might be a valid value - and throwing an exception is poor form for an actually valid case.
You need to count the array at runtime and convert to a tuple, the only way this can be done using PDO (AFAIK) is by appending the tuple to the string itself because PDO provides no tuple type.
>Why does Drupal use string manipulation to generate SQL statements instead of bind parameters?
Is 'elegance creep' a thing? I would be willing to bet money I don't have that it's because someone thought it was more elegant and clever, and therefore just better.
Legacy? Tons of code to change, and users more interested in new features? No tests? Developer conceit about their own abilities to "do it right"? All the usual reasons any project has old code that exhibits bad practices by current standards.
Generating SQL statements always involves string manipulation at some stage, doesn't it? I mean at some point even PDO has to send query strings to the SQL server.
The question is whether Drupal really needed to implement their own engine for this, though.
> Generating SQL statements always involves string manipulation at some stage, doesn't it? I mean at some point even PDO has to send query strings to the SQL server.
If your database supports it, set PDO::ATTR_EMULATE_PREPARES to false. If not, I'd still take PDO's implementation over the million different implementations which other projects come up with, such as this one.
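For reference, a short sketch of that setting (the DSN and credentials are placeholders; whether it takes effect depends on the driver):
<?php
// Prefer real server-side prepared statements over client-side emulation
// where the driver supports it (e.g. MySQL via pdo_mysql).
$dbh = new PDO('mysql:host=localhost;dbname=app;charset=utf8', 'user', 'pass', array(
  PDO::ATTR_EMULATE_PREPARES => false,
));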
And that was at the same time that everyone was worried about POODLE and the media was going crazy over it.
Somehow this vulnerability slipped under the radar.
What is interesting is that based on our own data, we started noticing attacks around 8 hours after it was disclosed and we shared some of the payloads being used here:
I cannot think of how many charities this will affect. Almost without exception, every small or medium UK charity I know uses Drupal in some way for their work. These are usually customised for a specific project, and some of these will have donation facilities built-in.
The worrying thing as always is that few people upgrade these instances once they are launched and operational.
I'm wondering how many Australian government departments - who're all encouraged to use a government supported Drupal platform - are dealing with this properly: https://agov.com.au/
In the WordPress world, there are many "managed" WordPress hosting providers (including wordpress.com) which will apply WordPress core updates automatically.
In the Drupal world, there are some similar managed Drupal hosting providers (e.g. Drupal Gardens), but they're much less common. I wonder why.
There are a few, such as Pantheon[1], Acquia Cloud[2], Platform.sh[3]. All three of the above providers did add some level of protection immediately following the security announcement, and they (and some other providers) helped ensure sites were updated to 7.32 as soon as possible.
I have to link back to my earlier comment about the key takeaway here[4]—not just for Drupal sites, but for anyone who operates any site on any server. You can't afford to let your site sit unmaintained if you value the information within; and if you build sites for other people, you have to convey the importance of that to your customers... 'With great power comes great responsibility' and all that jazz.
Your site is either currently broken, or will be someday; it's not about making 100% secure code and servers (you strive for that, of course); it's about your response once something happens (e.g. a security patch is released).
I have started to use Drupal to create a small site for a non-profit. I disabled the site a couple of days after this vulnerability. Is there a way to determine whether I have been compromised, or, since I have only put about 30 minutes of work into it so far, am I better off rebuilding from scratch?
Small sites may not have been hit if you turned it off soon enough. Quick tips:
* Look through the menu_router table for suspicious-looking entries (see the sketch after these tips).
* Look at users & user roles, anything new?
* Look for scripts and executables in your public and private upload files directories.
If you really don't know Drupal and it's a new project, start from scratch (new database, but your code should be safe, assuming your repo isn't stored on the same server and the server has no write access to the repo).
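As a rough sketch of the first tip, something like the following can flag menu_router entries that register callbacks exploit payloads were commonly seen abusing. The DSN, credentials, and callback list are placeholders to adapt; treat any hit as a starting point for investigation, not proof either way.
<?php
// Scan Drupal 7's menu_router table for suspicious callback registrations.
$dbh = new PDO('mysql:host=localhost;dbname=drupal', 'user', 'pass');
$suspicious = array('php_eval', 'file_put_contents', 'assert', 'eval');
foreach ($dbh->query('SELECT path, access_callback, page_callback FROM menu_router') as $row) {
  if (in_array($row['access_callback'], $suspicious, true)
      || in_array($row['page_callback'], $suspicious, true)) {
    echo "Suspicious: {$row['path']} ({$row['access_callback']} / {$row['page_callback']})\n";
  }
}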
No: that's not good advice. The OP said "I disabled the site a couple of days after this vulnerability", but we know that there were attacks in the wild within hours of disclosure. If it's only 30 minutes work you might as well start over.
Also, some sites were apparently found where (or it's at least feasible that) someone got control of the site and applied the patch themselves to keep others from doing the same. Like a burglar locking the door behind themselves.
I updated my site the minute I got the e-mail, though I am not sure it was fast enough, because there is no way to know whether there are any backdoors besides installing a fresh version and checking the differences.
I've cleaned up a bunch of compromised servers, and I've never encountered (at least with exploited WordPress installs) an attacker sophisticated enough to change the timestamps on the files containing back doors. So, while I always just overwrote everything from a backup, finding all of the backdoored files right away would amount to something like listing every file modified since the attack window opened.
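A rough sketch of that kind of check in PHP (the docroot path and cutoff date are placeholders, and it only helps under the assumption above that mtimes weren't reset):
<?php
// List files under the docroot modified after a cutoff date.
$cutoff = strtotime('2014-10-15');
$it = new RecursiveIteratorIterator(
  new RecursiveDirectoryIterator('/var/www/mysite', FilesystemIterator::SKIP_DOTS)
);
foreach ($it as $file) {
  if ($file->isFile() && $file->getMTime() > $cutoff) {
    echo $file->getPathname(), "\n";
  }
}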
I read that article last night and was wondering if someone here was going to point out how the 12m figure is made up.
Were there compromised sites? Sure, but I would be surprised if it were more than an order of magnitude less than what the BBC reported. That is still a lot of sites, but not the monumental cluster-fsck that the aftermath is being made out to be.
It's quite a good CMS. Lots of modules let it be easily extended. It has a CLI management tool (drush) that allows for a lot of automation. Non-coders can build complex content sorting and display pages with Views, etc.
It is a bit of a culture shock if you're used to spewing your PHP all over everything à la other, lesser blogging tools, but once you've got your head wrapped around how it works internally, you can be very productive and do things that would be difficult or impossible in other systems with almost no code.
I wonder if Drupal will ever recover after this incident.
I haven't used Drupal because the job postings usually advertise super-low wages, and because when I looked at the code base I was totally freaked out by how messy the modules and everything else were.
I wouldn't even consider building something with Drupal unless it were secure, but I'm sure this makes sense only in hindsight.
In a nutshell: it's a hell of a lot easier to get going on than Drupal. I've been developing Drupal sites, along with much more experienced (general and drupal specific) developers for the past 6 months. And it's not easy to get going on.
Also, Drupal is often labelled a CMS. If you google "drupal is not a cms" you'll find a number of people referring to it as a CMF - a content management framework. It really allows you to build a custom CMS for a specific purpose rather than customizing a CMS for a specific purpose. It may sound like semantics, but if/when you have a few Drupal sites under your belt you really get what they mean.
Wordpress and Drupal have similar features, but typically WP is thought to be more of a blogging platform while Drupal has a platform for not just blogging, but online community, e-commerce, etc.
Well, truth be told Drupal is 13 years old and WordPress 11 years old. I am sure that they both had a lot of time to add tons of features so I am thinking that the gap between them is more in theory, judging by their roots, and less in practice. I know that WordPress has for example BuddyPress for social networking / communities and a number of (free) e-commerce plugins as well.
Yeah not really that much of a distinction these days. People have built just about anything on WordPress. Community & ecommerce are both covered pretty solidly by the plugin ecosystem.
You can build a skyscraper using a hammer, but you probably shouldn't. It's the exact same argument with WordPress and Drupal. Drupal is vastly superior to WordPress for most dynamic website tasks. Views + content types provide that.
Worked on both as a dev and decided to use WP for our business, which sells mid-six figures annually through our shopping cart (just modified WooCommerce). I don't agree that Drupal is vastly superior at all; maintenance and sustainment are far worse with the Drupal project.
I guess that's kind of the point. It all works just fine if you're a decent developer and know what to outsource and what to avoid. The platform/framework/language wars are a useless waste of time.
Moving from Joomla 2 to 3 wasn't a walk in the park, either. Mostly due to modules that were installed that simply were not compatible with version 3 and didn't have replacements.