My sympathies are with the Drupal community; it may never fully recover from a crisis of this magnitude, no matter what it does.
At times like this, I really wish there were more best practices to mitigate this type of vulnerability.
One idea is using the remote-execution vulnerability itself to remotely patch the servers by passing the antidote as the payload. I am not sure if this would be legal.
The other is something like what CloudFlare already does, but at the hosting-provider or ISP layer. It need not be as proactive as CloudFlare, which inspects HTTP requests for known exploits and blocks them. It could be something as simple as running scanners and routinely informing customers that their websites are vulnerable. Or, better still, temporarily disabling the account or putting it into read-only mode, thereby forcing users to take action.
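As a rough illustration of the scanner idea, here is a minimal sketch (not anyone's actual tooling) that guesses a site's Drupal version from the CHANGELOG.txt a stock install leaves in its docroot and flags anything older than 7.32, the release that closed this hole. The URL and threshold are assumptions you would adapt per customer:

```python
import re
import urllib.request

# Drupal 7.32 is the release that closed this hole (SA-CORE-2014-005).
PATCHED = (7, 32)

def drupal_version(base_url):
    """Best-effort version detection from the CHANGELOG.txt a stock install ships in its docroot."""
    try:
        with urllib.request.urlopen(base_url.rstrip("/") + "/CHANGELOG.txt", timeout=10) as resp:
            text = resp.read(4096).decode("utf-8", errors="replace")
    except Exception:
        return None  # unreachable, or the file has been removed; can't tell either way
    match = re.search(r"Drupal (\d+)\.(\d+)", text)
    return (int(match.group(1)), int(match.group(2))) if match else None

def check(base_url):
    version = drupal_version(base_url)
    if version is None:
        print(f"{base_url}: could not determine Drupal version")
    elif version[0] == 7 and version < PATCHED:
        print(f"{base_url}: Drupal {version[0]}.{version[1]} -- vulnerable, notify the customer")
    else:
        print(f"{base_url}: Drupal {version[0]}.{version[1]} -- not affected by this advisory")

if __name__ == "__main__":
    check("https://example.com")  # hypothetical customer site
```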
> At times like this, I really wish there were more best practices to mitigate this type of vulnerability.
There are many best practices:
- Use version control.
- Have remote backups for the past day, week, month, etc. (a minimal sketch follows this list).
- Apply security patches immediately (and have test infrastructure so you can make sure they don't break anything).
- Be able to rebuild a server from scratch w/ backups (configuration management really helps here).
- Subscribe to security mailing lists/RSS feeds/Twitter accounts.
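For the backup point above, here is a minimal sketch of the kind of nightly job you might run, assuming a typical LAMP host with mysqldump, tar, and rsync available; every path, database name, and remote below is illustrative:

```python
#!/usr/bin/env python3
"""Nightly off-host backup sketch: dump the database and archive the docroot, keep dated copies."""
import datetime
import subprocess
from pathlib import Path

# Illustrative paths, names, and remote -- adjust for your own host.
DOCROOT = Path("/var/www/example.com")
BACKUP_DIR = Path("/backups/example.com")
DB_NAME = "drupal"
REMOTE = "backup@offsite.example.net:/srv/backups/example.com/"

def run_backup():
    stamp = datetime.date.today().isoformat()
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)

    # Database dump; credentials are expected in ~/.my.cnf rather than on the command line.
    sql_path = BACKUP_DIR / f"{DB_NAME}-{stamp}.sql.gz"
    subprocess.run(f"mysqldump {DB_NAME} | gzip > {sql_path}", shell=True, check=True)

    # Code plus uploaded files.
    tar_path = BACKUP_DIR / f"files-{stamp}.tar.gz"
    subprocess.run(["tar", "czf", str(tar_path), "-C", str(DOCROOT.parent), DOCROOT.name], check=True)

    # Ship both archives off-host, so a compromised server can't destroy its own history.
    subprocess.run(["rsync", "-a", str(sql_path), str(tar_path), REMOTE], check=True)

if __name__ == "__main__":
    run_backup()
```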
This vulnerability is pretty bad, of course, but there have been many like it before, and there will be many like it in years to come.
If nothing else, use this situation to prepare for the worst on your own site; how would you respond if your site was hacked, your database in an unknown state, and your codebase potentially backdoored?
Configuration management is an enormous pain for systems like Drupal that are essentially built ad hoc, though. It's not something that's practically baked into something like Chef unless you're willing to spend a ton--a ton--of time baking your assumptions into code. And that (along with stuff like a test infrastructure) assumes the capability of even doing it; Bob's Bait Shop does not have the technical staff necessary to act according to best practices but definitely needs a website. ("Use SquareSpace" is rarely an answer, though depending on the project it can be; anything remotely clever is going to need to go outside the usual SaaS providers, and that's why there are so many rickety Drupal sites out there. I used to write them, I know.)
Once again, software is harder than it needs to be. Gary Bernhardt has been going into this in depth on Twitter lately, and I agree with him more and more as I pay more attention to the shitshow around us.
Things have improved greatly! With Drupal 7, there are two main solutions: the Configuration Management module and Features. With Drupal 8 (likely coming out within the next year, currently in beta), YAML-based exportable configuration management is baked into core and will be much more accessible for custom development and contributed modules.
Sure. Now they need to grapple with the fundamental differences between install-and-tweak and design-through-code to actually make this meaningful in a modern (Chef/Puppet/Docker) environment. I won't be holding my breath.
A modern, well-architected site build works wonderfully with the aforementioned tools. In fact, I have a demonstration VM (Vagrant + Ansible) that you can use to bootstrap a Drupal site with a given install profile/configuration in a few minutes using a Drush makefile: https://github.com/geerlingguy/drupal-dev-vm
Sadly, most Drupal developers and development shops either don't know about or don't care to take the time to build sites in this manner (export everything to code instead of schlepping databases and file dumps all over the place)... but if you do, team-based/large-project Drupal development becomes so much more sane.
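Roughly, the workflow boils down to rebuilding the codebase from a makefile and installing from a profile, rather than copying a canonical database around. A sketch of that flow, assuming Drush is on the PATH; the makefile, profile, and database URL are placeholders:

```python
#!/usr/bin/env python3
"""Sketch of a 'site from code' rebuild: assemble the codebase from a makefile, then install
from a profile, instead of copying databases and file dumps between environments.
Assumes Drush is on the PATH; the makefile, profile, and DB URL are placeholders."""
import subprocess

MAKEFILE = "example.make"   # hypothetical Drush makefile pinning core + contrib versions
DOCROOT = "docroot"
PROFILE = "standard"        # or a custom install profile that enables your modules
DB_URL = "mysql://user:pass@localhost/drupal_dev"

def build_site():
    # Fetch Drupal core and contrib modules exactly as pinned in the makefile.
    subprocess.run(["drush", "make", MAKEFILE, DOCROOT], check=True)
    # Install a fresh site from the profile; configuration lives in code (Features/CMI),
    # so no canonical database dump needs to be passed around.
    subprocess.run(
        ["drush", f"--root={DOCROOT}", "site-install", PROFILE, f"--db-url={DB_URL}", "--yes"],
        check=True,
    )

if __name__ == "__main__":
    build_site()
```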
I'm aware of drush and its makefiles. In the overwhelming majority of cases, that's antithetical to how working on Drupal actually is, because of the amount of exploration and tweaking involved. It's not a sufficiently simple product that you can just eyeball a makefile and go.
To be honest, it's not actually as dire as it may seem, in my opinion. Consider: if you're on shared hosting, which a lot of Drupal sites will be, then the server itself isn't your problem; only the site is. Copy the site off and have your host reinitialise a clean account, and that's taken care of. In terms of the site, you're looking for code backdoors, and there are a relatively limited number of places these can live. You can completely overwrite Drupal core and contrib modules with good clean versions, which really limits the places where bad code can live to the files directory and the themes directory. Files for the most part shouldn't contain any executable scripts, so locating backdoors there is relatively easy.
The theme will contain PHP files, but their number is relatively limited and they are primarily templating-related; it should be possible to find bad code here if you know what you're looking for. After all, the theme doesn't handle authentication, so an attacker can't exactly introduce a subtle one-character tweak there to make the site vulnerable.
The database is more complicated, unfortunately. Besides simple privilege escalation and permission alteration, which are relatively easy to find, Drupal often allows PHP to be stored in and executed from the database. If an attacker is particularly sophisticated, I can see a backdoor in there being harder to find, but I imagine people will be working on tools to hunt for that kind of thing in the near future.
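To make the "limited places bad code can live" point concrete, here is the kind of rough sweep I mean: flag any PHP file under the uploads directory and any classic eval-style payload in the theme. The paths and patterns below are illustrative, and a clean result is no guarantee:

```python
#!/usr/bin/env python3
"""Rough post-compromise sweep: after overwriting core and contrib with known-clean copies,
flag PHP where it shouldn't be (the uploads directory) and eval-style payloads in the theme.
Paths and patterns are illustrative; a clean result is not a guarantee."""
import re
from pathlib import Path

DOCROOT = Path("/var/www/example.com")        # hypothetical docroot
FILES_DIR = DOCROOT / "sites/default/files"   # user uploads: should contain no PHP at all
THEME_DIR = DOCROOT / "sites/all/themes"      # theme PHP: small enough to review by hand

SUSPICIOUS = re.compile(rb"eval\s*\(|base64_decode\s*\(|assert\s*\(|gzinflate\s*\(")

def scan():
    findings = []
    # Any PHP-ish file under the files directory is suspect by definition.
    for path in FILES_DIR.rglob("*"):
        if path.suffix.lower() in {".php", ".phtml", ".inc"}:
            findings.append((path, "executable script in files directory"))
    # Theme templates legitimately contain PHP, so only flag classic payload constructs.
    for path in THEME_DIR.rglob("*.php"):
        if SUSPICIOUS.search(path.read_bytes()):
            findings.append((path, "suspicious construct (eval/base64/gzinflate)"))
    for path, reason in findings:
        print(f"{path}: {reason}")
    if not findings:
        print("Nothing flagged -- still review manually before trusting the result.")

if __name__ == "__main__":
    scan()
```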
Or, automatic application of security patches without human intervention. This stuff isn't that hard to figure out. The Drupal community needs to stop pretending it's still 1996 and start taking security seriously. I think the CMS world needs a Microsoft circa 1999 moment.
That is an absolutely horrible idea for frameworks like Drupal. It essentially implies that someone has arbitrary write access to your servers and is free to modify things as he or she wishes.
Many web hosts do this. At <redacted UK-based LAMP host> we do both the WAF stuff (which Cloudflare don't actually do, unless you pay them), and your latter suggestion of read-only'ing accounts on discovery of vulnerabilities.
The unfortunate reality of the latter one is that it's probably going to break their site, since we don't know what they run, and you can't just blindly make an entire account read-only without breaking something. We could do better there, but it will never be perfect in this environment.
IMO a very important practice is designing to block escalation of privileges. For example, if your web server never stores a list of all e-mail accounts but instead queries a different service for an authenticated user's e-mail, then even a complete compromise of the web server would not allow an attacker to get a list of all users' e-mails. Perhaps the canonical example is the handling of passwords in Unix/Linux. Most competent developers know how to handle passwords, but it seems the underlying lessons/principles don't necessarily carry over. Even if you manage to gain root privileges, you cannot retrieve users' passwords. Another way of thinking about this is defense in layers, while assuming the exterior layers can be completely broken.
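A toy sketch of that idea, with everything below (the in-memory storage, the names, the single-lookup interface) being purely illustrative: the web tier can resolve one e-mail per valid session token, and passwords exist only as salted hashes, so neither a dump of the web tier nor root on it yields the address book or any plaintext password.

```python
#!/usr/bin/env python3
"""Toy sketch of blocking privilege escalation: the web tier never holds the full account
list, and the account service stores only salted hashes, so compromising the web tier
yields neither everyone's e-mail nor anyone's plaintext password. Names are illustrative."""
import hashlib
import hmac
import os
import secrets

class AccountService:
    """Would run as a separate service (or host) in a real deployment; in-memory here."""

    def __init__(self):
        self._accounts = {}   # username -> (salt, password_hash, email)
        self._sessions = {}   # token -> username

    def register(self, username, password, email):
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        self._accounts[username] = (salt, digest, email)  # no plaintext password is ever kept

    def login(self, username, password):
        salt, digest, _ = self._accounts[username]
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        if not hmac.compare_digest(candidate, digest):
            raise PermissionError("bad credentials")
        token = secrets.token_urlsafe(32)
        self._sessions[token] = username
        return token

    def email_for_token(self, token):
        """The only e-mail lookup the web tier gets: one address per valid session token.
        There is deliberately no 'list all accounts' call in this interface."""
        username = self._sessions[token]
        return self._accounts[username][2]

# Web-tier view: it handles tokens and single lookups, never the account table itself.
service = AccountService()
service.register("alice", "correct horse battery staple", "alice@example.com")
token = service.login("alice", "correct horse battery staple")
print(service.email_for_token(token))
```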
I like the idea of deploying an antidote, but that won't always work and I can see people not liking it. CloudFlare would be an example of defense in layers.
Another observation is about how the vulnerability was disclosed. I don't think this was handled properly here. Ideally you want a disclosure that allows users to take action without actually revealing the specific vulnerability. An extreme example would simply be to tell all Drupal users to take their sites down and give them enough time to do this before disclosing anything. Possibly a less broad action could be taken while still hiding the specific exploit. Releasing a patch that reveals the exploit while many users are still exposed is extremely problematic.
It's a matter of controlling that window and minimizing the number of affected sites.
Scenario #1: Release the patch with no advance notice. It takes time for people to notice and apply it. In that window lots of sites (12 million?!) get hacked. That doesn't sound like a very happy outcome.
Scenario #2: Announce in advance that a patch will be released. Get the word out. Give people time to hear it. You're not giving any specifics other than that there is some vulnerability. That doesn't help any hacker; every product has some vulnerability. There'll probably be a lot fewer than 12M sites hacked. Also consider closing the vulnerability in ways that don't directly touch the affected code, to make it more difficult to reverse engineer.
Obviously, if everyone already knows of the exploit you don't have that time, but often the way hackers hear about the exploit is through your announcement...
[EDIT: Thanks to aryx above I learnt there was a 5-day window in this specific case (an Oct 10th announcement for the Oct 15th security update). I don't think that announcement made it clear enough what the consequences of not applying the patch immediately would be. I think this is something that needs more careful consideration in the future.]
You can announce the timing of the patch availability in advance. You can tell everyone to take their site down between the release of the patch and the application. Depending on the vulnerability there may be a way of blocking it without specifically advertising the vulnerability (e.g. block it in unrelated code).
At any rate, the worst thing is simply announcing [EDIT: i.e. releasing] a patch that is not going to be applied immediately by a large percentage of your users and that allows all attackers to attack those sites. I can't quite think of a worse approach; even never patching at all might be preferable to that.
> You can tell everyone to take their site down between the release of the patch and the application.
Do you honestly believe businesses relying on Drupal for core parts of their business will do this? You are seriously naive.
> Depending on the vulnerability there may be a way of blocking it without specifically advertising the vulnerability (e.g. block it in unrelated code).
If there's a patch it's trivial to find out what it affects.
> I can't quite think of a worse approach; even never patching at all might be preferable to that.
No, that's just stupid talk, because it assumes that no one else knew about the vulnerability before the announcement.
"In its "highly critical" announcement, Drupal's security team said anyone who did not take action within seven hours of the bug being discovered on 15 October should "should proceed under the assumption" that their site was compromised."
This quote is not about the people who knew about the vulnerability before the patch was released on Oct 15th, right? I'm sorry but I still think that if the outcome of the release of the patch is 12M sites hacked this isn't a good outcome. I wasn't aware of the 5 day window but maybe a bigger announcement should have been made. I never heard of the issue (and it wasn't on HN) prior to the release of the patch. The Oct 10th announcement wasn't proportional to the size of the issue.
It's trivial to find out what a patch affects but it's not necessarily easy to find out what the issue is unless the patch actually addresses the specific bug. I'm talking more generally than this specific exploit. Let's say there's an OpenSSL bug in AES/128 implementation. You could release a patch to the AES/128 code or you could release a patch disabling AES/128. The former is an open invitation to hackers. The latter still leaves a potentially large task of figuring out what exactly in the AES128 is the issue. I'm not saying this is always possible but I'm pointing it out as an option which should be explored (and I've never seen utilized). By all means just feel free to give the hackers the detailed attack vector as a lot of the latest releases/patches have done.
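To make that hypothetical concrete: a mitigation of that shape can sometimes live entirely in configuration. A sketch using Python's ssl module (the broken-AES-128 scenario is imagined, not a real OpenSSL bug): the server simply stops negotiating AES-128 suites rather than shipping a fix to the cipher code itself.

```python
import ssl

# Hypothetical mitigation-without-disclosure: instead of patching the (imagined) broken
# AES-128 implementation, stop negotiating it. In OpenSSL cipher-string syntax, '!AES128'
# removes every AES-128 suite from the allowed set.
context = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)  # server-side context
context.maximum_version = ssl.TLSVersion.TLSv1_2  # TLS 1.3 suites are configured separately
context.set_ciphers("DEFAULT:!AES128")

remaining = [c["name"] for c in context.get_ciphers() if "AES128" in c["name"]]
print(remaining)  # expected: [] -- no AES-128 suites left to negotiate
```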
> I never heard of the issue (and it wasn't on HN) prior to the release of the patch.
What? HN is suddenly the be-all and end-all of security announcements? I heard of the issue way before the release of the patch. Anyone that subscribes to announcements from Drupal has heard of the issue.
You say that 5 days isn't enough? What period of time is? A week? A month? A year? You can always find people who somehow miss the announcement.
> It's trivial to find out what a patch affects but it's not necessarily easy to find out what the issue is unless the patch actually addresses the specific bug.
It is trivial. Take the recent POODLE attack, for example. The rumors floating around pointed to it being an issue in SSL 3 and not TLS 1.0. That contained enough information for someone to preempt the actual announcement with the exact attack.
> Nov 1st: Story about scale of hacks hits mainstream media.
Who the hell cares about the mainstream media when monitoring issues related to software you administer? That's just negligence.
OK. Let's blame the users. We have 12 million negligent administrators. Problem solved. This is a typical engineering attitude: blame your users.
You're completely missing my points. HN is not the be-all and end-all; it's a proxy for the visibility some specific announcement gets. "You must update or you will get hacked" would have gotten noticed. A mild message about some upcoming unknown security patch, not so much. And yes, by drawing more attention you increase the risk of getting attention from attackers, but in this case it doesn't seem like the right trade-off was made.
Given the specific scenario, there are certain variables under your control. There's the timing and "volume" of the announcements. There's the timing and content of the patch. You are trying to set those variables to minimize the number of affected people. If you think this case (12 MILLION) was anywhere close to the minimum, I think you're wrong. The period that is long enough is the one that minimizes the number of sites hacked; in this case, 5 days from this non-announcement was obviously not enough. I don't use Drupal and I have no personal connection to this issue whatsoever; I just judge it by the end result.
It's also absolutely clear there are degrees of disclosure for the specific vulnerability. Having a clear description of the vulnerability makes it easier for someone to take advantage of it. Your sample of one counter-example doesn't make any difference. I'm not saying you can always prevent someone from taking advantage; I'm just saying that if there's a choice between making it easy and making it a little less easy, you should choose the second. It's just like how a lock on your bicycle doesn't make it impossible to steal; it may cause the thief to move on to an easier target.