I think Dropbox would benefit from bringing in a competent PR person to advise them on making this kind of announcement and communicating with customers (which is saying a lot, since I generally disdain PR people). Commenting on a problem you've caused people without actually apologizing is kind of a dick move. Even the US military does a better job when it accidentally kills people!
The statements should come from the top, but should probably only be written by founders if they're actually good at this kind of thing.
There are a lot of companies who are relatively good at crisis communications with affected parties. Network carriers are often good (at least when talking to other networking professionals), although they often try to restrict the spread of information via NDA.
There are some reasons to limit disclosure immediately after an incident (if you think the vulnerability still exists and might put you at risk, especially for a security threat and not a regular outage). You absolutely want to include "why this won't happen again" in your message (which Arash kind of did this time), but you also want to accept blame at an emotional level -- you can do this in ways which make you look perfectionist and hyper-professional, vs. weak.
I don't actually care that much about Dropbox security myself, but if anyone in the cloud computing space fucks up badly enough and frequently enough to make users distrustful of cloud resources, it makes life harder for everyone else in the space. That is a problem for me, and for other startup founders.
> Even the US military does a better job when it accidentally kills people!
What does the US military do when it perpetrates a boneheaded security screwup that might or might not have compromised a small amount of data belonging to a few people?
(No question, Dropbox's response to this screwup is totally inadequate, but comparing it to how another organization responds when it has killed people is a bit silly.)
Shooting someone for speeding toward your checkpoint who turns out to have actually been deaf or in a hurry to rush a sick kid to the hospital is sad, really bad for everyone, should be avoided if at all possible, etc. There are also a lot of liability and cultural concerns. If you're willing/able to make a real apology for that, it should be really easy to do so for something comparatively minor.
> if anyone in the cloud computing space fucks up badly enough and frequently enough to make users distrustful of cloud resources, it makes life harder for everyone else in the space
That's exactly how I felt after the AWS outage. There was a slew of (incorrect) media reports about why cloud computing was inherently bad for your business.
> If you're concerned about any activity that has occurred in your account, you can contact us at security@dropbox.com.
I'd like the full access logs, including timestamps and IP addresses of every time my account was accessed in this timeframe. I've written security@dropbox.com about this, and am waiting to hear back.
> We're working around the clock to gather additional data. We will notify affected users if we detect any unusual logins or activity in their account. We are reviewing our logs that record password authentication events in accounts. We have not been able to detect any relevant account activity for your account during the time period in question, so we believe that your account was unaffected by the bug.
They should. Could also do what Facebook (optionally) does: require you to send them a 4-digit code via your phone to allow access to any new unrecognized devices.
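A rough sketch of how that kind of device-verification step might work (the function names, the in-memory store, and the SMS hook here are all hypothetical, not how Facebook or Dropbox actually implement it):

    import secrets
    import time

    # Hypothetical in-memory store: device_id -> (code, expiry timestamp)
    pending_verifications = {}

    def start_device_verification(device_id, send_sms):
        """Text a short code to the account's phone for an unrecognized device."""
        code = "{:04d}".format(secrets.randbelow(10000))
        pending_verifications[device_id] = (code, time.time() + 300)  # valid 5 minutes
        send_sms("Your verification code is " + code)

    def verify_device(device_id, submitted_code):
        """Allow the new device only if the code matches and hasn't expired."""
        code, expiry = pending_verifications.get(device_id, (None, 0))
        if code is None or time.time() > expiry:
            return False
        return secrets.compare_digest(code, submitted_code)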
I don't understand why they are not assuming everyone is going to be concerned about activity in their account instead of saying "if you are". I would personally expect every customer to be provided with the relevant information proactively. Should it really be up to the user to go through the geebees of files in their account?
Somewhat disconcerting that I am a Dropbox customer, yet neither the breach nor the explanation has been communicated to me by email; I've had to find out about it by reading HN.
I completely agree. Sure, this way >99% of their users will never hear about this episode. But it makes Dropbox look so much worse in the eyes of those of us who do.
>There is no way dropbox would be able to explain to them what happened without scaring them silly.
As opposed to whom? People who do know what it means and should be scared silly, but aren't because they've been beaten into submission by breach after breach after breach?
You're making the rather large assumption that this was a one time goof on the part of Dropbox.
IMO, this is reflective of a corporate culture that places testing and security on the back burner. And while some people may be OK sending their data to such a company, the rest of us might not be.
That doesn't help anything other than dropbox's efforts to obtain new customers because they would be ill informed about the track record of the service.
I don't understand why Drew doesn't either make these posts or find a PR person to filter Arash's comments through. Arash never comes off well in these blog posts or in his comments here surrounding these issues.
His comments always come off feeling like 'sorry bros, it's really not that bad, and we fixed it so no problemo' unlike Drew's comments which generally have a much more personal feel to them.
Agreed. This would be like a major bank saying "for a 4-hour window yesterday, anyone could withdraw money from anyone's account. But don't worry, it's not a problem, because only a few people actually did."
To each their own, but I am far more interested in what happened, what they are doing about it, whether it can happen again, and why I heard about it on HN instead of directly from Dropbox.
I personally am not interested in an apology or a groveling tone. These people are not my personal friends; I don't have an emotional investment in whether they pretend to care about my feelings. I have an objective interest in how they choose to act and the information they give me, so that I can make my own informed choices.
That makes it sound like it was primarily the fault of the developer who created the bug - things like this indicate an inadequate process or culture, not simply a mistake at the developer level.
Edit: Does Dropbox have any testers or is all testing based on automated tests created by the development team?
Or decide how to balance the risk-benefit equation. Perhaps this will inform my choices of which things to keep in my dropbox and which to keep elsewhere. For example, I might decide that my secrets are fine there, but it is inappropriate to store client secrets in Dropbox since the client has certain expectations around my respect for their privacy.
For the duration of the event described in the post on your blog on Sunday, I'd like the time of login and IP address of any authenticated sessions during the window. I'd also like to understand why a post from one of your customers, linked to via Hacker News, was the only notification I received as one of your paying customers. If you know which user accounts were logged into during the event, it seems rather straightforward that you would notify those impacted. It seems clear that had a third party not brought this to light, you'd have felt it unnecessary to notify your customers.
As Requested, I received the response about an hour after I posted this/sent the email. Can't say the response is terribly reassuring, but I suppose if no authentications have happened I don't have to be concerned about THIS incident.
> We're working around the clock to gather additional data. We will notify affected users if we detect any unusual logins or activity in their account. We are reviewing our logs that record password authentication events in accounts. We have not been able to detect any relevant account activity for your account during the time period in question, so we believe that your account was unaffected by the bug.
Security leaks happen. The lesson here is treat anything that you put on a machine you don't completely control as being at-risk, even if you pay for top notch security and the vendor guarantees it. The guarantee (and any compensation) isn't a lot of comfort if the information hits the wild. This includes everything from web servers on Amazon to your Gmail account.
In the case of Dropbox, I would suggest PGP or Truecrypt anything sensitive and keep the keys locally or in another location that is completely unrelated to the box.
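As a minimal illustration of that idea in Python (using the `cryptography` package's Fernet recipe rather than PGP or TrueCrypt, with made-up file paths), the point being that the key never lives anywhere Dropbox can see:

    from cryptography.fernet import Fernet

    # Generate once and keep it somewhere Dropbox never syncs
    # (local keyring, USB stick, etc.).
    key = Fernet.generate_key()
    fernet = Fernet(key)

    # Encrypt locally before the file ever lands in the synced folder.
    with open("client-notes.txt", "rb") as f:
        ciphertext = fernet.encrypt(f.read())
    with open("Dropbox/client-notes.txt.enc", "wb") as f:
        f.write(ciphertext)

    # Decrypt later with the locally held key.
    with open("Dropbox/client-notes.txt.enc", "rb") as f:
        plaintext = fernet.decrypt(f.read())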
Not that it would really change anything, but there isn't much wording around apologies or being sorry about letting that happen.
More details would have been nice too. Allowing anybody to log into anybody's account is a big deal, even if in the end a small percentage of people were likely affected. It's not like I couldn't access my account for a few hours or that the sync got messed up somehow.
Also, it'd be nice to know how the bug was discovered on Dropbox's side: did they realize it themselves or was it from nice people who found the problem?
Yeah, that's kind of what I'm curious about: did Dropbox learn about it through that guy's discovery? If so, we're lucky that that guy came across it the very same day the bug was introduced. I'd assume there aren't that many people who would have found the security hole, been nice not to abuse it and cared enough to let Dropbox and the world know…
Perhaps the idea of monitoring the number of failed authentications and alerting both when the rate goes over a certain threshold and when it drops below one doesn't seem so apparent until after the fact.
There might be a big lesson here for other services: monitor your authentication system in both directions.
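A very rough sketch of that kind of two-sided check (the thresholds, window, and alert hook are all made up for illustration):

    def check_failed_login_rate(failed_count, window_seconds=600,
                                low_per_min=0.5, high_per_min=50.0):
        """Alert when the login-failure rate is abnormally high (attack?)
        or abnormally low (is every password suddenly being accepted?)."""
        rate = failed_count / (window_seconds / 60.0)
        if rate > high_per_min:
            alert("Login failure rate unusually high: %.1f/min" % rate)
        elif rate < low_per_min:
            alert("Login failure rate unusually low: %.1f/min "
                  "-- are bad passwords being accepted?" % rate)

    def alert(message):
        # Placeholder: page someone / feed your monitoring system.
        print("ALERT:", message)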
This reaffirms my belief that integration/functional tests give far more bang for the buck than unit tests for deployed applications, though each has its respective merits.
This is not an either/or scenario. Both test different things and should be written. I don't want to criticize DB since god knows everyone makes stupid mistakes, but this finally convinced me that I will never pay them for their service and that I should encrypt ALL data in that folder (instead of just the sensitive data, which is what I do now).
Better yet, basic unit tests that verify passed and failed authentication of test accounts. They should run before and after each code deploy (among other unit tests).
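Even a trivial set of assertions like these would catch an accept-everything regression, as long as they run against the deployed system rather than a mock (here `authenticate` is a hypothetical stand-in for whatever entry point the real login code exposes):

    import unittest

    from myapp.auth import authenticate  # hypothetical login entry point

    class AuthenticationSmokeTest(unittest.TestCase):
        """Run before and after every deploy, against the real auth path."""

        def test_correct_password_is_accepted(self):
            self.assertTrue(authenticate("smoketest@example.com", "correct-password"))

        def test_wrong_password_is_rejected(self):
            self.assertFalse(authenticate("smoketest@example.com", "definitely-wrong"))

        def test_empty_password_is_rejected(self):
            self.assertFalse(authenticate("smoketest@example.com", ""))

    if __name__ == "__main__":
        unittest.main()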
It's very disconcerting that they don't seem to be doing this.
Unit tests are a good thing, but could miss problems that crop up only in production; for instance, if they delegated authentication to another server, or if something got deployed out-of-process.
I don't like to be pedantic but code that tests multiple components and how they interact together aren't really "unit tests" anymore since they are testing more than one unit - it's pretty reasonable to assume that an authentication system is comprised of many modules / classes / etc.
The blog post mentions that they ended all logged-in sessions, but did they also roll back any new sync settings set from the start of the incident to when they did their cleanup? E.g.:
User Joe's account is logged into by attacker Tom. Tom sets his computer as one of Joe's "My Computers". Does what they did to clean up the problem invalidate this or does Joe have to log into his account, look at his list of "My Computers" and remove the ones he doesn't recognize manually to stop Tom's system from automatically syncing all his stuff until he does finally notice (which is likely never for most users, I'd assume)?
I'm not affiliated with DropBox or familiar with the way their systems work internally, but it seems pretty likely to me that the "magic files that let you log in without a password" are what DropBox calls "sessions", since this is the only form of access control you have after a password. So yes, they did roll these back.
Doesn't this completely blow their "only a few employees have access to your data" stance out of the water? Doesn't that mean that, for example, every one of their programmers has access to user data? If a presumably simple bug can give the whole world access to your data, then it can't be that difficult for any Dropbox employee to access it (again, from the inside).
If you weren't around for the last discussion and haven't figured it out from the web client: Dropbox encrypts and decrypts your data on their servers, which inherently gives anyone with access to those servers and their encryption/decryption methods access to your files.
The trade-off, as discussed many times, is between security and usability. Dropbox takes the usability route, and for some users this is great; for others, not so much.
Compare and contrast how Dropbox has communicated this breach with the storm that rained down on LastPass when they forced password changes because of an activity they couldn't explain.
This might explain why Dropbox's poor communication of this security breach is the #winning move.
LastPass Disclosure Shows Why We Can't Have Nice Things
08 May 2011
A few days ago, LastPass announced they would be forcing their users to change their master passwords in response to what was essentially "something weird":
"We take a close look at our logs and try to explain every anomaly we see. Tuesday morning we saw a network traffic anomaly for a few minutes from one of our non-critical machines. These happen occasionally, and we typically identify them as an employee or an automated script.
In this case, we couldn't find that root cause. After delving into the anomaly we found a similar but smaller matching traffic anomaly from one of our databases in the opposite direction (more traffic was sent from the database compared to what was received on the server). Because we can't account for this anomaly either, we're going to be paranoid and assume the worst: that the data we stored in the database was somehow accessed."
LastPass acted exactly like we wish most companies would act: responsibly. And the media's response? Declaring LastPass "hacked" and "vulnerable", and placing them in the same category as Sony—who definitely were hacked—with sensationalist headlines like:
WARNING: Your Web Browser's Master Password May Have Been Stolen -- Change It Now
LastPass Has Been Hacked And Asking Everyone To Change Their Master Passwords
LastPass Hacked, Change of Master Password Urgent
LastPass Is Hacked – Change Your Master Password, But Don't Panic
Should the LastPass, Sony hacks make you fear storing data in the cloud?
The story isn't over yet. I'm still using Lastpass; I'm certainly not going to continue using Dropbox. I'd wager a fair amount of people will do the same.
There have been a lot of comments bashing Arash's response. His post certainly wasn't personal (and should have had a more apologetic tone), but beyond that, what would you have liked to see him communicate? He seemed to do a good job explaining what happened and outlined a plan moving forward.
I'm not trying to defend him but I am genuinely curious what other people think his response was missing.
2. The error is presented like casual news. Sort of downplaying the incident and not acknowledging that they screwed up. If the accounts were freely accessible for 4 odd hours, it is pretty serious.
3. No actual details of bug introduced were provided.
4. The communication was done on the blog, and emails were not sent. Again, it is an attempt to downplay the incident.
So, to take it to one extreme, you think that if zero accounts were accessed, then it would be reasonable for zero users to be directly notified about this?
Some people might also be concerned that the note about sending emails wasn't added to the post until after there was mass outrage at the lack of notification. Makes it feel very inauthentic, as though they were really hoping they'd be able to sweep it under the rug.
I'm not sure I totally understand. Is he saying that the users who were already logged in during the code update were the ones who had access to any account? Or is he wrangling words and letting us know that < 1% of Dropbox users had active web sessions at the time?
I understand the challenge that the Dropbox team finds themselves in here, but I'd very much like more details as to exactly what happened and what was possible for 4+ hours. I'm less interested in their eventual report of the bad stuff they think did or didn't happen during that time.
He's saying that for any account to have been potentially compromised due to this bug it would have had to occur during the ~4 hour window the bug was active. During that window less than 1% of all dropbox users logged in, so that puts a cap on how many users could potentially have been compromised. Of course, of that "less than 1%" most were probably valid logins, so that 1% doesn't represent only compromised accounts but rather a ceiling on how many could possibly have been compromised.
That said, I can't help but feel misdirected. I mean, obviously the cap on the number of compromised accounts is relevant, but I think more relevant is the fact that 100% of the accounts were completely insecure for hours.
Misdirected, because as a user I don't care at all how many accounts were actually compromised. This isn't a no-harm-no-foul incident. It's an enormous breach of trust that causes me to completely rethink what I'd be willing to do with their service.
As it could so easily be seen as bashing/trolling I need to preface this question by saying that I'm asking in complete seriousness.
How does one accidentally set the failure mode of any piece of authentication code to escalate privileges instead of denying them? Even when I was first learning to code web apps I never found myself accidentally accepting any password.
It's shocking (to me, at least) because it seems like such a beginner mistake.
One possibility: A developer short-circuits permission checking for local testing and accidentally checks in the change. But automated testing should catch these kinds of mistakes.
Lots of ways, for example: Delegate authentication to another server, have your authentication server go down, mess up a try/catch clause in your app that doesn't handle a bad connection correctly
It's very doubtful that authentication in something like Dropbox is as simple as a single self-contained password check.
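To illustrate one of those failure modes with a purely hypothetical snippet (not Dropbox's code), here is how a mishandled fallback around a delegated auth call ends up accepting every password:

    def check_password(username, password, auth_backend):
        """Delegated authentication with a fail-open bug."""
        try:
            return auth_backend.verify(username, password)
        except ConnectionError:
            # BUG: "temporarily" treating a backend outage as success so users
            # aren't locked out -- which means any password is now accepted.
            return True
            # Failing closed (return False) is the correct behaviour here.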
They made an apparently appalling security error; in the name of transparency they should be more specific about what code was the problem and how this bug worked so people have a better idea of really how bad they screwed up. Right now it could be something that could have blindsided anyone or something incredibly obvious that they should have seen or had some process to detect.
Since a few people have commented on finding out via HN rather than Dropbox directly (as did I), I thought I'd post that I received an email from them just then.
In my case though it included "we noticed some potentially suspicious activity during the period", which for me was linking the desktop app. I don't know if everyone will be receiving a note, or just those who fit the potentially-suspicious profile. A quicker notification would have been nice, but I appreciate that they're also obviously looking for any aberrant behaviour.
You're assuming their internal tools have the same fields and field-validation requirements as public-facing login mechanisms, which is likely not the case.
What are people storing on Dropbox that they're getting so upset by this? The chance that someone took advantage of this bug and accessed YOUR account is so small. I thought the blog post was fine, but I guess it's a good lesson for founders to take these kinds of security issues more seriously than they otherwise would - or else your users will freak out... It shouldn't have happened, but they'll learn from it and improve their system so it won't happen again.
If Dropbox had used client-side encryption (in addition to their server-side security measures), you wouldn't need to worry about issues such as server security.
That's why you shouldn't trust statements like: "Theoretically we can read your data, but we make sure only you can access it". If there's the theoretical potential for abuse it will happen sooner or later. Either deliberately or not.
This is true, but I don't wish for this. Dropbox with client-side encryption would be a different product where a lot of their features would be very difficult to sustain. For example, easy web access or having iOS and other apps easily sync using the Dropbox API.
I am happy not putting sensitive files on Dropbox, and encrypting the few I do.
Sure. But here's the thing: people don't usually forgive things like this. People heard about an issue that potentially affected their accounts through the media before they heard it from Dropbox - that's the disappointing bit.
On top of not letting something like this happen (where were their tests?), they should be diligent in talking about something like this immediately, so that people know they're not being lied to, ever, and that they can trust the service with their data. By not doing so, they leave us wondering how many other bugs we don't know or hear about that might affect our accounts too.
Wouldn't a functional test have caught this better? Ideally a unit test would use mock objects for all external resources, so it probably wouldn't have caught this. A full end to end functional test would have.
No one knows that they didn't unit test these methods. Bugs can arise from the unexpected interaction of separate components also - does anyone seriously think that unit testing is a silver bullet?
Well, I think it goes to show that an automated process that attempts to log in with a random string every 10 minutes to a random account should probably be a standard feature of production. (Or maybe just to a specific account so you don't lock people out, but random would be ideal.)
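A crude version of that canary might look like this (the URL, the canary account, and the success check are placeholders; a real probe would have to match whatever your login endpoint actually returns):

    import secrets
    import time
    import requests

    def probe_once():
        """Attempt a login with a random wrong password; success means the
        authentication path is failing open."""
        bogus_password = secrets.token_urlsafe(16)
        resp = requests.post("https://example.com/login",
                             data={"email": "canary@example.com",
                                   "password": bogus_password})
        if resp.status_code == 200 and "welcome" in resp.text.lower():
            raise RuntimeError("Random password was ACCEPTED -- auth is failing open!")

    if __name__ == "__main__":
        while True:
            probe_once()
            time.sleep(600)  # every 10 minutes, as suggested above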
Someone at work just asked me: what's a better alternative? Any suggestions? They are reasonably technical, so they can cope with something a little more complex than Dropbox, but they want something secure, easy to use, and reliable long-term...
They're getting torn apart there. Come on guys, it was a bug that existed for a short amount of time and nobody was affected afaik. I know I'm gonna get shit for defending them, but come on. If you're using Dropbox to store your passwords or confidential documents, you deserve to have your account hacked. It's harsh, but it's an unfortunate reality.
Totally agree... It sucks that it happened, but come on, take a deep breath. I've been trying to think of any documents I might have that would really cause me to get so upset about this... Some might be a little embarrassing, but I really can't think of anything that would make me this pissed off.
Arash, man, I don't know you, but I think it's totally badass that you guys apparently deploy code on Sunday afternoons. This was a pretty terrible oversight, but whatever your 'safeguards' are, don't do silly things like "only production releases on Wednesdays" or all the usual things that add up and cause a company to shrivel into a petrified husk.