DigitalOcean not destroying droplets securely, data is completely recoverable (gist.github.com)
187 points by nixgeek on March 31, 2014 | 66 comments



I think the moral of the story is that if you are so concerned about IMMEDIATELY deleting your data that a 48 hour window in which it can be recovered is unacceptable, you should definitely run your own servers.


If I may suggest a slight rephrase: If you are concerned about RELIABLY destroying your data (which involves being able to verify for yourself that it was thoroughly destroyed when you think it was destroyed), you should keep that data on your own devices.

If your data is stored on a server that someone else controls, you can't reliably destroy that data. Even if it appears to have been destroyed immediately, there could be other copies somewhere. So wherever possible, sensitive data should not be stored on someone else's server. Even in encrypted form.


Or run on an encrypted volume that you scrub prior to killing off the instance.


You shouldn't rely on your hosting provider to secure/shred your data; do it yourself.

encfs is really easy to set up and use; everybody should be using it, especially on VPSes:

    $ sudo apt-get install encfs
or:

    $ sudo yum -y install fuse-encfs
then:

    $ sudo encfs ~/.encrypted /home/private
The /home/private folder is where you place your files (web sites, etc.); ~/.encrypted is where the encrypted version is stored. When prompted, hit 'p' for the default paranoid mode, enter a password, and you're done. Read more about it on the homepage[0].
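
When you're finished working with the plaintext view, unmount it so the decrypted files aren't left exposed. A small addition to the steps above (fusermount is the standard FUSE unmount tool and ships with the fuse package):

    $ fusermount -u /home/private

After this, only the encrypted representation under ~/.encrypted is accessible.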

When you are done with a server and need to secure delete, use shred, which is installed on both RHEL- and Debian-based distributions. Note that shred works on files, not directories, so walk the tree with find:

    $ sudo find ~/.encrypted -type f -exec shred -fzu {} +
    $ sudo find /home/private -type f -exec shred -fzu {} +
[0] http://www.arg0.net/encfs



encfs is not completely transparent, so don't just blindly do this; you need to check that your data/workflow doesn't interact badly with it.

Two examples: (Depending on the settings you choose) hard links might not work at all, and renaming can be very very slow. Both of these are common in certain applications.


Wouldn't an encrypted volume be essentially random data to anyone who doesn't have the key? So all you'd need to do is scrub any traces of the key itself.
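
For a dm-crypt/LUKS volume, for instance, that would mean destroying the LUKS header that holds the encrypted key slots. A rough sketch, assuming /dev/vdb is the encrypted device (with LUKS1 the header and key material typically sit within the first couple of MiB):

    $ sudo cryptsetup luksDump /dev/vdb    # sanity check that this is the right device
    $ sudo dd if=/dev/urandom of=/dev/vdb bs=1M count=4

Once the header is gone, the remaining ciphertext is useless without a backed-up copy of the header.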


Sure, but if it's sensitive data, the extra couple of minutes is probably worthwhile.


Both retaining the IP and being able to recover a destroyed droplet are strictly features. It would be a problem if someone else could create a droplet and recover your data, not when you can do it.


> It would be a problem if someone else could create a droplet and recover your data, not when you can do it.

This. Sometimes it really is better if it's not truly gone forever - in other words, you still have the option of recovery. I'm speaking from the experience of someone who has accidentally lost important, irreplaceable data that had yet to be backed up.

Especially after reading this:

> There was also an incident where a third party provisioning service which integrates with our cloud as well as others was compromised and all of the servers for those customers were destroyed


Exactly. The default behavior is exactly as it should be-- scrub the VM itself but keep a snapshot for 48 hours in case it was deleted accidentally.

There is no security concern here but if some users are still upset with this default behavior then they should be manually scrubbing their sensitive data prior to deleting the droplet or, better yet, they should be using their own hardware.

Frankly, I'm shocked at a lot of the responses here. When I first came across this article it was immediately obvious to me that this was not an actual security concern but rather a mixture of misunderstanding and skepticism. So I came to Hacker News feeling entirely confident that this article would have been efficiently debunked, and instead I find all manner of ill-informed arguments. The people that don't understand the original article, its fallacies, and the reality of how Digital Ocean is handling droplet destroys should not be running servers, IMO. This stuff should be obvious to you, and furthermore, if your data is particularly sensitive, you should already have been operating under the assumption that the host could not be trusted and that data scrubbing must be done manually.


While I love these features of my droplets, I can see that it isn't a far stretch to construct situations in which you wouldn't want a droplet to be reconstructable. Destroying evidence is probably the most obvious one.


In which case you should be manually scrubbing your data on your own. Problem solved.


DigitalOcean instances run with a machine-shared storage pool (think EC2 ephemeral storage), which is why not securely erasing them was a problem.

The "destroyed instance" you see in the "spawn instance from a template" UI, on the other hand, is a snapshot of the destroyed instance, taken upon the instance's destruction. Snapshots are stored in a separate network-object-storage pool (think S3), and raw-reading your ephemeral storage won't turn up deleted snapshots.

Securely erasing an instance means erasing its data from shared ephemeral storage. It doesn't mean erasing any snapshots of it, because snapshots aren't located on shared ephemeral storage.


The issue I have with your statement is that whilst you are likely entirely correct, it was not the expectation defined by DigitalOcean when they built the interface.

The interface clearly states:

"This is irreversible. We will destroy your droplet and all associated backups"

It is very much reversible, so that statement is plainly untrue.

There are also nasty 'corner cases' (very unexpected) in the resuscitation of droplets, whether you had 'Scrub Data' checked or otherwise, wherein DigitalOcean tampers with the snapshot contents and replaces the SSH host keys. I totally understand the point Moisey made about needing to poke in ~/.ssh/authorized_keys so a customer can regain access, but that's very different from the /etc/ssh host keys, which are replaced entirely unnecessarily, and with significant security detriment, since you now have a harder job verifying that this is indeed the instance you think it is.


I agree. I would have expected data to be nuked, not nuked on one volume and retained on another.


Would you expect deleting a Docker container to delete a Docker image made from it?


That's not necessarily a good analogy, since you're talking about a COW (copy-on-write) situation. Not always, granted, but in this case it seems true.

DigitalOcean admitted that in some cases they are using qcow, so I wouldn't say I expect the image (e.g. Ubuntu 13.10 x64) to disappear when I hit 'Destroy!' (with the appropriate bits set for secure destruction) but any changes which I have made should certainly be gone.

Single Layer: http://i.stack.imgur.com/Riw7y.png
Multi Layer: http://docs.docker.io/en/latest/_images/docker-filesystems-m...

In Docker parlance I would not expect the base image to disappear forever, that's a separate layer, but I would expect that were DigitalOcean serving up containers with a 'Scrub Data' option then the contents of my writeable container would be securely erased.


The "destroyed instance" snapshot is a template (just like everything else in the "spawn an instance" UI.) It's not meant to be used to "resuscitate the instance"; it's meant to be used spawn new children from a prepared state. This is what differentiates it from an instance's "backups", which, as DO says, are purged by secure-erase.

Note a word above: children. As in multiple. If you're spawning multiple children from a template, each child needs its own identity and unique tokens (e.g. SSH keys.)


I didn't ask them to make a template from an instance which I wanted securely, irrecoverably destroyed, and one of DigitalOcean's key lines of explanation for this entire issue was allowing customers to undo mistakes, which to my mind reads more like resuscitation than templating.


I'm not sure I'd criticize a service for not destroying my data by default when there is an easy way for me to actually destroy the data myself.


Thanks for pointing this out. Nearly a year ago we implemented a backup mechanism which stores a destroyed machine for 24 hours. This is only enabled for users with a valid paying account. We have sometimes used this mechanism to return a droplet to a customer who accidentally deleted it; other times, when an integrated third party had a security problem, we were able to recover customer droplets because of it.

We will take the necessary steps so that if users enable 'scrub all data' we will not store this temporary image, and destroys will therefore be immediate and permanent.


Hi Folks,

Just wanted to clarify the issue for anyone who didn't have the time to read the full gist.

When we first started DigitalOcean we occasionally received tickets from customers about recovering a droplet that they had destroyed. Unfortunately, when a droplet was destroyed it was gone from the system and it wasn't possible to recover it. To help our customers we decided it was a good idea to take a temporary snapshot of the droplet after the destroy was issued, one that would automatically expire. This way, if someone mistakenly destroyed a droplet they could still recreate it.

This proved to be a lifesaver for many customers of DigitalOcean when a third party company that provided a provisioning service that integrated with DO, AWS, Rackspace, etc. was compromised and the attacker issued a delete to all customers and all instances. Because this mechanism was in place we were able to recover almost everyone's droplets.

We ran into an issue with securely scrubbing data, which was publicized on HN, and we implemented a fix immediately with a scrub flag. Unfortunately we made a mistake and set the default to false. Most customers click the default (I do the same thing myself, since I assume that the default is the best course of action) and this led to the issue resurfacing. This also was posted to HN and we immediately decided that the default behavior should be to scrub.

Prior to this, when a customer selected secure scrub they were taking two actions (issuing a destroy and setting a flag), so it was safe to assume that they indeed wanted the data completely destroyed. However, when we had to reverse the default we were left in a situation where, if we kept using the secure destroy flag as the indicator for whether or not a temporary snapshot should be created, the default would no longer create one.

Since we implemented the temporary snapshot feature, we have had 1154 droplets restored after a destroy, from 752 different customers.

That's 752 customers who were elated to find out that they could recover a droplet that was mistakenly destroyed. Each time one of those customers recovered a droplet it was a huge win for them, so this is obviously a very beneficial feature.

We assumed that since the temporary snapshots are automatically destroyed, this would not be an issue. In fact, in the control panel we provide an additional feature which can make the snapshot permanent; otherwise the snapshot is deleted.

I think the issue that is brought up here is definitely worth a discussion, and we take security very seriously. Since the prior HN post regarding our default behavior, we have been working behind the scenes to make scrubbing the default, so that the scrub flag can be removed entirely and all destroys, regardless of how they are issued, will be secure.

That behind-the-scenes work is almost entirely done, so this discussion of the temporary snapshot is great because it allows us to revisit the issue once again.

We have not had any other customer complaints that during a secure destroy the droplet, the backups, and the snapshots were not immediately destroyed. So it was great to engage in a conversation with the customer to understand their view on how they wanted these commands to function.

We'll be engaging with engineering tomorrow to see if it's time for us to begin to phase out the scrub data flag and instead perhaps open up a new flag which would create a temporary snapshot.

For the UX/UI of the control panel we would make the default behavior of the destroy be to create a temporary snapshot, and we would have to discuss whether the API should behave the same way.

Often API customers are creating and destroying many servers so it may be safe to assume that they do not want a temporary snapshot, though having default behaviors differ between the control panel and the API is generally not a good idea.

I think in general this highlights an issue that all startups deal with: as the product grows and matures and features are added, there are often unintended cascading consequences.

In this case we have done our best to do right by the largest number of customers to ensure that data is safely and securely destroyed while still providing a default behavior that would protect customers against accidental destroys whether they be self-initiated or otherwise.

If anyone has any questions regarding this issue or anything else, please always feel free to email me directly: my first name, Moisey, at DO (expand that) . com.

Thanks,

Moisey
Cofounder, DigitalOcean


Availability is a core part of Security (not just Confidentiality or Integrity). In that respect, I think DO is doing the right thing - they eventually scrub all data, but leave a small window to recover from mistakes.

Of course, it depends on the use-case and customer priorities, but in my opinion, availability is a much bigger concern for most DO customers than confidentiality. If customers are deeply concerned about the latter, I would say they are less likely to choose a virtual platform in the first place.

I think this could be corrected with a small note (backup data will be scrubbed within 24 hours. If you wish to remove this data sooner, click here).

Having experienced making a mistake and removing the wrong virtual host (albeit on Linode), I really appreciate the fact that it could be recovered. And yes, I have external backups and could have recovered it otherwise, but it was much easier to have an 'undo destroy virtual host' button (so to speak) than to kick off my restore process.

I hope Digital Ocean and other cloud providers don't change the default behaviour. It will hurt many more customers than it would help in my opinion.


I agree with you that temp images are good; however, that checkbox text is not phrased right. It should either say that the scrub takes up to 48h to happen, or specify that it only happens on the hypervisor, and it should give you the option to not make temporary backups as well.


Your approach seems very sensible and the data you provide supports it. I do hope you stick with this, and more importantly with your data-driven methodology, and don't let HN posters who don't face these use cases drive you to abandon it.


We love to hear feedback from customers because it allows us to refine the product. From our internal metrics we know that providing an automatic snapshot recovery function is essential, given the number of times it's been used, but we can also understand this customer's issue with how data is destroyed.

The data is destroyed securely, but not as immediately as they would have liked since the default behavior is to preserve a snapshot until it automatically expires.

These discussions are great for us because they allow us to refine the user experience and ensure that we can provide a great service. As we've been doing a tremendous amount of work on the backend code, this is a great time for this customer's concerns to be discussed: we are knee-deep in large rewrites at the moment, so it's the perfect time to address this, rather than deploying to production and then having to go back and retool.

Given that we've been working on making secure destroy the behavior all the time, having it be an option doesn't make sense; instead we can use that UX/UI real estate to expose the temporary snapshot option, but again we would love to hear more feedback on whether that's necessary.

We would of course love to please every customer, but then we would have a platform with 1,000 features, so sometimes you have to politely decline a request because it doesn't fit in with the general use cases. That doesn't mean we shouldn't engage in a discussion, as that is always helpful.

Even if action isn't immediately taken, these conversations often lead to us asking numerous questions, which can produce new product and feature ideas that may not be tied to the original question but can still move the product forward in other areas.

Thanks!


I agree. I also think that the poster could have talked with a DO rep before making this public. It seems like the intention was to "expose" DO. I could be wrong though.


Something doesn't add up. The user who wrote the report did not request that the destroyed droplet be recovered. The existence of lingering data was disclosed by traces which appeared in a brand new droplet. The only reason I can come up with for why that would ever happen is that you are re-using droplets, perhaps because erasing and resetting a few files is cheaper and faster than provisioning a brand new instance. If this is indeed the case, then what you are suggesting -- namely, that what looks like sloppiness is in fact conscious, customer-driven product design -- is completely false, because it means that the droplet recovery feature is available only in those cases where the customer did not create a new droplet after destroying the droplet she would now like to recover. I guess that this is why you were "able to recover almost everyone's droplets" after an attack, but not everyone's. The "temporary snapshot" feature is not actually a feature, or even a "mechanism" -- it's a bug, just one whose side effect is occasionally positive.


Sorry that's on me for not explaining better:

I created a droplet and then destroyed it with the 'Scrub Data' checkbox enabled, and was surprised when I noticed it had been turned into a "temporary snapshot" rather than eliminated entirely in a secure, irrecoverable manner as the text around that UI element would have suggested.

I then went ahead, to prove a point, and restored said snapshot onto an instance called 'test'; what you are seeing in terms of lingering data is from 'test', as very primitive proof that the snapshot did indeed occur and that 'Scrub Data' doesn't behave how I think it should.

tl;dr - raiyu is being perfectly forthcoming and DigitalOcean is not reusing droplets to avoid erase/reset, we're just quibbling over the semantics of whether the 'Scrub Data' box should have a safety net or not, and whether the presence of a safety net could be deemed a security issue.


I read the blog quickly and didn't understand that you restored the snapshot either. I think that's something you should highlight better, because it completely diminishes the severity of the issue.


Agreed. It took me a while to figure out and it basically makes your post moot, from what I can tell. I thought you made a brand new droplet and found your old data on it.


Moisey - Hat tip definitely due; you've been nothing short of responsive about the concerns raised, and I can completely understand what led you down this train of thought, although I continue to think it presents an unwelcome surprise to customers thinking about their data security.

Destroyed should mean destroyed, and if the outcome is a UX/UI change that removes the 'Scrub Data' box (making scrubbing the default) and replaces it with a 'Temporary Snapshot' box (along with an accompanying tooltip for explanation), then I think that would be a perfectly great way to assuage my concerns, and a positive development for the DigitalOcean platform.

Obviously there remains a question of how best to tackle this from an API perspective; perhaps you can consider adding a new parameter to /droplets/:id/destroy along the lines of 'temporary_snapshot' (bool) or similar, with a well-documented default behaviour if it is not passed in.
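
For illustration only, a destroy call with such a flag might look something like this against the current v1-style API (the 'temporary_snapshot' parameter is the suggestion above, not something that exists today):

    $ curl "https://api.digitalocean.com/droplets/DROPLET_ID/destroy/?client_id=CLIENT_ID&api_key=API_KEY&temporary_snapshot=false"

Defaulting it to true when omitted would keep the API consistent with the control panel's safety net.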


Agreed, this highlights a larger issue that we've been dealing with as we've grown to over 100,000 customers and the platform has evolved.

When you have a singular product with a very narrow focus, say Instagram, it allows you to really strip away everything besides the core functionality.

When building a platform often you start off with something simple and elegant and as the number of customers grow you quickly begin to realize that as a platform you need to provide more customizability than first intended.

With that in mind we've been working with our customer support team to understand what the most frequently asked questions are, and have begun to automate that directly into the support system, as well as providing a more relevant and up-to-date help guide for the platform, both directly inside the control panel and searchable.

Of course one of the thoughts was to make all of the help documentation public on github and then also field requests from the community to help rewrite any documentation that wasn't clear.

Sometimes when you are engineering a product you take for granted all of the knowledge that you have and something that is obvious is entirely missed.

So that would be the idea behind asking our community to help us update the documentation since they come in with fresh eyes and can spot things that we can't see because for us it's been a continuous spectrum of development rather than something that we just walked into at some point in time.

Thanks!


You mentioned in a comment above an occurrence where someone's DO account was compromised, leading to many droplets being trashed, but the account holder was able to recover because of the waiting period. If you add a flag to bypass the temporary snapshot, wouldn't this mean the hacker could have succeeded in their attempt to wipe out all the droplets?

I actually prefer the enforced cooling off period for destroying droplets. (As long as the UI/API docs are cleaned up a little to communicate it better.)


Replacing one checkbox with another is not a solution, since it will remain optional for the customer and thus can easily be missed when submitting the form. For such a sensitive topic there must be explicit confirmation of the user's intent.


I see where you're coming from; a lot of people don't seem to understand the meaning of "permanently and irrecoverably deleted".

My suggestion is to remove the secure delete flag/option (there's little reason not to securely delete things) and make the default behavior on both the website and the API a delete with 24h-48h backups.

However add a flag/option that does permanently and irrecoverably delete the droplet without making any backups.

This flag/option should require active input from the user so there are no excuses or misunderstandings: once checked, it's gone for good.


Semi offtopic... I love your interface and service. However, IPv6 is a requirement and you don't offer it. It's been promised for years now, but you leave your customers in the dark. Either say no, like AWS, or yes. Please don't keep dicking with us.


There was just some news on that front, actually. There should be an IPv6 beta in Singapore in a few weeks.

http://digitalocean.uservoice.com/forums/136585-digital-ocea...


Hope so; this is a programming issue.

But I think DigitalOcean is doing the right thing for customers.


How did you solve the original performance problem with scrubbing by default?


You've responded well -- all you can hope is that people understand.


This is a total non-issue. OP was wrong in his interpretation of the recoverable droplet, then pursued an argument with the same high-pitched rhetoric about what amounts to a UI issue. Sorry, not news.


Surely this whole mess can be fixed with a 5 minute UI change - reword the copy and add a checkbox for whether or not to take a snapshot, defaulting to checked (take a snapshot).


Kudos to raiyu and nixgeek for one of the most civilized discussions among disagreeing hackers I've seen in quite some time! :)


The notion of having 'secure' data on someone else's hardware is just a bit silly.

I think the OP here definitely points at something, but primarily that the `scrubbing` checkbox is essentially a placebo button.

Getting a little meta: a 'no matter what delete this in a fully 100% absolutely totally unrecoverable forever fashion' checkbox is just begging for a generic law enforcement ping which DO would be forced to provide covertly.

I appreciate OP's side, and I definitely see DO's viewpoint of customer happiness >> accurate UI, but the lesson here is definitely to own your own necessarily-secure data.


You have to hand it to Digital Ocean for actually listening to customers, explaining themselves thoroughly, and taking the issue to the community (HN, on several occasions) for discussion. Totally civil all around. Thanks to nixgeek for raising the potential issue, and thanks to raiyu for engaging in a meaningful back-and-forth discussion with everyone.

The issue itself? I have accidentally "terminated" a few AWS instances that I instantly wished I hadn't, and so I can see the benefit of it sticking around for 24 hours. This would have saved me a few times if I was using DO instead.


Apparently, this is a UX issue. The checkbox is not the right choice for allowing customers to make the decision.

I can suggest a couple of solutions. Option 1: Explicitly state during the signup process that, for recoverability purposes, all customer data is removed N hours after the request. At this point DO may lose some customers who have the wrong expectations of the service, but the interface will be simpler.

Option 2: Replace the checkbox with an additional confirmation page that asks the customer about the data removal strategy ("trash can" or "scrub"). There should be no default selection here. Additional safety measures can be implemented to avoid accidental selection of "scrub": confirmation by e-mail, an SMS security code, or some other "two-factor" approval.


The behavior sounds well considered; the web site just doesn't describe it to the user.

How about if the Destroy dialog read something like:

  This is irreversible. We will destroy your droplet and all associated backups immediately. We will keep a snapshot that you can use to recover your droplet; you can disable this below.

  [v] Scrub data - [etc.]

  [v] Temporary snapshot - this will keep a snapshot that you can use to recreate your droplet. This snapshot will be destroyed in 24 hours.
and the Select Image list showed something like:

  Destroyed Droplets

  chef.nl-haa1.infr.as f… — automatically deleted at 2014-04-01 09:25 UTC


In my experience the people that want to restore a destroyed instance outnumber the ones that want the instance scrubbed right away by 10-1 or so (if not more). So basically we (at another ISP) would decommission the instance (which could not be allocated to another customer) and leave a grace period, after which it would be effectively scrubbed. If a user wanted an immediate scrub they could send a ticket and we would do it right away (this was noted in the "power down" email to the user); we saw very few of those.


Question for fellow paranoid HNers: what do you use to decommission a server? Do you run shred(1) on all "interesting" files? Do you write over the block device itself with random data?


A quick pass of bcwipe/shred/etc. just to make them less sensitive during transport, and then physical destruction of the drives. I pretty much just store the drives until I have enough of them and then it's angle grinder fun, or, if I have someone paying for it, a commercial drive destruction service (and if they let me, I throw personal drives in at the same time, since it's usually no marginal additional cost).

I haven't owned SSDs long enough to need to destroy any, but some form of physical destruction is the only way.

I do use full disk encryption on drives and then repurpose machines by changing the full disk encryption keys, but those machines haven't left my control -- it's just for changing e.g. a photo drive to a movie drive.

I'd be a bit conflicted if I were buying FusionIO or high end SSDs, but I generally buy 1) fast/big consumer SSDs and 2) big spinning drives, and keep both in service until they're essentially valueless.

IMO, degaussing is probably a good early step in a high-volume environment, but it's not as good as physical destruction, and since it wrecks the drive anyway, you really want physical destruction regardless.

I dream of having an office with wet lab, machine shop, private SCIF/VTRs and a destruction facility with soundproofing.


Physical server or virtual server?

In the latter case (virtual servers, or containers) you have almost no real guarantee that shred(1) or friends will be effective, because you likely have no idea how the provider is implementing storage under the covers. Therefore you are entirely reliant on them; this opacity is a serious issue in the industry (IMO), and even worse is when you are told to expect one behaviour and encounter another!

It's not all that uncommon for physical servers to have their drives pulled and then subjected to a three-stage destruction process wherein they are first degaussed, then thrown into a "shearing" device which cuts them up into more manageable chunks before being "shredded" into fairly small chunks.

There are all kinds of standards for drive destruction, but I know some units can output nothing larger than 0.75" x 1" chunks, which, coupled with degaussing, is probably "very fatal".

Data destruction in this manner is required in many government applications (usually depends on the Impact Level) and most large corporations have fairly rigid policies governing how data (and the things used to store it) are destroyed once they are no longer useful.

Erasing data properly from NAND is seriously difficult, so even things like shred(1) are not guaranteed to work. Writing over the block device from within the OS also may not get it all because the firmware does interesting things (e.g. wear-leveling); however, NAND is thankfully still vulnerable to being bashed with a hammer or shredded.
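
If you're willing to trust the drive firmware at all, SSDs do expose an ATA Secure Erase command that is supposed to reset every NAND cell, including the wear-leveled blocks the OS can't reach. A sketch with hdparm (device name is a placeholder; the drive must not be in the 'frozen' state, and firmware implementations vary in quality, which is why the hammer stays popular):

    $ sudo hdparm -I /dev/sdX | grep -i frozen     # must report "not frozen"
    $ sudo hdparm --user-master u --security-set-pass Eins /dev/sdX
    $ sudo hdparm --user-master u --security-erase Eins /dev/sdX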


You don't need shred. A simple zeroing of the data works fine.

The badblocks program from e2fsprogs works very nicely (and is often installed already on bootable linux images):

    badblocks -s -t 0x00 -w -v /dev/foo
I would not shred only the "interesting" files; with journaling file systems (the default these days) you can't really tell for sure where your data ended up.


Earlier, using DigitalOcean, I also noticed that the .bash_history would contain a wget for a script on the website of a DigitalOcean employee which had all kinds of cleanup instructions.


Anyone else feel like hiring Moisey immediately?


I suspect he's waiting for the $1B+ exit so he can buy his own island complete with secret volcano lair, and that any attempts to hire him would be unsuccessful.


Well, you can't suspect that when you're just reading that convo without prior information!


This headline is misleading click-bait. The customer recovered only his/her own droplet, not someone else's, and the DigitalOcean explanation is perfectly reasonable. They should perhaps just improve their UX so it doesn't surprise users.

All in all, someone is venting frustrations.


This headline is a bit sensational given that the OP was incorrect in their assumption. It should probably be changed to: "DigitalOcean leaves your droplet around for 24 hours after you destroy it. If you care, destroy your own data."


destroy: put an end to the existence of (something) by damaging or attacking it: the room had been destroyed by fire.

--- Oxford Dictionary



Also putting the word out via Twitter:

https://twitter.com/nixgeek/status/450438984574193665

I think awareness is key with these types of issues; infrastructure providers are very opaque beasts and the underlying platform behaviour varies with each of them.

Knowing that you may need to erase sensitive data yourself before initiating the destroy, so that it is not captured in the snapshot, is probably half the battle.
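
As a rough sketch of that in-droplet scrub (paths are placeholders; the free-space fill catches copies that journaling or deleted files may have left behind):

    $ sudo find /home/private -type f -exec shred -fzu {} +
    $ sudo dd if=/dev/zero of=/zero.fill bs=1M     # runs until the disk is full, then errors out
    $ sudo rm /zero.fill

Then initiate the destroy; whatever ends up in the temporary snapshot is already zeroed. It's still no guarantee at the hypervisor layer, but it removes the snapshot as a meaningful copy of your data.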


The snapshot is on object storage. Just delete the snapshot after it's made, and everything will be okay.

For DO staff: also add an option (orthogonal to the secure-erase checkbox) to not make a snapshot.


It's actually impossible to delete the snapshot as a single operation; first you have to 'Restore' it. Oh the irony.

The entire motivation behind this submission was to be educational and informative: the fact is you either need to erase sensitive data from within your droplet before initiating the destroy, or take additional actions (including jumping through the 'restore before destroy' hoops) to eliminate the snapshot after the fact.

Note also that DigitalOcean said that in most cases the snapshot is just deleted via `rm` and is not overwritten with zeroes or random data, so you're still vulnerable to your instance contents potentially showing up down the road when a hard drive ends up in a dumpster somewhere.


> It's actually impossible to delete the snapshot as a single operation; first you have to 'Restore' it. Oh the irony.

Now that's a bug! DO staff, fix this!

> so you're still vulnerable to your instance contents potentially showing up down the road when a hard drive ends up in a dumpster somewhere

From my experience in the computer refurbishing industry, any US corporation with a legal department has data-disposal-related asset-liquidation procedures. If the company is sensible, this results in giant magnets or DBAN; more often, though, it just results in a concrete warehouse floor and a sledgehammer. Either way, client data isn't getting out of the building. (see the sibling comment at https://news.ycombinator.com/item?id=7499125 for more details.)

It's true that someone who hacked into DO's live snapshot servers could dump and examine the disks and possibly find your data[1]. But they could, equally easily, hack into DO's live compute servers and dump your keys from your VM's memory. Until we've got homomorphic machine-emulation software, instance memory, not snapshots, is the weakest link in your security.

[1] If the user-provided-snapshot servers are themselves a cluster of DO droplets--running, say, OpenStack Swift--then those instances would certainly get secure-erased. This is probably the way I'd set up the system myself, though I have no idea whether DO does.



