Code Spaces data and backups deleted by hackers (codespaces.com)
119 points by leejacobson on June 18, 2014 | 78 comments



Leaving aside the obviously deficient sysadmin work here: the timeline of the story doesn't add up. I can only hope this explanation is not accurate.

You find notes in your AWS control panel saying you should contact some Hotmail address. OK. So the first thing you do is reach out to that address and take the time to communicate intricate extortion details? Only after that you think maybe it's a good idea to start changing passwords, and right then the other party takes action and deletes all the things?

If that's what actually happened then I'm afraid something like this was bound to happen sooner or later.


I feel that a lot of people here are being unnecessarily harsh. It was all a bit of a silly mistake in hindsight, but Code Spaces was a very new service; I'm not even certain it had secured funding yet.

The timeline looks to me like email address shows up. Check email address. Email address contains extortion details. Try to change passwords. Hacker gets in again and again while deleting stuff. Cannot get rid of hacker. Do not have money. Within 12 hours everything is gone.


They have existed for a number of years[0]. Even so, being a new service is no excuse for handling OpSec so poorly.

[0]: https://twitter.com/CodeSpaces/status/265757401368637440


Is it possible to 'lock' your Amazon control panel to a specific set of IP addresses?

In the payment world it is a fairly common feature to use a block-by-default strategy for such crucial controls.
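
Looking at IAM, it seems you can get most of the way there for IAM users (though not for the root login itself) with an aws:SourceIp condition. A rough sketch with boto3; the user name and address range are made up:

    import json
    import boto3

    # Deny every action for this IAM user unless the request comes from the
    # office network. User name and CIDR range are hypothetical.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {"NotIpAddress": {"aws:SourceIp": "203.0.113.0/24"}},
        }],
    }
    boto3.client("iam").put_user_policy(
        UserName="ops-admin",
        PolicyName="office-ip-only",
        PolicyDocument=json.dumps(policy),
    )

It doesn't cover the root account, which is one more reason not to use the root login day to day.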

Hosting your project management and your sources with other companies has always felt strange to me. I can see how it works well for open source projects and git (after all, every repo is a complete copy), but hosting the master of a Subversion repo 'in the cloud', along with your project management, makes me uneasy.

If this or something like it happened to GitHub and all the GitHub issues were lost, that would be a fairly major disaster.

You never know how solid the infrastructure and solutions behind a nice-looking web front end really are until it goes down, and this one went down hard.

Condolences to the users of codespaces.com, they are the ones who lost most in all this.

From the Code Spaces backup page:

" Real Time Backups Backup

All your Source Code is backed up in real time, so that in the unlikely event of a system break down your data is safe.

Not only do we Back Up your data we also give you access to our backup up data via the Code Spaces Admin console so you can keep your own copies.

Whenever you make a change we make a backup."

So much for that I guess, if it is spinning and online it is not a backup.


> So much for that I guess, if it is spinning and online it is not a backup.

True, although with S3, you can make backups very difficult to remove:

http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMFADelet...

"If a bucket's versioning configuration is MFA Delete enabled, the bucket owner must include the x-amz-mfa request header in requests to permanently delete an object version or change the versioning state of the bucket. The header's value is the concatenation of your authentication device's serial number, a space, and the authentication code displayed on it. If you do not include this request header, the request fails."


Yes, so all it takes then is for Amazon to either be buggy or to fail for some reason. You can't really outsource responsibility for stuff like backups. They should be under your control and yours alone, and they should not live in the same DC or with the same provider where you store the rest of your data.

And when you've made your backup you take it offline so that no matter what you can get back in business.


Okay then. Where would these backups go? S3 is easy to back up to. Tarsnap is nice. Rsync.net works as well. But these are all online backup options.

If you're advocating offloading to physical media, you need someone who is going to religiously do it (execute code, pull to removable flash drive/SATA dock), and the more up to date you want your backups to be, the more tedious it becomes.

How much are you willing to spend to have AWS Export send you a physical SATA drive nightly?


> Where would these backups go?

Why not to a big disk sitting on a computer in your house?

The server doesn't push to it; the home computer pulls from the server (via a cron job or something). That way an attacker can't come in from somewhere else. It doesn't have to be perfect, and it shouldn't be your only backup. But having something like that is great for catastrophes like this.
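
Something like this as a nightly cron job on the home box, say (host, paths and layout are made up; rsync's --link-dest keeps daily hardlinked snapshots):

    #!/usr/bin/env python3
    # Pull-style backup: the home machine fetches from the server, so a
    # compromised server holds no credentials pointing back at these copies.
    import datetime
    import subprocess

    SOURCE = "backup@server.example.com:/srv/data/"   # hypothetical host/path
    DEST = "/mnt/bigdisk/backups"

    today = datetime.date.today().isoformat()
    subprocess.run(
        ["rsync", "-az", f"--link-dest={DEST}/latest",
         SOURCE, f"{DEST}/{today}/"],
        check=True,
    )
    # Point "latest" at the newest snapshot for the next run's --link-dest.
    subprocess.run(["ln", "-sfn", f"{DEST}/{today}", f"{DEST}/latest"], check=True)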


> Why not to a big disk sitting on a computer in your house?

I used to do this to save the $6 and backup locally. Unfortunately, it stopped being practical once the backups hit 3-4 GB since they'd interfere with my internet access in the morning.

Realistically, what I'd do is have each founder [or just any two technical folks, really] set up separate accounts at two vendors [e.g. Backupsy + Kimsufi] and have the VM pull the backup from the source. I'd keep a week's worth of backups this way.

No one person could destroy 100% of the backups. A single breach would not destroy 100% of the backups [although it might destroy the production environment depending on permissions].

The cost for such a solution to cover like 1TB of data? $40/month x2.

If you are a funded startup and you aren't able to spend $100/month on securing your backups I'm not sure what to tell you.
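
The pull itself is just rsync or similar on a schedule; the only other moving part is pruning. A sketch, assuming dated directories like /backups/2014-06-17 (paths and retention are made up):

    #!/usr/bin/env python3
    # Keep a week's worth of dated backup directories, delete anything older.
    import datetime
    import pathlib
    import shutil

    DEST = pathlib.Path("/backups")   # hypothetical layout: /backups/YYYY-MM-DD
    KEEP_DAYS = 7

    cutoff = datetime.date.today() - datetime.timedelta(days=KEEP_DAYS)
    for d in DEST.iterdir():
        try:
            day = datetime.date.fromisoformat(d.name)
        except ValueError:
            continue  # skip anything that isn't a dated directory
        if d.is_dir() and day < cutoff:
            shutil.rmtree(d)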


> I used to do this to save the $6 and backup locally. Unfortunately, it stopped being practical once the backups hit 3-4 GB since they'd interfere with my internet access in the morning.

3-4 GB of data each day?

I back up data from my personal laptop to a DigitalOcean droplet ($5/month) and then during the night back up the droplet (which also stores my mail and other stuff) to a disk connected to a Raspberry Pi in my home. An incremental backup (rdiff-backup) takes literally 5 minutes (3 minutes for /home and 2 minutes for mail). And the amount of data is slowly approaching 9 GB.


The problem with incremental backups is that once data at the source gets corrupted, you lose it in both places.

Full backups are less efficient, but significantly safer.


> 3-4 GB of data each day?

It is just a snapshot of the entire database.


and then your house burns down, floods, is robbed.


One of the basic premises behind a solid backup strategy is that if disaster hits, it does not hit simultaneously in all places. If that does happen, then I think you have different problems to contend with than trying to restore your backups.


I keep a backup of my home data at the office (encrypted volume). Plus another one at a family member's house halfway across the continent (disks are exchanged at family get-togethers)

It's not hard - you just need to do it.


At its simplest, you should be able to back up to another Amazon S3 setup that's completely isolated, belonging to a separate account.

Backups should be initiated from a production-account access key where "Create" access has been granted, but all the storage and maintenance handled by another AWS account with its own access key.

However, I'm not sure that's technically feasible at the moment without quite a lot of manual scripting.
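
Then again, something along these lines ought to work, though I haven't battle-tested it: the backup account owns the bucket and grants the production account write-only access. Account ID and bucket name are made up:

    import json
    import boto3

    # Run with the *backup* account's credentials; it owns the bucket.
    # The production account can put objects but cannot delete them or
    # touch the bucket policy.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111111111111:root"},  # production account
            "Action": ["s3:PutObject"],
            "Resource": "arn:aws:s3:::example-offsite-backups/*",
        }],
    }
    s3 = boto3.client("s3")
    s3.put_bucket_policy(Bucket="example-offsite-backups", Policy=json.dumps(policy))
    # Versioning means even an overwrite of an existing key stays recoverable.
    s3.put_bucket_versioning(
        Bucket="example-offsite-backups",
        VersioningConfiguration={"Status": "Enabled"},
    )

The production side then just does ordinary PutObject calls against that bucket with its own keys.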


Tarsnap makes that approach easy. Well, easy if you're comfortable with command line tools.


That's one of the reasons I never used anything like AWS. (Another is that for my kind of usage they're just too expensive)


>>You can't really outsource responsibility for stuff like backups. They should be under your control and yours alone

Incorrect.

The best practice is to have multiple backups at different locations, some online, some offline. The offline ones should not all be under your control. For example, companies in certain sectors store copies of their source code in bank vaults, updated once per year. The reason is clear: if something happens to your company (e.g. a plane crashes into your high-rise...), you can recover from an offsite copy of your business data. A company that used to have its headquarters in the Twin Towers recovered from Sept 11 in under 24 hours because they didn't say "all of our backups should be under our control."

Just because something is online does not mean it is not a backup. It can be a backup, it just should not be your only one.


Amazon's web services and their control panels support multi-factor authentication methods and customisable permissions models. For details see: http://aws.amazon.com/iam/


There is no reason people shouldn't be using multi-factor auth with their AWS master account. Please do so!

(ops guy)


That's extraordinary... but it does drive home the distinction between "backup" and "online second copy of your data". Proper backups should be offline when not in use.


> Proper backups should be offline when not in use.

I'm not sure I'd go that far, but I'd say that backups on the same service as the data aren't a good idea. It would be smart to use a different cloud provider for the backups and (depending on the size of the operation) even a disk sitting in someone's house somewhere, just in case.


That's why I have a Mini on the bookshelf that SSH's into an EC2 instance (IP restricted) during a 1 hour window each night when the router is set to open and copies that day's PGP'd backups over. Then it goes back to sleep to wake on schedule the following night. Not the same as a tape in a vault, but it's my last resort.


All I can say is this:

If you can delete it with a single control panel, it doesn't count as an offsite backup. Fire the devops.


DevOps means different things to different people.

Most people use the term to mean 'a coder who can configure Apache'.

this whole incident reeks of a lack of OpSec and general Ops knowledge.

Lack of offsite backups is one thing, but if you're using cloud infrastructure you never place all your eggs in one basket.

Your instances have to be disposable and reproducible; at the first hint of a break-in my default action is to torch and rebuild.


> Your instances have to be disposable and reproducible; at the first hint of a break-in my default action is to torch and rebuild.

Or rather: shut down, preserve, start replacements. IMHO your automation should be able to deal with the old preserved instances being left alone.


DevOps here.

There are some things you aren't going to expect (compromise of your AWS console). This could have been solved by having MFA enabled, as well as having the app push backups in realtime, versioned with delete protection, to S3 buckets under the control of another account (write access, but no delete access).

Show of hands: how many people here are doing it this way?


Seriously, if your root account and all full-admin accounts aren't using MFA, you're just asking for it. Also, if you're not using purpose-specific access keys, you're just asking for it. If the first thing you do isn't calling AWS support, wow...
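
Beyond turning MFA on in the console, you can make the API itself refuse to work without it. A sketch of the usual deny-without-MFA policy; the group name is hypothetical, and the root account's MFA still has to be enabled separately in the console:

    import json
    import boto3

    # Deny every API action for members of this group unless the request
    # was made with MFA. Group name is hypothetical.
    deny_without_mfa = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {"BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}},
        }],
    }
    boto3.client("iam").put_group_policy(
        GroupName="admins",
        PolicyName="deny-without-mfa",
        PolicyDocument=json.dumps(deny_without_mfa),
    )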


Couldn't agree more. Everything under a single platform, no MFA, no (real) offsite backup, and on top of that they spent 12 hours corresponding with the attacker instead of immediately calling Amazon to ask for help shutting everything down while they still had time?

I'm sorry, but this is a succession of things not to do in terms of system operations. Probably the team never managed mission-critical platforms before; hopefully they have now learned the lesson.


How many companies have not yet learned that lesson? There are probably a lot of Code Spaces-like setups on AWS. My reasoning is that if you make it so a developer can set up a virtual datacenter without having the background of actually running such an installation, then you have to assume that it is probably quite fragile.

Software people tend to make all kinds of assumptions about hardware that do not work out in practice.


That's the problem with the recent "DevOps" trend. Lots of people coming from a "Dev" background, but no real "Ops".

And now that spinning up a couple of servers on AWS and creating snapshots on-the-fly are so easy, it gives the false impression that you don't need much to act as a sysadmin.


Not to be too flippant, but the company's closing shop. So, yeah, the DevOps are fired, along with everybody else.

As for the rest of us: AWS is a great one-stop shop. Unfortunately, using just AWS puts you in the "all the eggs in one basket" scenario that we were warned against as children.


Two-factor authentication is a second basket.

Sending a copy to Glacier is a second basket.

Does Amazon not have 30-day undelete for bulk storage? Seems crazy.
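
There's no blanket 30-day undelete as far as I know, but versioning plus a lifecycle rule gets close, and the noncurrent versions can sit in Glacier cheaply. A sketch, assuming versioning is already on (bucket name and day counts are made up):

    import boto3

    s3 = boto3.client("s3")
    # With versioning enabled, a delete or overwrite only hides the object
    # behind a delete marker / newer version; this rule archives the old
    # versions to Glacier and expires them after 30 days.
    s3.put_bucket_lifecycle_configuration(
        Bucket="example-offsite-backups",
        LifecycleConfiguration={
            "Rules": [{
                "ID": "archive-old-versions",
                "Filter": {"Prefix": ""},
                "Status": "Enabled",
                "NoncurrentVersionTransitions": [
                    {"NoncurrentDays": 1, "StorageClass": "GLACIER"}
                ],
                "NoncurrentVersionExpiration": {"NoncurrentDays": 30},
            }]
        },
    )

It only helps if the attacker can't also delete specific object versions, which is where MFA Delete comes in.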


> Two-factor authentication is a second basket.

Not in my books. Recommended to be sure, but this has far too many single points of failure. To name a few:

- Software corrupts data

- Hardware corrupts data

- Social engineering bypasses 2FA


> Two-factor authentication is a second basket.

Two-factor authentication is a better carrying strap, but it's still on a single basket.


They ran a code hosting service and they had no offsite backups?

Wow, just wow.


From their homepage "We offer Rock Solid, Secure .. hosting". Perhaps not so rock solid...


I am in awe, myself.


Saw this earlier today. That's rough. Obviously they had some problems with their architecture (backups shouldn’t be able to be deleted like that), but it's still pretty messed up. I hope they catch the guy. I won't help Code Spaces, but whoever it was deserves to be caught.


s/I/It/


It hope they catch the guy?


    I won't help Code Spaces
s/I/It/

    It won't help Code Spaces


From the Amazon RDS documentation:

When the backup retention changes to a non-zero value, the first backup occurs immediately. Changing the backup retention period to 0 turns off automatic backups for the DB instance, and deletes all existing automated backups for the instance.

http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overvi...
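
The retention period itself is just an instance setting; roughly like this (the instance name is made up, and per the quoted docs, dropping it to 0 deletes the automated backups):

    import boto3

    rds = boto3.client("rds")
    rds.modify_db_instance(
        DBInstanceIdentifier="prod-db",   # hypothetical instance
        BackupRetentionPeriod=7,          # days; 0 would delete the automated backups
        ApplyImmediately=True,
    )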


> and deletes all existing automated backups for the instance.

That could do with some sort of grace period, both against shoot-yourself-in-the-foot scenarios and against bad guys.


Unfortunately, one of the major backup-related pain points with RDS currently is that you have no way to interact with the automated backups aside from restoring an instance to a point in time. They are also deleted when the instance is deleted. Manual snapshots aren't. Manual snapshots can be used to launch a new instance, or they can be moved to a different region. You MUST do your own backups of the databases if you want to get them "offsite" or off-account. Like I said, a PITA right now, and from what I hear a common request on RDS is being able to get at the automated backups and snapshots so you can move them offsite.
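
For what it's worth, the manual-snapshot route looks roughly like this (identifiers and regions are made up):

    import boto3

    # Manual snapshots survive instance deletion and can be copied to another
    # region, unlike the automated backups.
    src = boto3.client("rds", region_name="us-east-1")
    src.create_db_snapshot(
        DBInstanceIdentifier="prod-db",
        DBSnapshotIdentifier="prod-db-manual-2014-06-18",
    )
    src.get_waiter("db_snapshot_available").wait(
        DBSnapshotIdentifier="prod-db-manual-2014-06-18"
    )

    dst = boto3.client("rds", region_name="eu-west-1")
    dst.copy_db_snapshot(
        SourceDBSnapshotIdentifier=(
            "arn:aws:rds:us-east-1:111111111111:snapshot:prod-db-manual-2014-06-18"
        ),
        TargetDBSnapshotIdentifier="prod-db-manual-2014-06-18-copy",
    )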


While I feel bad for their customers, I have no sympathy for the creators. Stupid and mediocre management of their code... not to mention... how does a code hosting service not understand the difference between a backup and an off-site backup? How could they delete an account and not notice (or bother to check) the other created accounts? No notification on account access creation? So many mediocre mistakes...


Seems to be down so cache: http://webcache.googleusercontent.com/search?q=cache:qpjW4k2...

"We are experiencing massive demand on our support capacity, we are going to get to everyone it will just take time. Code Spaces : Is Down!

Dear Customers,

On Tuesday the 17th of June 2014 we received a well orchestrated DDOS against our servers, this happens quite often and we normally overcome them in a way that is transparent to the Code Spaces community. On this occasion however the DDOS was just the start.

An unauthorised person, who at this point is still unknown (all we can say is that we have no reason to think it's anyone who is or was employed with Code Spaces), had gained access to our Amazon EC2 control panel and had left a number of messages for us to contact them using a hotmail address.

Reaching out to the address started a chain of events that revolved around the person trying to extort a large fee in order to resolve the DDOS.

Upon realisation that somebody had access to our control panel we started to investigate how access had been gained and what access that person had to the data in our systems, it became clear that so far no machine access had been achieved due to the intruder not having our Private Keys.

At this point we took action to take control back of our panel by changing passwords, however the intruder had prepared for this and had already created a number of backup logins to the panel and upon seeing us make the attempted recovery of the account he proceeded to randomly delete artifacts from the panel. We finally managed to get our panel access back but not before he had removed all EBS snapshots, S3 buckets, all AMI's, some EBS instances and several machine instances.

In summary, most of our data, backups, machine configurations and offsite backups were either partially or completely deleted.

This took place over a 12 hour period which I have condensed into this very brief explanation, which I will elaborate on more once we have managed our customers needs.

Data Status

All svn repositories that had the following url structure have been deleted from our live EBS's and all backups and snapshots have been deleted: https://[ACCOUNT].codesapces.com/svn/[REPONAME]

All Svn repositories using the following url format are still available for export but all backups and snapshots have been deleted: https://svn.codespaces.com/[ACCOUNT]/[REPONAME]

All Git repositories are available for export but all backups and snapshots have been deleted

All Code Spaces machines have been deleted except some old svn nodes and one git node.

All EBS volumes containing database files have been deleted as have all snapshots and backups.

Code Spaces Status

Code Spaces will not be able to operate beyond this point; the cost of resolving this issue to date and the expected cost of refunding customers who have been left without the service they paid for will put Code Spaces in an irreversible position, both financially and in terms of ongoing credibility.

As such at this point in time we have no alternative but to cease trading and concentrate on supporting our affected customers in exporting any remaining data they have left with us.

All that we can say at this point is how sorry we are, to both our customers and to the people who make a living at Code Spaces, for the chain of events that led us here.

In order to get any remaining data exported please email us at support[at]codespaces.com with your account url and we will endeavour to process the request as soon as possible.

On behalf of everyone at Code Spaces, please accept our sincere apologies for the inconvenience this has caused to you, and we ask for your understanding during this time! We hope that one day we will be able to reinstate the service and credibility that Code Spaces once had!"


Thanks for the cache, just getting to this.

Edited because someone didn't like my original tone. Was a bit rushed to be honest.

Few things seem off about this:

- Offsite backups were also deleted. I don't think they had offsite backups, or at least not backups you could legitimately say were "off site."

- EC2 has two-factor auth; why you wouldn't use this for your business I don't know. [1]

- Corresponding with an extortionist is a really dumb move. The time would have been better spent locking things down: contacting Amazon directly to get an account lock and getting your ducks in a row.

[1] http://aws.amazon.com/iam/details/mfa/


There is something about this whole story that feels weird. I can't name it, but it is as if this isn't the whole story.


Hindsight is a bitch. Of course using 2-factor auth was the way to go, of course offsite backups have to really be "off site" (and not available to anyone with access to AWS control panel to delete), etc, etc, etc.

Now there are many "of courses" for the owners (that external people already knew, but it doesn't help their situation). It seems that for them these things weren't so obvious as they are now... the unknown unknowns.

Sad story, but I'd call it lessons learned for them and no news for the rest of the Internet.


Probably because of the part where they say this isn't the whole story: "This took place over a 12 hour period which I have condensed into this very brief explanation, which I will elaborate on more once we have managed our customers needs."


That could be it. But there is a certain dissonance about this whole thing; I try to imagine myself in the same situation and the whole thing weirds me out. How could this mysterious hacker have known they had no other backups? Have they talked to LE at all at this point? Why not string the guy along, buy time, and immediately alert Amazon to lock the account completely?

So many questions. Anyway, they'll be updating this sooner or later, I just can't help but feel a bit weirded out by some of the things in there (and things that should be in there that are not).

This is most likely just my professional paranoia acting up. And of course it is easy enough to be a back-seat driver here; I'd hate to be in their shoes, no matter how they got there.


> How could this mysterious hacker have known they had no other backups?

I don't think we can infer that he knew that. It seems more likely to me that he expected the outcome of deleting all their Amazon stuff he could reach would be that they would be down for a day or two as they reconfigured everything and then restored from offsite backups, costing them overtime or comp time for their IT guys, a few disgruntled customers who leave, a few more disgruntled customers they have to placate with freebies, and making them more likely to pay next time an extortionist comes around.

I would not at all be surprised if the extortionist is very surprised that they did not have other backups and his actions have probably killed the company.

He's probably also somewhat worried, as this probably knocks the monetary damages up enough to (1) make it much more likely that this will get some serious law enforcement attention, and (2) if he is ever caught and convicted greatly increase his sentence and/or fine by moving the severity level of the offense way up.

For instance, here are some examples for 18 USC 1030(a)(5), which covers causing damage or loss on a computer via unauthorized access, assuming no other factors that increase the sentence:

       LOSS     MONTHS    FINE

       $10k        0-6    $1k - $10k
       $30k       6-12    $2k - $20k
       $70k      10-16    $3k - $30k
      $120k      15-21    $4k - $40k
      $200k      21-27    $5k - $50k
      $400k      27-33    $6k - $60k
    $1,000k      33-41    $7.5k - $75k
    $2,500k      41-51    $7.5k - $75k
    $7,000k      51-63    $10k - $100k
   $20,000k      63-78    $12.5k - $125k
   $50,000k      78-97    $12.5k - $125k
  $100,000k     97-121    $15k - $150k
  $200,000k    121-151    $17.5k - $175k
  $400,000k    151-188    $17.5k - $175k
 above that    188-235    $20k - $200k
Trying to cost someone a few thousand dollars' worth of damage and instead killing their $10 million company, for instance, changes it from 6 months tops to 5 years minimum. Ouch.


> I would not at all be surprised if the extortionist is very surprised that they did not have other backups and his actions have probably killed the company.

> He's probably also somewhat worried, as this probably knocks the monetary damages up enough to (1) make it much more likely that this will get some serious law enforcement attention, and (2) if he is ever caught and convicted greatly increase his sentence and/or fine by moving the severity level of the offense way up.

That's plausible. It makes some sense that if you destroy something, you should be responsible for it. At the same time, even for a hacker the assumption that there would be backups would be a fairly logical one, though I'd hate to be in the position of fielding that defense.


If I throw a rock into your garage, and knock over your precariously balanced anvil onto a Lamborghini, that's 100% on me.

http://en.wikipedia.org/wiki/Eggshell_skull


Reminds me of LiveJournal's dead-hard-drive-induced closure. When will startups learn how it's done?


When did that happen? I was under the impression that LiveJournal was still around, though mostly popular in Russia. Any links to that?


Definitely still around. My wife uses it daily and is part of several communities.


Apparently they won't. I can't tell you the number of times I hear "everyone who's important in the startup world says 'shut up and ship' and we'll think about security & BC/DR later", where "later" usually ends up being "after a breach". If they survive.


Why is it possible to destroy an entire enterprise by compromising an Amazon account? Where the fuck is their 2FA? What about a cooling off period before committing changes like deleting all of your storage? Amazon's infrastructure seems to be built without essential safeguards.


Amazon has extremely tight security, including 2fa, fine-grained IAM permissions, instance security groups, VPC, and more.

The fact that Code Spaces didn't bother to use them is their own problem, not a failing on Amazon's side.

Additionally, storing all of your backups with the same service as your production environment was outright moronic.


> Amazon's infrastructure seems to be built without essential safeguards

Their infrastructure, not Amazon's.


Amazon AWS has 2FA, clearly not used (unless it actually was an inside job).

As for the cooling off period, I'm not sure. Perhaps you can get the contents back, but it may be cost prohibitive.


Amazon does have a remarkably fine-grained control mechanism - but you need to use it.

For example, I never publish my Route53 (DNS hosting service) keys, but even if they were leaked, the account is set up on the Amazon side to work only from a single source IP.

You can restrict permissions significantly, so again in my case I've got a user configured who can only add/delete DNS records - but cannot create a new zone, or delete other zones. Not ideal, since "remove all records" is almost the same as "delete zone" in practice, but I'm not worried that unrelated zones on that account will be broken if I do lose my keys.
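
For the curious, that kind of setup looks roughly like this (the zone ID, user name, and IP are placeholders, not my real ones):

    import json
    import boto3

    # An IAM user that can change records in exactly one hosted zone, only
    # from one source IP, and can't create or delete zones. All names made up.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": [
                "route53:ChangeResourceRecordSets",
                "route53:ListResourceRecordSets",
            ],
            "Resource": "arn:aws:route53:::hostedzone/Z1EXAMPLE",
            "Condition": {"IpAddress": {"aws:SourceIp": "198.51.100.10/32"}},
        }],
    }
    boto3.client("iam").put_user_policy(
        UserName="dns-updater",
        PolicyName="route53-single-zone",
        PolicyDocument=json.dumps(policy),
    )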


Another point for distributed version control systems. If the server hosting my repository exploded I'd have the entire repo on my computer.

Why hosted subversion is a thing, I don't know. It's a horrendous experience once you introduce any latency between the client and server.


How the hell would you leave the fate of your entire enterprise's data in the hands of Amazon? Offsite/offline backups are mandatory, especially when you deal with this kind of data.


Real companies don't. I know a major video provider in Cambridge that replicates everything to S3 and Google Cloud storage for this very reason. Now, neither has ever gone down for them, but if you've got to stay up, you have a resilient architecture.


Does anyone know of noteworthy projects that were hosted there?


Without naming names I know of at least one company who were using it for their main svn repo. I am hoping they have come out of this relatively unscathed. I am not with them anymore and last I heard they were switching to git so perhaps they have managed to dodge this bullet.


Not really, archive.org shows Oracle, Macy's, and some other random corps using it for private hosting as of a few months ago.


A couple of projects I've contracted on were using Code Spaces for their private SVN repos. I used git-svn, so that's a mercy...


Cloud kicks ass. Totally secure along with the high-level fellowship of paladins^Wadmins.


When will these cloud people learn that storing a so-called backup in the same place, or under the same domain or root account, as the original data makes it a copy, not an actual backup?


offsite backups != offline backups


Either that or the "hacker" pressed the "Close Account" button in the EC2 panel.


Am I the only one feeling cheated when people post referral / tracking links?


The ?hackernews in the URL is to work around HN's duplicate URL detection. It could have been ?lulz and had the same effect.



