I have to say - if they were using a managed relational database service, like Amazon's RDS Postgres, this likely would have never happened. RDS fully automates nightly database snapshots, and ships archive logs to S3 every 5 minutes, which gives you the ability to restore your database to any point in time within the last 35 days, down to the second.
Also, RDS gives you a synchronously replicated standby database, and automates failover, including updating the DNS CNAME that the clients connect to during a failover (so it is seamless to the clients, other than requiring a reconnect), and ensuring that you don't lose a single transaction during a failover (the magic of synchronous replication over a low latency link between datacenters).
For a company like GitLab, which is public about wanting to exit the cloud, I feel like they could have really benefited from a fully managed relational database service. This entire tragic situation need never have happened if they had been willing to acknowledge the obvious (managing relational databases is hard) and let someone with better operational automation, like AWS, do it for them.
I personally experienced a near-catastrophic situation 3 years ago, where 13 out of 15 days' worth of nightly RDS MySQL snapshots were corrupt and would not restore properly.
The root cause was a silent EBS data corruption bug (RDS is EBS-based) that Amazon support eventually admitted had slipped through and affected a "small" number of customers. Unlucky us.
We were given exceptional support including rare access to AWS engineers working on the issue, but at the end of the day, there was no other solution than to attempt restoring each nightly snapshot one after the other, until we hopefully found one that was free of table corruption. The lack of flexibility to do any "creative" problem-solving operations within RDS certainly bogged us down.
With a multi-hundred gigabyte database, the process was nerve-wracking as each restore attempt took hours to perform, and each failure meant saying goodbye to another day's worth of user data, with the looming armageddon scenario that eventually we would reach the end of our snapshots without having found a good one.
Finally, after a couple of days of complete downtime, the second to last snapshot worked (IIRC) and we went back online with almost two weeks of data loss, on a mostly user-generated content site.
We got a shitload of AWS credits for our trouble, but the company obviously went through a very near-death experience, and to this day I still don't 100% trust cloud backups unless we also have a local copy created regularly.
> We got a shitload of AWS credits for our trouble, but the company obviously went through a very near-death experience, and to this day I don't 100% trust cloud backups unless we also have a local copy created regularly.
Cloud backups, and more generally all backups, should be treated like nuclear proliferation treaties: trust, but verify!
If you periodically restore your backups, you'll catch this kind of crap when it's not an issue, rather than when shit has already hit the fan.
Years ago I had my side project server hacked twice. I've been security and backup paranoid ever since.
At my current startup, we have triple backup redundancy for a 500GB pg database:
1/ A Postgres streaming replication hot standby server (which at this moment doesn't serve reads, but might in the future)
2/ WAL-level streaming backups to AWS S3 using WAL-E, which we automatically restore every week to our staging server
3/ Nightly logical pg_dump backups.
9 months ago we only had option 3 and were hit with a database corruption problem. Restoring the logical backup took hours and caused painful downtime as well as the loss of almost a day of user generated content. That's why we added options 1 and 2.
I can't recommend WAL-E enough as an additional backup strategy. Restoring from a WAL (binary) backup is ~10x faster in our use case (YMMV), and the most data you can lose is about 1 minute. As an additional bonus you get the ability to roll back to any point in time. This has helped us recover user-deleted data.
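In case it's useful to anyone, here's a minimal sketch of what a setup like ours looks like. The paths, the envdir location, and the archive_timeout are assumptions you'd adjust to your own install:

    # In postgresql.conf, continuous archiving ships each WAL segment to S3
    # (credentials and WALE_S3_PREFIX are assumed to live in /etc/wal-e.d/env):
    #   archive_mode    = on
    #   archive_command = 'envdir /etc/wal-e.d/env wal-e wal-push %p'
    #   archive_timeout = 60   # bounds potential data loss to roughly a minute

    # Periodic base backup pushed to S3:
    envdir /etc/wal-e.d/env wal-e backup-push /var/lib/postgresql/9.6/main

    # Weekly staging restore: fetch the latest base backup, then let recovery
    # replay WAL from S3 via restore_command in recovery.conf:
    envdir /etc/wal-e.d/env wal-e backup-fetch /var/lib/postgresql/9.6/main LATEST
    #   restore_command = 'envdir /etc/wal-e.d/env wal-e wal-fetch "%f" "%p"'

The archive_timeout is what gives you the "at most ~1 minute of loss" property; the weekly backup-fetch onto staging is what keeps the restore path honest.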
We have a separate Slack #backups channel where our scripts send a message for every successful backup, along with the backup size (in MB) and duration. This helps everyone check that backups ran, and that size and duration are increasing in an expected way.
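The notification part is just a webhook call at the end of the backup script; roughly something like this sketch (the dump path and $SLACK_WEBHOOK_URL are placeholders):

    START=$(date +%s)
    pg_dump -Fc mydb > /backups/mydb.dump || exit 1          # bail out loudly on failure
    DURATION=$(( $(date +%s) - START ))
    SIZE_MB=$(( $(stat -c%s /backups/mydb.dump) / 1024 / 1024 ))

    # Post size and duration to the #backups channel via an incoming webhook
    curl -s -X POST -H 'Content-type: application/json' \
      --data "{\"text\": \"mydb backup OK: ${SIZE_MB} MB in ${DURATION}s\"}" \
      "$SLACK_WEBHOOK_URL"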
Because we restore our staging on a weekly basis, we have a fully tested restore script, so when a real restore is needed, we have a couple of people who can handle the task with confidence.
I feel like this is about as "safe" as we should be.
Even before that there are steps you can take. For example if you take a Postgres backup with pg_dump, you can run pg_restore on it to verify it.
If a database isn't specified, pg_restore will output the SQL commands to restore the database, and the exit code will be zero (success) if it makes it through the entire backup. That lets you know that the original dump succeeded and there was no disk error for whatever was written. Save the file to something like S3, as well as its sha256. If the hash matches after you retrieve it, you can be pretty damn sure that it's a valid backup!
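The whole check is only a few lines of shell; a sketch along these lines (bucket and file names are just for illustration):

    pg_dump -Fc mydb > mydb.dump || exit 1

    # No target database given, so pg_restore just emits SQL; a zero exit code
    # means it could read the entire archive without error.
    pg_restore mydb.dump > /dev/null || { echo "backup unreadable" >&2; exit 1; }

    sha256sum mydb.dump > mydb.dump.sha256
    aws s3 cp mydb.dump        s3://my-backup-bucket/mydb.dump
    aws s3 cp mydb.dump.sha256 s3://my-backup-bucket/mydb.dump.sha256
    # Later, after downloading: sha256sum -c mydb.dump.sha256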
Otherwise you get the blind scripts like GitLab had, where pg_dump fails. No exit code checking. No verification. No bueno!
Are there any guidelines on how often you should be doing restore tests?
It probably depends on the criticality of the data, but if you test, say, every 2 weeks, you can still fall into the OP's case, right?
At what size/criticality should you have a daily restore test? Maybe even a rolling restore test, so you check today's backup, but then check it again every month or something?
Ideally it should be immediately after a logical backup.
For physical backups (e.g., WAL archiving), a combination of read replicas that are actively queried against, rebuilt from base backups on a regular schedule, plus staging master restores on a less frequent yet still regular schedule, will give you a high level of confidence.
Rechecking old backups isn't necessary if you save the hashes of the backups and can confirm they still match.
Not immediately (imo); you should push the backup to wherever it's being stored first. Then your db test script is the same as your db restore script: both start by downloading the most recent backup. The things you'll catch here are, e.g., the process that uploads the dump to S3 timing out after uploading for an hour, silently truncating the upload, and exiting with 0 instead of failing!
Wow, I'm sorry you experienced that. This points to the importance of regularly testing your backups. I hope AWS will offer an automated testing capability at some point in the future.
In the meantime, I hope you've developed automation to test your backups regularly. You could just launch a new RDS instance from the latest nightly snapshot, and run a few test transactions against it.
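Something along these lines, for example (instance and snapshot names are made up, and the smoke-test query obviously depends on your schema):

    # Find the most recent snapshot of the production instance
    SNAPSHOT=$(aws rds describe-db-snapshots --db-instance-identifier prod-db \
      --query 'reverse(sort_by(DBSnapshots,&SnapshotCreateTime))[0].DBSnapshotIdentifier' \
      --output text)

    # Spin up a throwaway instance from it and wait until it's reachable
    aws rds restore-db-instance-from-db-snapshot \
      --db-instance-identifier backup-test \
      --db-snapshot-identifier "$SNAPSHOT"
    aws rds wait db-instance-available --db-instance-identifier backup-test

    # Run a smoke-test query against the restored copy, then clean up
    ENDPOINT=$(aws rds describe-db-instances --db-instance-identifier backup-test \
      --query 'DBInstances[0].Endpoint.Address' --output text)
    psql -h "$ENDPOINT" -U app -d mydb -c 'SELECT count(*) FROM users;'
    aws rds delete-db-instance --db-instance-identifier backup-test --skip-final-snapshot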
This is certainly true of all backups to an extent, though, not just the cloud. Back in the day of backing up to external tape storage, it was important to test restores in case heads weren't calibrated, or were calibrated differently between different tape machines, etc.
I am curious: did you manage to automate a restore smoke test after going through this?
Snapshots are not backups, although many people use them as backups and believe they are good backups. Snapshots are snapshots. Only backups are backups.
A snapshot could be a backup depending on what you're calling a snapshot, but yeah, in general, to be a backup things need to have these features:
1. stored on separate infrastructure so that obliteration of the primary infrastructure (AWS account locked out for non-payment, password gets stolen and everything gets deleted, datacenter gets eaten by a sinkhole, etc.) doesn't destroy the data.
2. offline, read-only. This is where most people get confused.
Backups are unequivocally NOT a live mirror like RAID 1, a slightly-delayed replication setup like most databases provide, or a double-write system. These aren't backups because they don't let you recover from human errors, which include obvious things like dropping the wrong table, but also less obvious things, like a subtle bug that corrupts/damages some records and may take days or weeks to notice. Your standbys/mirrors are going to copy both the obvious and the non-obvious damage before you have a chance to stop them.
This is one of the most important things to remember. Redundancy is not backup. Redundancy is redundancy and it primarily protects against hardware and network failures. It's not a backup because it doesn't protect against human or software error.
3. regularly verified by real-world restoration cases; backups can't be trusted until they're confirmed, at least on a recurring, periodic basis. Automated alarms and monitoring should be used to validate that the backup file is present and that it is within a reasonable size variance between human-supervised verifications (a minimal sketch of such a check follows this list). Automatic logical checksums like those suggested by some other users in this thread (e.g., run pg_restore on a pg_dump to make sure that the file can be read through) are great too and should be used whenever available.
4. complete, consistent, and self-contained archive up to the timestamp of the backup. Differential backups count as long as the full chain needed for a restoration is present.
This excludes COW filesystem snapshots, etc., because they're generally dependent on many internal objects dispersed throughout the filesystem; if your FS gets corrupted, it's very likely that some of the data referenced by your snapshots will be corrupted too (snapshots are only possible because COW semantics mean that the data does not have to be copied, just flagged as in use in multiple locations). If you can export the COW FS snapshot as a whole, self-contained unit that can live separately and produce a full and valid restoration of the filesystem, then that exported thing may be a backup, but the internal filesystem-local snapshot isn't (see also point 1).
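For the presence/size check in point 3, a daily cron job as dumb as this sketch is enough (the bucket, key naming, state file, and 20% threshold are all assumptions):

    TODAY=$(date +%F)
    # Alert if today's dump is missing from S3
    SIZE=$(aws s3api head-object --bucket my-backup-bucket \
      --key "nightly/mydb-${TODAY}.dump" \
      --query ContentLength --output text 2>/dev/null) \
      || { echo "ALERT: nightly backup missing" >&2; exit 1; }

    # Alert if the backup shrank or grew by more than ~20% vs. the previous run
    LAST=$(cat /var/lib/backup-check/last-size 2>/dev/null || echo "$SIZE")
    if [ "$SIZE" -lt $(( LAST * 80 / 100 )) ] || [ "$SIZE" -gt $(( LAST * 120 / 100 )) ]; then
      echo "ALERT: backup size ${SIZE} deviates >20% from ${LAST}" >&2
    fi
    echo "$SIZE" > /var/lib/backup-check/last-size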
Backups protect against bugs and operator errors and belong on a separate storage stack to avoid all correlation, ideally on a separate system (software bugs) with different hardware (firmware and hardware bugs), in a different location.
The purpose of a backup is to avoid data loss in scenarios included in your risk analysis. For example, your storage system could corrupt data, or an engineer could forget a WHERE clause in a delete, or a large falling object hits your data center.
Snapshots will help you against human error, so they are one kind of backup (and often very useful), but if you do not at least replicate those snapshots somewhere else, you are still vulnerable to data corruption bugs or hardware failures in the original system. Design your backup strategy to meet your requirements for risk mitigation.
I'd also add not just different location but different account.
If your cloud account, datacenter/colo, or office is terminated, hacked, burned down, or swallowed by a sinkhole... you don't want your backups going with it.
Cloud especially: even if you're on AWS and have your backups in Glacier+S3 with replication to 7 datacenters on 3 continents... if your account goes away, so do your backups (or at least your access to them).
RDS, or any hosted database solution, is not some kind of silver bullet that solves all problems. While it's true it takes care of backups automatically, it does also restrict you in terms of what you can do.
For example, you can't load custom extensions into RDS. Also, to the best of my knowledge RDS does not support a hot standby replica you can use for read-only queries, and replication between RDS and non-RDS is also not supported. This means you can't balance load between multiple hosts, unless you're OK with running a multi-master setup (and I'm not sure how well that would play out on RDS).
Most important of all, we ship PostgreSQL as part of our Omnibus package. As a result, the best way of testing this over time is to use it ourselves, something we strive to do with everything we ship. This means we need to actually run our own things. Using a hosted database would mean we wouldn't be using a part of what we ship, and thus wouldn't be able to test it over time.
But to be fair, if you enable both you're paying for 3 servers total instead of 2, since you can't read from the HA standby. (I'd imagine there are reasons not to do that anyway, but you don't even have the option of making that compromise to save on the cost.)
With RDS Aurora, any of your read replicas can be promoted to read/write master in the event of a failure of your primary master. This happens in 10-15 seconds, which is very fast.
So, you can get the benefit of up to 15 read replicas, and not have to pay for an extra standby server that is sitting idle.
Well, you can use the Read Replica only, and if you have an outage on the primary, promote the Read Replica to recover... (will take a few minutes though.)
You're welcome to read it as "frustratingly". I didn't think hard about the word choice.
I suppose "disturbingly" is meant to imply "It was frustrating to me, and I think it would be frustrating to anyone in the situation of seriously using Postgres on RDS[1], and perhaps it ought to even decrease their opinion of the RDS team's ability to prioritize and ship features that are production-ready".
Does that make sense?
[1]: There was no workaround for getting a read replica. RDS doesn't allow you to run replication commands. So your options were "Don't use Postgres on RDS, or don't run queries against up-to-date copies of databases." There was never any announcement of when read replicas were coming. It was arguably irresponsible of them to release Postgres on RDS as a product and then wait a year to support read replicas, which is a core feature that other DB backends had already.
Without insight into what the AWS RDS team workload and priorities are, I think it's unfair to use a term like disturbingly. Sure, as a user, we want features to be rolled out as quickly as possible. From what I've seen, Postgres RDS support has been slow, but consistently getting better: nothing to warrant suggesting Amazon isn't serious about continuing to improve their Postgres offering. That would be disturbing. Or data loss failures. Slower-than-I'd-like roll-out of new features? Frustrating.
By all means, RDS isn't perfect. It doesn't suit my current needs. But I understand that getting these things to work in a managed way that suits the needs of most customers is not an easy task. I'll remain frustrated in some small way until RDS does suit my needs. I hope they continue to add features to give customers more flexibility. And from what I've seen, they likely will.
>> RDS does not support a hot standby replica you can use for read-only queries
This is not true anymore.
I set up two read-only RDS replicas, one in a different AWS region, and another in the same region, for read-only queries, just by clicking in AWS console.
You can use the failover standby replica for reads with Aurora, at least. And you can manually set up replication with non-RDS instances via MySQL itself, just not via the AWS APIs.
RDS also comes with its own set of tradeoffs. There is no free lunch, and the cloud is just another word for someone else's server. There are reasons Gitlab opposes that.
In the meantime, solution architects and salespeople from AWS are going to run around to enterprises with annotated copies of this public post-mortem and say "look, RDS would have solved x, y, z, and we can do that for you if you pay us".
>> the cloud is just another word for someone else's server.
No. The cloud (AWS, GCE, Azure etc) is not "just" like your own server.
Just consider some basic details - you pay someone else to worry about things like power outages, disk failures, network issues, other hardware failures, and so on.
I think that's a little pedantic. The point he was making is that, conceptually speaking, the cloud is comprised of servers not unlike the servers you run yourself. The difference, obviously, is who runs them, the manner in which they're run, the exact manner in which they're utilized by you, etc., but they are still just servers at the bottom of the stack.
"The point he was making is that, conceptually speaking, the cloud is comprised of servers"
But... that "point" is trivial.
Did anyone ever claim that cloud servers are made of magic pixie dust? No.
The real "point" is that, cloud = hardware + service, with service > 0.
As the OP describes, GitLab tries to do their own service (because service is expensive... it is), and they find out, the hard way, that the "service" part is not easy at all.
Amazon & Microsoft & Google run millions of servers each, so they can afford to hire really good people, and establish really good procedures, and so on.
You are completely right. There are reasons to oppose the cloud, but maybe they should focus on improving their systems before moving out of it. At this point in time it is clear that GitLab lacks the talent to run everything themselves. I mean, 5 backups worthless or lost? You can't let interns write your backup system. After all, backup is a large portion of their product.
The worst part of the whole episode, even worse than 'deleted the active database by accident', was '(backups) were no one's responsibility'. This is not an oversight by an individual engineer, but an aspect of the management and company culture. It shows they lack processes derived from requirements. Lots of introspection required from GitLab at this point.
Yes. This should be treated as a serious management failure. Blame does not lie with the individual who made a simple mistake; it lies with the supervisory structure that allowed simple mistakes such as this to result in major data loss (and, as discussed in yesterday's thread [0], has made a series of other serious strategic mistakes that have likely caused them to end up with such inadequate internal hierarchies).
Something like this is not a mere oversight on the part of technical leadership; it's either negligence or incompetence. Whoever is responsible for GitLab's server infrastructure should be having very serious thoughts right now.
Smaller companies that, for whatever reason, can't afford enough senior/good technical people benefit greatly from the cloud. One master, no read partitions, weird backup policies, and the saviour of the day is some engineer's lucky manual snapshot. That sucks. It's better that people start with the cloud and manage things themselves once they are really confident.
It's worth noting that, compared to a good number of recent-ish startups, GitLab now has (I believe) 160 or so employees. Someone could've owned a recurring task to work on backup processes (and I imagine now someone, or more likely multiple people, will).
This likely would never have happened if one of their one hundred and sixty employees had just taken the time to make sure backups were set up at all. You also need to be a sufficiently large organization to warrant the prices that the "cloud" services demand. As stated below, cloud computing is just someone else's server somewhere, and they are making lots of money doing it. Unless you need that level of scalability, and processing, then it's not worth it. I think GitLab stated their entire PostgreSQL database was only a few hundred gigabytes. That's not exactly huge.
"As stated below, cloud computing is just someone else's server somewhere. Unless you need that level of scalability, and processing, then it's not worth it."
I keep seeing people throw this around as if it's God's truth and it frustrates the hell out of me. It may be the case for your organization but everywhere I have worked (from startups to Fortune 500) the cloud allowed our engineers to focus on our product rather than infrastructure maintenance and contributed massively to our success.
The cloud provides convenience, which absolutely does have [some] value. That value usually does not approach the actual cost incurred for companies of a reasonable size (including, IMO, GitLab), but it does exist and it means that everyone should be able to find a smattering of uses for cloud.
The issue I think is that so many people just go balls-to-the-wall 1000% AWS and consider it a done deal, which is terrible, and then go around telling everyone else they should do the same thing, which is also terrible.
The fact is that you can't just lay the responsibility for all of this in Amazon's lap. We'd be even less impressed if GitLab's excuse was "Yeah, we had the Amazon nightly snapshots enabled, so we only lost 19 hours of data" (whoever coincidentally took the backup 6 hours before the incident should get enough of a bonus to make his GitLab salary market-competitive!).
Amazon does start you out with some OK-ish defaults, which is better than allowing someone with 4 days of experience to set everything up, but ultimately that's not going to mean much in unskilled hands.
When it comes down to it, every company still needs someone internal to take responsibility for their infrastructure; that means backups, security, permissions, performance, hardware, and yes, cost. If your company already has someone with those responsibilities, giving that person $500k to hire a few hardware jockeys is going to be much better than giving Amazon $3M to be the sole host for all of your infrastructure. If your company doesn't have anyone with these responsibilities, it needs to get on the stick, as GitLab has clearly demonstrated to us this month.
RDS PostgreSQL is like the Hotel California: you can check in any time you want, but you can never leave. Maybe it is OK as a simple data store for a single app, but not for a real database. I gained a lot of my knowledge of PostgreSQL internals by helping my company get off of RDS and onto a dedicated EC2 instance solution. RDS imposes too many limitations.
Also, your snapshot backup solution is trivial to implement on EC2 or anywhere else for that matter. But it is not easy to do it right in some scenarios. Read https://www.postgresql.org/docs/9.6/static/backup-file.html for details. LVM or ZFS are likely needed under the db layer.
"Maybe it is OK as a simple data store for a single app, but not for a real database."
Currently working at company number 2 with large (many terabytes) databases on RDS and can safely say this is horse shit.
The amount of time and energy it allows our engineers to spend on our actual products instead of database management is worth all of the extra cost and lock in and then some.
Edit: I just realized that you were talking about Postgres on RDS in particular. I don't have experience with Postgres so you may well be right.
a. How many hours would you guess you are saving a month?
b. What takes a factor of 10 less time on RDS than doing it by hand? What task sees the largest time saved?
Because I always wonder when reading this: what am I missing? What haven't we done? Were we lucky? We were running MySQL and Postgres for multi-hundred-million-EUR companies with millions of users, and we did not spend a lot of effort managing them.
4. Zero effort hot backups, automatic fail-overs, and multiple datacenter deployments
5. Low effort migrations of massive amounts of data between DBs when someone inevitably wants to refactor something
6. Zero effort logging and log aggregation
7. Almost zero effort alerting of issues via sms/email/other
I could go on but I'm on my way to work...
When you're paying engineers north of 150K all of this adds up, and I'd much rather throw the money at Amazon to handle this and pay the engineers to focus on our actual product.
We're in the business of PostgreSQL support, and some of our customers use RDS for various reasons. Not having to care about the deployment is one of the usual goals of using a managed environment, but considering they subsequently go and buy support from a third party might be a sign of something.
Of course, my view is biased because we only hear about the issues - there might be a 100x more people using RDS without any issues, and we never hear about them.
In general, the pattern we see is that people start using RDS, and they're fairly happy because it allows them to build their product and RDS more or less works. Then they grow a bit over time, and something breaks.
Which brings me to two main RDS issues that we run into - lack of insight, and difficulty when migrating from RDS with minimum downtime. Once in a while we run into an issue where we wish we could connect over SSH and get some data from the OS, attach gdb to the backend, or something like that. Or using auto_explain. Not even mentioning custom C extensions that we use from time to time ...
"and difficulty when migrating from RDS with minimum downtime"
They're simply uninformed then. AWS Database Migration Service makes zero-downtime migrations trivial between just about any major databases (MySQL, Oracle, Postgres, Aurora, SQL Server, etc.).
Funny you should mention a managed relational database service; Instapaper uses one of those and had more than 12 hours of downtime this week: http://blog.instapaper.com/post/157027537441
No database solution is totally reliable. If storing data is my primary job, like it is GitLab's, I'd like to have as much control of it as possible.
Let's just say that Instapaper's outage was self-inflicted. You don't see them blaming their cloud provider, do you? People make mistakes, and even with a managed relational database service, you can still make mistakes.
The difference is that Instapaper was able to restore from backups, because their managed service performed them properly. The archive data is taking longer to restore, but that's due to design decisions Instapaper made.
I'm typing off the top of my head, but didn't they have like 400GB of database? That would probably take 27 hours to get fully available via S3 at 32,000 kbps, which is about what S3 will provide for first-time hits in my experience.
RDS has severe performance limitations, as in you can't provision more than 30K IOPS, which is about 1/2 the performance of a low-end consumer SSD and about 1/20 the performance of a decent PCI-E SSD. You're way better off running the DB on decent dedicated hardware.
You can get 500K random reads per second and 100K random writes per second using RDS Aurora.
If you truly need more than 30K IOPS, I would recommend leveraging read replicas, a Redis cache, and other solutions before just "throwing money at the problem" and purchasing a million IOPS.
You can't just buy a single enterprise-grade NVMe SSD and call it a day. Are you planning on buying enough to populate at least 2-3 servers with multiple devices, then setting up some type of synchronous replication across them? What type of software layer are you going to use to provide high availability for your data? DRBD? How are you going to manage all of the different failure modes (failed SSD, network partition, split brain, etc.)? How are you going to test it?
I'm afraid you are seriously underestimating the operational capabilities required to successfully operate a highly-available, distributed, SSD storage layer.
Nope, but if I pay a few K for a service I expect it to scale to performance at least comparable to a very low-end device. Why would I use DRBD in a Postgres cluster? I am not underestimating anything; I am simply pointing out that RDS is an overpriced and crappy service. Proper setup & operation of a Postgres cluster is a manageable task. What you totally cannot manage on AWS is the risk of a single tenant using 30% of the resources, or lengthy multi-AZ outages due to bugs in an extremely complex control layer, etc.
I will add my inputs on RDS. I gave this comment on the GitLab incident thread. I actually managed to delete an RDS CloudFormation stack by accident. The night before, I pushed an update to CloudFormation to convert the storage class to provisioned IOPS. Next morning I woke up really early and drove my girlfriend to work. While waiting in the car I wanted to check the status of the update, so I went on the AWS mobile app. Mind you, I have an iPhone 7, but the app was very slow and laggy. As I was scrolling down to find the failure, there was a lag between the screen render and my click. Damn. I clicked on delete. Yeah, fucking delete. No confirmation. It went through. No stop button.
There was no backup because the cfn template I built at the time did not have the flag that said take a final snapshot. If you do not take the final snapshot (via console, api, cfn) you are doomed: all the auto snapshots taken by AWS are deleted upon the removal of the RDS instance.
This was our staging db for one of our active projects, which I and the dev team had spent about a month working to get to staging, and which was under UAT. Fuck. I told my manager and he understood the impact, so he just let me get started on rebuilding. The next morning I got the DB up and running, since luckily I had compiled my runbook when I first deployed it to staging. But it was not fun, because the data is synced via AWS DMS from our on-premise Oracle db, so I needed to get sign-off from a number of departments.
So I learned my first lesson with RDS - make sure the final snapshot flag is enabled (for EC2 users, please remind yourself that anything stored on ephemeral storage is going to be lost upon a hard VM stop/start operation, so back up!!!).
I also learned that RDS is not truly HA in the case of upgrading servers, both minor and major upgrades. I've tested a major upgrade and saw DB connections unavailable for up to 10 minutes. In some minor version upgrades both primary and secondary had to be taken down.
Other small caveats, such as auto minor version upgrades, maintenance windows, automated snapshot retention being capped at 35 days, event logs in the RDS console not lasting more than a day, and converting to provisioned IOPS being expensive, are just some of the small annoyances or "ugh" kind of things I would encourage folks to pay close attention to. Oh yeah, manual snapshots also have to be managed by yourself; kind of obvious, but there is no lifecycle policy... and building a read replica took up to a day in my first attempt at ever creating one.
Of course, now that I've learned these lessons, we have auto and manual snapshots and a better schedule. I encourage you to take ownership of upgrades, even minor versions, so you know how to design your applications to be better at fault tolerance... In the end, the thing I liked most about RDS is the extensive free CloudWatch metrics available. I also recommend people not use the mobile app, and if you do, set up a read-only role / IAM user. The app is way too primitive and laggy. I still enjoy using RDS, the service is stable and quick to use, but just make sure you have the habit of backing up and take serious ownership of and responsibility for the database.
There is no magic silver bullet that will let you upgrade a database without some minor amount of downtime. RDS minimizes this as much as possible by upgrading your standby database, initiating a failover, then creating a new standby. Clients will always be impacted because you have to, by definition, restart your database to be running the new version.
You can select your maintenance window, and you can defer updates as long as you want - nobody will force you to update, unless you check the "auto minor version update" box.
Please don't blame AWS for your lack of understanding of the platform. They try to protect you from yourself, and the default behavior of taking a final snapshot before deleting an instance is in both CloudFormation and the Console. If you choose to override those defaults, don't blame AWS.
>So I learned my first lesson with RDS - make sure the final snapshot flag is enabled (for EC2 users, please remind yourself that anything stored on ephemeral storage is going to be lost upon a hard VM stop/start operation, so back up!!!).
This bit us once. Someone issued a `shutdown -h now` out of habit in an instance that was going for reboot, and it came back without its data, because "shutdown" is the same as "stop", and "stop" on ephemeral instances means "delete all my data". Since the command was issued from inside the VM, no warning or message that would've appeared on the EC2 console was displayed.
Amazon's position on ephemeral storage was shockingly unacceptable and unprofessional. They claimed they had to scrub the physical storage as soon as the stop button was pressed for security purposes, which is a complete cop-out. Of course they can't reallocate that chunk of the disk to the next instance while your stuff is on it, but they could've implemented a small cooldown period between stoppage, scrubbing, and reallocating the disk so that there would at least be a panic button and/or so accidental reboots-as-shutdowns don't destroy data. The only reason they didn't do that is because they didn't want to need to expand their infrastructure to accommodate it. Very sloppy, and not at all OK. That's not how you treat customer data.
Fortunately, AWS has moved on; I don't think that any new instances can be created with ephemeral storage anymore. Pure EBS now.
>I also learned that RDS is not truly HA in the case of upgrading servers, both minor and major upgrades. I've tested a major upgrade and saw DB connections unavailable for up to 10 minutes. In some minor version upgrades both primary and secondary had to be taken down.
You need multi-AZ for true HA. Failover within the same AZ has a small delay, as you've noted.
>I still enjoy using RDS, the service is stable and quick to use, but just make sure you have the habit of backing up and take serious ownership of and responsibility for the database.
As many others in this thread have said, AWS and other cloud providers aren't a silver bullet. Competent people are still needed to manage these sorts of things. GitLab most likely would not have fared any better under AWS.
Don't blame AWS because you don't understand what ephemeral storage is.
There is a significant security reason why they blank the ephemeral storage. How would you feel if a competitor got the same physical server as you and was able to read all of your data? AWS goes to great lengths to protect customer data privacy in a shared, multi-tenant environment. They are very public through their documentation about how this works, so I think it's a bit negligent to blame them because you don't understand the platform.
Did you read my post? I understand what ephemeral storage is, and that giving another instance access to that physical device without scrubbing it is insecure. That's not the point. There's no reason that AWS needs to delete that data the instant a stop command is issued.
AWS gets paid the big bucks to abstract such concerns away in a pleasant manner. The device with customer data can sit in reserve, attached to the customer's account, for a cooldown period (of maybe 24 hours?) that would allow the customer to redeem it. AWS could even charge a fee for such data redemptions to compensate for the temporary utilization of the resource, or they could say ephemeral instances will always cost your use + 1 day. They can put a quota on the number of times you can hop ephemeral nodes.
They could do basically anything else, because basically anything else is better than accidentally deleting data that you need due to a counterintuitive vendor-specific quirk that conflicts with established conventions and habits and then being told "Sorry, you should've read the docs better."
This is an Amazon-specific thing that bucks established convention and converts the otherwise-harmless habits of sysadmins into potential data loss events. It's very bad to do this ever (looking at you, killall Linux v killall Solaris), but it's especially bad to do it on a new platform like AWS where you know lots of people are going to be carrying over their established habits and learning the lay of the land. It is not reasonable for Amazon to tell the users that they just have to suck it up and read the docs more thoroughly next time.
This is not like invoking rm on your system or database root, which is a multi-decade danger that everyone is aware of and acclimated to accounting for, and which has multiple system-level safeguards in place to prevent it: user access control, safe-by-default versions of rm that have been distributed with most major distributions lately, etc., and for which thorough backup and replication solutions exist to provide remedies when inevitable accidents do happen.
The point is that just instantly deleting that data ASAP and providing 0 chance for recovery is wanton recklessness, and there's no excuse for it. Security is not an excuse because there's no reason they have to reallocate the storage the instant the node is stopped.
If such deletions could only be triggered from the EC2 console after removing a safeguard similar to Termination Protection, that may be more reasonable, but allowing a shutdown command from the CLI to destroy the data is patently irresponsible.
Good system design considers that humans will use the system, that humans make mistakes, and it will provide safeguards and forgiveness. Ephemeral storage fails on all of those fronts. Yes, technically, it's the user's fault for mistakenly pressing the buttons that make this happen. But that doesn't matter. The system needs to be reasonably safe. AWS's implementation of ephemeral storage is neither safe nor reasonable.
Amazon has done a good job of tucking ephemeral storage away. It used to be the default on certain instance sizes. As another commenter points out, it now requires one to specifically launch an AMI with instance-backed storage. It's good that they've made it harder to get into this mess, but it's bad that they continue to mistreat customers this way, especially when their prices are so exorbitant.
So, the solution to some customers not understanding the economics and functionality of ephemeral storage is to charge all customers for a minimum of 25 hours of use, even if they only use the instance for a single hour? That seems crazy.
Look, AWS is trying to balance the economics of a large, shared, multi-tenant platform. It would be great if they had enough excess capacity around to keep ephemeral instance hardware unused for 24 hours after the customer terminates or stops the instance, but frankly, that's an edge case, and they would be forcing other customers to subsidize your edge case by charging everyone more.
>So, the solution to some customers not understanding the economics and functionality of ephemeral storage
Let me stop you there. In our case, it wasn't that we didn't understand what ephemeral storage was or how it functioned, or that it would get cleared if the instance was stopped (though I've frequently met people who are confused over whether instance storage gets wiped when a machine is stopped or when it's terminated; it gets wiped when an instance is stopped).
The issue was that someone typed "sudo shutdown -h now" out of habit instead of "sudo shutdown -r now" (and yes, something like "sudo reboot" should've been used instead to prevent such mistakes). Stopping an instance, which is what happens when you "shut down", can have other ramifications that are annoying, like getting a different IP address when it's started back up, but those annoyances are usually pretty easy to recover from, not a big deal. Much different ball park from getting your stuff wiped.
Destroying customer data IS a big deal. It's ALWAYS a big deal. If your system allows users to destroy their data without being 1000% clear about what's happening, your system's design is broken. High-cost actions like that should require multiple confirmations.
Even the behavior of the `rm` command has been adjusted to account for this (though it could be argued that it hasn't been adjusted far enough); for the last several years, an extra flag has been required to remove the filesystem root.
>is to charge all customers for a minimum of 25 hours of use, even if they only use the instance for a single hour? That seems crazy.
One of several potential solutions. It doesn't seem crazy to me; at least, not in comparison to making a platform with such an abnormal design that something which is an innocent, non-destructive command everywhere else can unexpectedly destroy tons of data.
The ideal solution would be for Amazon to fix their design so that this is fully transparent to the user. Instance storage should be transferred to a temporary EBS volume on shutdown and automatically re-applied to a new instance store when the instance is spun back up (it's OK if this happens asynchronously). The EBS volume would follow conventional EBS termination policies; that data shouldn't be deleted except at times when the EBS root disk would also be deleted (typically on instance termination, unless special action is taken to preserve it).
That could be an optional extension, but it should be on by default -- that is, you could start an instance store at a lower cost per hour if you disabled this functionality, similar to reduced redundancy storage in S3, etc. Almost every company would be thrilled to pay the extra few cents per hour to safeguard against the accidental destruction of virtually any quantity of data that might be important.
>Look, AWS is trying to balance the economics of a large, shared, multi-tenant platform. It would be great if they had enough excess capacity around to keep ephemeral instance hardware unused for 24 hours after the customer terminates or stops the instance, but frankly, that's an edge case, and they would be forcing other customers to subsidize your edge case by charging everyone more.
A redemption fee would punish the user who made the mistake for failing to account for Amazon's flawed design. Under this model, such fees should be at least high enough to make up the cost incurred by Amazon in keeping the hardware idle.
This way Amazon can punish people who impugn upon its bad design choices by making them embarrass themselves before their bosses when they have to explain why the AWS bill is $300 higher this month or whatever, and the data won't be gone. Winners all around.
A redemption fee is a good idea, but it would still take engineering effort to build such a feature, so the opportunity cost is that other features customers need wouldn't get built.
Another thing I'd like to point out is that you really need to plan for ephemeral storage to fail. All it takes is a single disk drive failure in your physical host, and you've lost data. If you are using ephemeral storage at all, you should definitely have good, reliable backups, or the data should be protected in other ways (like HDFS replication).
I know about the daily snapshots, but didn't know about the archive logs. Is this something I have to enable? How do I get the logs and how do I restore using them?
It's automatic. Go ahead and launch a new instance, restoring to a point in time (that's how you do restores in RDS). Notice that it gives you calendar day/date/time fields where you can select the recovery point down to the second. This is enabled by replaying the archive logs to get you to the exact point in time.
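From the CLI it looks roughly like this (instance names and the timestamp are placeholders); RDS builds a new instance from the nearest snapshot and replays the archive logs up to the requested time:

    aws rds restore-db-instance-to-point-in-time \
      --source-db-instance-identifier prod-db \
      --target-db-instance-identifier prod-db-pitr \
      --restore-time 2017-02-01T06:00:00Z

    # Or restore to the most recent restorable time instead of a fixed timestamp:
    #   --use-latest-restorable-time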
A great number of issues can be attributed to the selection of Azure as the platform of choice. That said, a little bird told me that the decision was largely a cost factor. "You get what you pay for" never rang more true.
But none of the issues were Azure/cost related except for the slow recovery? I mean, neither AWS nor GCE can make you notice you're not getting cron mail.
Yes, I recall seeing a ticket that referenced Gitlab using Azure because it was heavily subsidized. My company uses Azure for much the same reason, and my experience has been largely positive.
Is Azure cutting deals beyond the usual free 60k over a year or two or whatever it is for cool startups? Azure seems significantly more expensive in general, problems and slowness aside.