Having done this a few times with MySQL-to-MySQL moves across datacenters, there are a couple of things I'd like to mention.
I don't know how Amazon RDS handles replaying binary logs, but in MySQL 5.0 I hit a bug where temporary table creation statements were being ignored. This was a huge problem for us, and at the time we could not find a workaround. Here is the related MySQL bug: http://bugs.mysql.com/bug.php?id=35583
While this move worked for the author, I hope that if you ever do a database move you'll do a test run first. I've experienced numerous bugs in MySQL replication and binary log replay due to temporary tables, duplicate keys from differing auto-increment IDs, and stored procedures/triggers/functions causing havoc.
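To make the failure mode concrete, the pattern looks roughly like this; the table and column names below are made up purely for illustration.

    # On the source server, a session creates a temp table and then uses it
    # to populate a real table (names are illustrative, not from any real schema):
    mysql -e "
      CREATE TEMPORARY TABLE tmp_new_orders AS
        SELECT * FROM orders WHERE status = 'new';
      INSERT INTO order_counts (customer_id, cnt)
        SELECT customer_id, COUNT(*) FROM tmp_new_orders GROUP BY customer_id;
    " mydb
    # If the CREATE TEMPORARY TABLE statement is dropped from the binary log,
    # replaying the log elsewhere fails on the INSERT ... SELECT with
    # "Table 'mydb.tmp_new_orders' doesn't exist" and the replay stops there.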
I posted a comment on the blog, but will leave one here as well.
In regards to the claim of: "quick instance size scaling with no downtime"
I am currently evaluating moving our MySQL server to Amazon's RDS. I liked the sound of Multi-AZ RDS, since it should greatly decrease downtime. However, upon examination I found the following issues:
- Changing an instance size results in downtime. Amazon's docs and support say up to 3 minutes.
- Any failover can take up to 3 minutes to promote the standby to live.
- You will experience downtime during that window.
- Browsing the support forum, some people complained of failovers taking more than 3 minutes, and in some cases the failover got stuck and they experienced a longer outage until they posted on the forum and AWS support manually fixed it.
So in other words, this is not a silver bullet for making MySQL 100% available. And in fact you will experience "up to 3 minutes" of downtime each week during the maintenance window when a failover occurs (unless they fail over before doing the maintenance, which I have not found documented anywhere).
The weekly maintenance window you define for Amazon RDS is just a placeholder. It does not mean that maintenance happens every week; it only happens when there are specific security or MySQL patches that need to be applied, and it is pre-announced in the forums. The maintenance window can also be used to schedule instance scaling events, during which, as you pointed out, there would be downtime. However, the downtime would be limited to the instance failover time (typically less than 3 minutes) if you use a Multi-AZ deployment.
Are you aware whether they do the failover first and then, once it succeeds, perform the maintenance? Or do they just stop the master server to do maintenance and let failover happen automatically?
The first method would eliminate downtime; the second would incur that up-to-3-minute downtime.
Also, in my basic testing, only 1 out of 10 failovers completed within 15 seconds. The rest took between 2 and 3 minutes, and the longest was slightly over 3 minutes.
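If anyone wants to reproduce that kind of test, one way is to force a failover with a reboot and poll the endpoint until it comes back. A rough sketch; the instance name, host, and credentials are placeholders, and the flag names are per the current AWS CLI, so they may differ from older RDS command-line tools:

    # Force a Multi-AZ failover, then poll the endpoint once a second to see
    # how long it stays unreachable (instance id, host, and password are fake):
    aws rds reboot-db-instance --db-instance-identifier mydb --force-failover
    while true; do
      if mysqladmin -h mydb.abc123.us-east-1.rds.amazonaws.com -u admin -psecret \
           ping >/dev/null 2>&1; then
        echo "$(date +%T) up"
      else
        echo "$(date +%T) DOWN"
      fi
      sleep 1
    done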
I've been planning my move of a 25 GB MySQL database over to RDS.
I decided to have a downtime window in the middle of the night rather than doing a live move. I am moving from a dedicated datacenter to AWS all at once, so I'm in a different situation than those moving from an EC2 MySQL instance to RDS, which should be a lot easier.
I followed the RDS documentation, which tells you to break up mysqldump files greater than 1 GB and use mysqlimport to send them over.
I did some benchmarking and found the import time wasn't linear in file size. This may have to do with the fact that I was using the 1.7 GB instance (which I'm upgrading to 15 GB for the live site).
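Loosely, that approach looks something like the sketch below; the host, user, database, paths, and split size are all placeholders rather than anything from my actual setup.

    # Dump each table to a schema .sql file plus a tab-delimited .txt data file:
    mysqldump --tab=/tmp/dump --single-transaction mydb

    # Create the schema on the RDS instance:
    cat /tmp/dump/*.sql | mysql -h myrds.abc123.us-east-1.rds.amazonaws.com -u admin -p mydb

    # Set any data file over ~1 GB aside, then load the rest in one go:
    mkdir /tmp/dump/big
    find /tmp/dump -maxdepth 1 -name '*.txt' -size +1000M -exec mv {} /tmp/dump/big/ \;
    mysqlimport --local --compress \
      -h myrds.abc123.us-east-1.rds.amazonaws.com -u admin -p mydb /tmp/dump/*.txt

    # Split the big ones and load them a chunk at a time; mysqlimport names the
    # target table after the file, so rename each chunk before loading:
    split -b 1000m /tmp/dump/big/big_table.txt /tmp/dump/big/chunk_
    for f in /tmp/dump/big/chunk_*; do
      mv "$f" /tmp/dump/big/big_table.txt
      mysqlimport --local --compress \
        -h myrds.abc123.us-east-1.rds.amazonaws.com -u admin -p mydb /tmp/dump/big/big_table.txt
    done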
We had to do this a few times with large databases. To save some time we recreated the table structure first (use something like Navicat to make life really easy and copy over the structure). We then exported our DBs into CSV files and used mysqlimport with the --compress flag. It took about 4-6 hours to get all the data up, so it's definitely a weekend or late-night job. Of course, with significantly less data this could take much less time; our smaller DBs took less than an hour, and for those we literally just used the copy functions within Navicat.
Forgot to add: to minimize downtime we performed this dump and import during a slow period. If you're using auto-increment PKs, just note where your dumped tables leave off. When ready to roll to prod, put up your maintenance page, export your DBs from production into CSV files again, but now only from those PKs upward (this saves considerable time; you could always do the whole DB again, but I wouldn't recommend it), import (append) the missing data, and voila: a migrated DB. Switch over your DNS or domain settings (however you handle this) and you're up and running on RDS. Hope this helps anyone.
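In case it helps, a rough sketch of that catch-up pass for one table, assuming an auto-increment PK and insert-only changes (updates or deletes after the first dump wouldn't be caught); the table name, host, and example id are placeholders:

    # After the initial bulk load, note the highest id that made it over:
    mysql -N -e "SELECT MAX(id) FROM orders" mydb      # say it prints 1048576

    # At cutover, with the maintenance page up, dump only the newer rows:
    mysqldump --tab=/tmp/delta --single-transaction --where="id > 1048576" mydb orders

    # Append just those rows to the already-migrated table on RDS:
    mysqlimport --local --compress \
      -h myrds.abc123.us-east-1.rds.amazonaws.com -u admin -p mydb /tmp/delta/orders.txt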