Server Migration with Zero Downtime (sisyphiantales.com)
35 points by sisyphian-tales on Sept 13, 2014 | hide | past | favorite | 19 comments



If it was a site with "heavy write traffic", wouldn't the database already be stale anyway while the dump was being taken and restored? Zero downtime is maybe not so hard; zero write loss and zero read-only time is harder.


If it was a website with heavy traffic, mysqldump could take hours to complete and lock your tables long enough to affect requests. Even xtrabackup isn't suitable to run on a master node; it's common to have a dedicated slave just for backup purposes.


You could just enter the new server's MySQL login info on the old server and have both servers work against the new DB. The old server would be a bit slower since every MySQL request has to travel a long way, but it would be a few times easier and faster.


Yeah, that, plus writing the redis data to the new server, since it's not only used for caching. But yeah, it's a very viable option! I'll add it.


One cool trick is that, after you've copied all the data to a new database, you offset the auto-increment to some larger value on all tables in the new DB. Then you can switch the old app to use the new DB and it will immediately start inserting new records into it, but will have a gap between the old and the new auto-incrementing values. If your app was getting a lot of writes you've probably ended up with a few extra rows in the old DB since it took you some time to copy the data over before you could switch the DB connection. However it's no problem since you've created that gap, so you can just copy the missing rows into the new DB later, there will be no collisions.
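A minimal sketch of the offset in MySQL, assuming a hypothetical users table and an offset safely above the old DB's highest id:

    -- run on the NEW database right after the initial copy
    ALTER TABLE users AUTO_INCREMENT = 10000000;
    -- switch the app over, then later backfill the gap:
    -- rows copied from the old DB land below the offset,
    -- so there are no collisions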


That assumes that all the inserts and auto increments happen in the same order for both databases, which can't be guaranteed. You might end up with users with different IDs, which is pretty bad.


Call me crazy, but why didn't you just do MySQL dual master with auto-increment offsets (one on even, one on odd) and then set up replication from old to new?

That would have solved everything in one step.
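For what it's worth, the even/odd part is just two settings in each my.cnf (a sketch assuming exactly two masters):

    # master A takes odd ids: 1, 3, 5, ...
    auto_increment_increment = 2
    auto_increment_offset    = 1

    # master B takes even ids: 2, 4, 6, ...
    auto_increment_increment = 2
    auto_increment_offset    = 2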


Auto-increments are only a small part of the problem. Unless your database is append-only, alternating auto-increments are not going to help.


In my experience, any kind of DB replication is a bitch to set up. Definitely not worth the time for a one-time thing.


I can set that up in about 10 steps, including dumping the db, setting up the offset, importing the db, and setting up reverse replication if you want it. It's super, super easy.

This is 6 of the steps: http://www.percona.com/doc/percona-xtrabackup/2.1/howtos/set...
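The replication half boils down to something like this (a sketch; the host, credentials, and binlog coordinates are placeholders, and the real values come from SHOW MASTER STATUS on the old box):

    -- on the new server, after importing the backup
    CHANGE MASTER TO
        MASTER_HOST = 'old-db.example.com',
        MASTER_USER = 'repl',
        MASTER_PASSWORD = '...',
        MASTER_LOG_FILE = 'mysql-bin.000123',
        MASTER_LOG_POS = 4567;
    START SLAVE;
    SHOW SLAVE STATUS\G    -- wait for Seconds_Behind_Master: 0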


I have done several of these, my favorite one being where we moved servers AND switched all of the underlying code at once (including DB schema). Oh, and the traffic was very write-heavy, and coming not from the web, but from a large number of hardware devices running Java 1.4 over 2G using UDP. The packet routing for this was complex and involved at least two pieces of home-grown software (and even IP spoofing). I love shit like this, and this article highlights a few cool ways you can do this migration. In this case I would have set up replication to a secondary MySQL server, then used reverse proxying to let DNS propagate, and then finally made the secondary MySQL server the master.


I've done this many times with fightlogg.in

This is the process:

1. Set up the new server on the new host (nginx/postgres/python/everything that needs setting up).

2. Wait until 3AM or whenever traffic is at a lull.

3. Put up a message saying something about "server migration in progress".

4. Run pg_dump of the database on the old server.

5. scp that dump over to the new server.

6. pg_restore that dump to the new database server.

7. Add an entry to the new server's pg_hba.conf to allow connections from the old server.

8. Change the old server's app config to use the new postgres server (instead of localhost).

9. Move over the A records.
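Steps 4-6 in shell form (a sketch; the host and database names are placeholders, and pg_restore needs a non-plain-text dump, hence the -Fc):

    # on the old server
    pg_dump -Fc mydb > mydb.dump              # step 4
    scp mydb.dump newhost:/tmp/               # step 5

    # on the new server
    createdb mydb
    pg_restore -d mydb /tmp/mydb.dump         # step 6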

The only downtime is the amount of time to do steps 6, 7 and 8. My entire DB is less than 1GB so it takes like 3 minutes to migrate.

This is made easy by me only having one datastore to move over. If the project has more than one datastore, it would probably take all day to migrate everything over.


The only difference I'd make here is using rsync to transfer the dump. If you're migrating a large database (frequently MySQL, in my experience), you can run rsync in advance.

With rsync you can cut the downtime between disabling the old host and promoting the new host to the time it takes to transfer the changes applied since your previous rsync.

For small hosts/data-sets this might not matter, but for large ones you'll get a real saving.
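Concretely, the pattern is just two runs of the same command (paths and host are placeholders):

    # days ahead: bulk transfer while the old host is still live
    rsync -avz /var/backups/db.dump newhost:/var/backups/

    # at cutover: stop writes, dump again, rsync again;
    # the rolling checksum means only changed blocks go over the wire
    rsync -avz /var/backups/db.dump newhost:/var/backups/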


> "The only downtime is the amount of time to do steps"

Don't you have downtime already at step 3?


Apache mod_proxy works well for this: the old server invisibly hits the new server and passes back the response. No fussing with dual databases, dual upload directories, different domains, AJAX cross-domain issues, or anything.
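Something like this in the old server's vhost (a sketch; the new server's address is a placeholder):

    ProxyPreserveHost On
    ProxyPass        / http://new-server.example.com/
    ProxyPassReverse / http://new-server.example.com/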


There aren't many use cases for apache/httpd at all these days; with nginx you can proxy too.
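The nginx equivalent is a few lines (a sketch; the upstream is a placeholder):

    location / {
        proxy_pass http://new-server.example.com;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }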


Agreed.

You can also use HAProxy, or rinetd, if you want to handle other services too.


With option 3′ (old server as proxy) you wouldn't even need a temporary subdomain; just enable proxying and you're good to go.


Don't know all the details of the service, but you could probably also have used haproxy, requiring no changes on the production servers (the change is swapping the server for a proxy).

First install and configure haproxy to forward everything to your existing environment. Change DNS to the haproxy box. Add the new server to haproxy and remove the current one. Move DNS from haproxy to the new server.
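A minimal config for that middle step (a sketch; addresses are placeholders):

    frontend web
        bind *:80
        default_backend app

    backend app
        # start with only the existing environment...
        server old old-server.example.com:80 check
        # ...then add the new server and drop the old line
        # server new new-server.example.com:80 check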



