How can I back up multi-gigabyte databases in MySQL? Bonus points for speedier restores
We have around 90 GB in a MySQL database, so here are some suggestions:
- You can try the following call:
mysqldump -u USER -pPASS --single-transaction -Q --databases foo bar
(Check the docs to confirm those switches work for your version.)
- If you use MyISAM tables only, copying the database directory to another drive and tar'ing it might also be an alternative. This will not work with InnoDB, though, since InnoDB also keeps data in a shared tablespace file outside the database directory. You also have to test this carefully.
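A rough sketch of the copy-and-tar approach for MyISAM (credentials, paths, and the database name foo are placeholders). One subtlety: FLUSH TABLES WITH READ LOCK is released as soon as the client connection closes, so the tar has to run from inside the same mysql session, here via the client's SYSTEM command:

```shell
# Backup target with a date stamp (path is a placeholder).
BACKUP=/mnt/backup/foo-$(date +%F).tar.gz

# The read lock only holds while this connection is open, so tar is
# launched from within the session using the mysql client's SYSTEM command.
mysql -u USER -pPASS <<EOF
FLUSH TABLES WITH READ LOCK;
SYSTEM tar czf $BACKUP -C /var/lib/mysql foo
UNLOCK TABLES;
EOF
```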
- Enable binary logs, and back them up. (This is probably my favorite!)
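A sketch of the binlog approach (paths and credentials are assumptions; check where your server actually writes its binary logs):

```shell
# Binary logging must be enabled in my.cnf first, e.g. under [mysqld]:
#   log-bin = /var/log/mysql/mysql-bin

# Rotate to a fresh binlog so all earlier ones are closed and safe to copy.
mysqladmin -u USER -pPASS flush-logs

# Copy the binlogs to the backup target; right after flush-logs the
# newest (still open) file is nearly empty, so little is at risk.
mkdir -p /mnt/backup/binlogs-$(date +%F)
rsync -a /var/log/mysql/mysql-bin.* /mnt/backup/binlogs-$(date +%F)/
```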
- Set up replication and do the backup on one of your slaves, then let it catch up with the rest. It's pretty nice once it runs, but the problem is getting it running. Replicating is no fun with MySQL. :(
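Once replication does run, backing up from a slave can look something like this sketch (credentials and paths are placeholders):

```shell
# Pause only the SQL thread: the slave keeps receiving events but stops
# applying them, so its data is frozen at a consistent point.
mysql -u USER -pPASS -e "STOP SLAVE SQL_THREAD;"

# Dump from the slave while the master serves traffic undisturbed.
mysqldump -u USER -pPASS --single-transaction --databases foo bar \
    > /mnt/backup/slave-dump-$(date +%F).sql

# Resume; the slave replays the buffered relay log and catches up.
mysql -u USER -pPASS -e "START SLAVE SQL_THREAD;"
```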
- Does your filesystem support snapshots? If it does, that is a great way to get a consistent copy with very little downtime.
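With LVM, for instance, the idea can be sketched like this (the volume group, volume names, snapshot size, and mount points are all made up; adjust for your layout):

```shell
# Block writes briefly while the snapshot is created, so the files on the
# snapshot are consistent. The lock only holds while this connection is
# open, so lvcreate runs from inside the session via SYSTEM.
mysql -u USER -pPASS <<'EOF'
FLUSH TABLES WITH READ LOCK;
SYSTEM lvcreate --snapshot --size 5G --name mysql-snap /dev/vg0/mysql
UNLOCK TABLES;
EOF

# Mount the snapshot read-only and copy at leisure; the live server is
# unaffected while this runs.
mount -o ro /dev/vg0/mysql-snap /mnt/snap
rsync -a /mnt/snap/ /mnt/backup/mysql-files-$(date +%F)/
umount /mnt/snap
lvremove -f /dev/vg0/mysql-snap
```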
- Use a dedicated backup solution, such as Amanda.
The best way would be one that doesn't interrupt normal operation. For high-reliability, fault-tolerant systems, you keep two DBs in sync and back up the secondary. (e.g. this howtoforge article)
Otherwise, read the manual: use mysqldump as you currently do, use the mysqlhotcopy script (MyISAM tables only; used much like mysqldump), or stop the DB and simply copy the .frm, .MYD, and .MYI files (using rsync).
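The stop-and-copy variant might look like this sketch (the service name and paths vary by distro and are placeholders here):

```shell
# Stop the server so the table files are quiescent.
/etc/init.d/mysql stop

# Copy the .frm/.MYD/.MYI files; rsync only transfers what changed since
# the last run, which keeps repeated backups short.
rsync -a /var/lib/mysql/ /mnt/backup/mysql-files/

/etc/init.d/mysql start
```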
I think the stop+copy files mechanism is the fastest you're going to get.
If some of your databases are read-only or rarely modified, single them out into a less frequent cron job.
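For example, a crontab along these lines (the database names, schedule, and paths are invented) dumps the busy databases nightly and the rarely modified one only on Sundays:

```shell
# m h dom mon dow  command
0 2 * * *  mysqldump -u USER -pPASS --single-transaction --databases foo bar | gzip > /mnt/backup/daily.sql.gz
0 3 * * 0  mysqldump -u USER -pPASS --single-transaction --databases archive | gzip > /mnt/backup/weekly.sql.gz
```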
To answer your question: the --extended-insert option works well for me. If you have enough resources while restoring, --extended-insert packs huge chunks of data into a single INSERT statement, making the restore faster.
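For example (credentials and paths are placeholders; note that --extended-insert is part of mysqldump's default --opt group, so it is usually already on, but spelling it out does no harm):

```shell
# Dump with multi-row INSERT statements.
DUMP=/mnt/backup/foo-$(date +%F).sql
mysqldump -u USER -pPASS --extended-insert --databases foo > "$DUMP"

# On restore, each INSERT carries many rows, so the server parses and
# commits far fewer statements.
mysql -u USER -pPASS < "$DUMP"
```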