Creating a *Consistent* *Online* MySQL Backup with InnoDB AND MyISAM tables
I just spent hours building a new database server to replace 2 failed servers, starting from a mysqldump file created a month ago, then used my server's bin-logs to bring that dump up to date. Now that the new database server has been online for a few days, I need to complete the server set-up and re-implement a backup strategy.
I need to bring (at least) one MySQL Replication Slave online for the new server, as well as start creating usable backups for when things fail again.
Before I move further:
- I forgot (eep) to enable the bin-log on the new MySQL server before importing the old logical backup and executing the old bin logs (blame it on being awake for 24 hours at that point, I guess).
- My data consists of several databases, all of which have a mix of MyISAM tables AND InnoDB tables.
- Also, along the same lines of enabling the bin logs on the new server, I forgot to turn on the `innodb_file_per_table` option in `my.cnf` before importing all the data. The binary log is now enabled on the new MySQL server, but the file-per-table option is pretty much a non-fix until I can import another logical backup (I will probably end up bringing up a second slave, promoting the first slave to master, and ditching the current master, as the config just isn't where I need it to be).
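For reference, the two settings described above would go into `my.cnf` roughly as follows (a sketch; the binlog path is an assumption, and `innodb_file_per_table` only affects tables created or rebuilt after it is enabled):

```ini
[mysqld]
# Enable the binary log so point-in-time recovery and replication work
log-bin = /var/log/mysql/mysql-bin

# Give each InnoDB table its own .ibd file instead of one shared ibdata file;
# existing tables stay in the shared tablespace until they are rebuilt/re-imported
innodb_file_per_table = 1
```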
So, the problem:
I need to take a backup of the MySQL server, but I cannot take the new MySQL server offline. I need to continue taking writes and serving reads with little to no downtime. This "little-to-no downtime" requirement is defined as less than 10 minutes. My data currently uses about 100GB of space on the server (MySQL data files), and the logical backups are about 50GB (that's a lot of indexes..haha). I don't care whether this backup is logical or a copy of the data files from the MySQL data directory. I can create a logical backup off the slave after I get it online.
And, the question:
How would you create this needed backup? I know it's not easy, and a lot of people will say it's not possible. But I refuse to believe that it isn't; there has to be a way to get it done.
A note about the server: it's running Ubuntu 10.04, MySQL 5.1.41, and the filesystem the data is stored on is ext3. The server is running on the Rackspace Cloud, so the filesystem pretty much "is what it is" unless I can re-partition the root device with another FS (XFS maybe?) to do a snapshot.
I've read about Percona's XtraBackup tool, but it only works for InnoDB tables. They have a MyISAM tool, but I don't really understand how it could (or even if it does) work in conjunction with XtraBackup to create a completely consistent backup.
I've read about `mysqlhotcopy`, but it only works with MyISAM tables.
Obviously I know about `mysqldump`, but (also obviously) the problem here is creating a consistent backup without locking the entire database server for the number of hours it would take to write out that entire SQL backup file.
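For completeness, the usual non-locking `mysqldump` invocation is sketched below; the catch is that `--single-transaction` only gives a consistent snapshot for InnoDB tables, so with a MyISAM/InnoDB mix it doesn't solve the consistency problem on its own (paths and credentials are placeholders):

```shell
# Consistent snapshot for InnoDB tables only; MyISAM tables are copied "live".
# --master-data=2 writes the binlog coordinates as a comment into the dump,
# which is exactly what's needed to seed a replication slave from it later.
mysqldump --all-databases \
  --single-transaction \
  --master-data=2 \
  -u root -p > /backups/full-dump.sql
```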
Solution 1:
With regard to XtraBackup, or more specifically `innobackupex`, check out this explanation. Excerpt:
> The innobackupex program adds more convenience and functionality by also permitting you to back up MyISAM tables and .frm files. It starts xtrabackup, waits until it finishes copying files, and then issues FLUSH TABLES WITH READ LOCK to prevent further changes to MySQL's data and flush all MyISAM tables to disk. It holds this lock, copies the MyISAM files, and then releases the lock.
>
> The backed-up MyISAM and InnoDB tables will eventually be consistent with each other, because after the prepare (recovery) process, InnoDB's data is rolled forward to the point at which the backup completed, not rolled back to the point at which it started. This point in time matches where the FLUSH TABLES WITH READ LOCK was taken, so the MyISAM data and the prepared InnoDB data are in sync.
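As a concrete sketch (the backup directory and credentials are placeholders; double-check the options against `innobackupex --help` on your version), a full run plus the prepare step looks like:

```shell
# Take the backup while mysqld keeps serving reads and writes;
# MyISAM tables are only read-locked briefly at the very end
innobackupex --user=root --password=... /backups/

# "Prepare" (apply the InnoDB log) the timestamped directory it created,
# so the copied data files are consistent and ready to seed a slave
innobackupex --apply-log /backups/TIMESTAMP/
```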
Hope this helps.
Cheers
Solution 2:
> This "little-to-no downtime" requirement is defined as less than 10 minutes
OK, two obvious solutions:
1) Set up a mirrored filesystem to hold the MySQL data files. When you want to do a backup, shut down mysqld, remove one of the mirrors from the set, then start up mysqld again. Next, remount the drive you've just taken out of the mirror somewhere else and either copy the raw files off, or start up a second instance of mysqld (different port, different path to the files) and run mysqldump there. When finished, shut down the second mysqld instance and put the drive back into the mirror set (the RAID system should handle updating the drive before it's read from as part of the mirror, so there's no need to shut down the first instance at this point).
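Assuming a Linux md RAID-1 mirror (the device names below are illustrative), option 1 can be sketched as:

```shell
# 1. Stop MySQL cleanly so the on-disk files are consistent
mysqladmin -u root -p shutdown

# 2. Break the mirror: mark one member failed and remove it from the set
mdadm /dev/md0 --fail /dev/sdc1
mdadm /dev/md0 --remove /dev/sdc1

# 3. Bring MySQL back up on the remaining member (downtime ends here)
/etc/init.d/mysql start

# 4. Mount the detached member read-only and copy the raw files off
mount -o ro /dev/sdc1 /mnt/backup
rsync -a /mnt/backup/mysql/ /backups/$(date +%F)/
umount /mnt/backup

# 5. Re-add the member; md resyncs it in the background
mdadm /dev/md0 --add /dev/sdc1
```

The only downtime is between steps 1 and 3, which is well under the 10-minute budget.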
2) Similar to 1, but implement the mirroring at the application level: set up your current mysqld instance as a master and configure a second instance as a slave. When you want to create a backup, stop replication on the slave, do the backup, then start replication again (zero downtime on the master).
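Once the slave exists, option 2 reduces to a few statements run against the slave (a minimal sketch; credentials and paths are placeholders):

```shell
# On the slave: stop applying replication events (the I/O thread
# keeps fetching binlogs from the master in the meantime)
mysql -u root -p -e "STOP SLAVE SQL_THREAD;"

# Dump everything from the frozen slave; the master keeps
# taking writes the whole time
mysqldump --all-databases -u root -p > /backups/slave-dump.sql

# Resume replication; the slave catches up from the relay log
mysql -u root -p -e "START SLAVE SQL_THREAD;"
```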