Backing up a MySQL database via ZFS snapshots

Solution 1:

If you use InnoDB for all of your tables and set innodb_flush_log_at_trx_commit to:

  • 1 (the contents of the InnoDB log buffer are written out to the log file at each transaction commit and the log file is flushed to disk), or
  • 2 (the contents of the InnoDB log buffer are written to the log file after each transaction commit and the log file is flushed to disk approximately once per second),

then you don't need FLUSH TABLES before taking the snapshot; just run zfs snapshot directly. InnoDB will recover to a consistent state from its redo log when a server is started from that snapshot. (Note that with a value of 2, roughly the last second of committed transactions may not yet have been flushed to disk and could be missing from the snapshot; only 1 guarantees that no committed transaction is lost.)

Ref: https://dev.mysql.com/doc/refman/5.5/en/innodb-parameters.html#sysvar_innodb_flush_log_at_trx_commit
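For example, a minimal sketch (the credentials and the tank/mysql dataset name are placeholders for your own setup):

# check how InnoDB flushes its log at commit
mysql -uroot -p -e "SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';"

# if the value is 1 (or 2, accepting up to ~1s of possible loss),
# snapshot the dataset holding the MySQL datadir directly
zfs snapshot tank/mysql@$(date +%Y-%m-%d_%H:%M)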

Solution 2:

You need a full database lock to back up most databases consistently.

The manual at https://dev.mysql.com/doc/refman/5.5/en/backup-methods.html says FLUSH TABLES WITH READ LOCK is the right way to get a consistent file system snapshot, and it mentions ZFS by name:

Making Backups Using a File System Snapshot

If you are using a Veritas file system, you can make a backup like this:

  1. From a client program, execute FLUSH TABLES WITH READ LOCK.
  2. From another shell, execute mount vxfs snapshot.
  3. From the first client, execute UNLOCK TABLES.
  4. Copy files from the snapshot.
  5. Unmount the snapshot.

Similar snapshot capabilities may be available in other file systems, such as LVM or ZFS.
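Translated into ZFS terms, those same steps would look roughly like this (the tank/mysql dataset, its mountpoint, and the /backup target are placeholders):

# 1. from a client, lock the tables and keep that session open:
#      mysql> FLUSH TABLES WITH READ LOCK;
# 2. from another shell, take the snapshot
zfs snapshot tank/mysql@backup
# 3. back in the first client:
#      mysql> UNLOCK TABLES;
# 4. copy files out of the read-only snapshot directory
cp -a /tank/mysql/.zfs/snapshot/backup /backup/mysql-$(date +%F)
# 5. destroy the snapshot once the copy is safely elsewhere
zfs destroy tank/mysql@backup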

It is kind of ridiculous that they left out the fact that you need FLUSH TABLES table_a, table_b, table_c FOR EXPORT for InnoDB tables from these instructions. It's also stupid to have to specify each table like that. But as EEAA says, you can generate the list of tables fairly easily as you begin the backup.
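For example, something along these lines could build the statement for you (a sketch only; 'mydb' and the credentials are placeholders, and the resulting statement still has to be run on the session that holds the lock while you snapshot):

# build a FLUSH TABLES ... FOR EXPORT statement from the InnoDB tables in one schema
tables=$(mysql -uuser -ppassword -NB -e "
    SELECT GROUP_CONCAT(table_name SEPARATOR ', ')
      FROM information_schema.tables
     WHERE table_schema = 'mydb' AND engine = 'InnoDB';")
echo "FLUSH TABLES $tables FOR EXPORT;"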

As for holding the lock, you must keep the db connection active while performing the snapshot, because the lock is released as soon as the session that took it disconnects.

Generally I'd use something like Perl or another programming language that can connect, lock the database, take the snapshot while keeping that connection open, then unlock and disconnect. It's not complex. I'd wager there are tools out there that do this already, but writing one is easy.

I say easy, not complex, and so on a few times; I'm assuming you have some basic programming or good scripting skills.

Solution 3:

I've ripped off and adapted a conceptually simple Bash script that I found in another Server Fault post by Tobia. It should get you about 90% of the way there.

#!/bin/bash
# $dataset is the ZFS dataset (under zpool/) that holds the MySQL datadir;
# the host/user/password below are placeholders for your own credentials
mysql_locked=/var/run/mysql_locked

# flush & lock MySQL, touch mysql_locked, and wait until it is removed;
# the "delimiter" change keeps the client from treating the ";" inside the
# shell loop below as an end-of-statement marker
mysql -hhost -uuser -ppassword -NB <<EOF &
flush tables with read lock;
delimiter ;;
system touch $mysql_locked
system while test -e $mysql_locked; do sleep 1; done
exit
EOF

# wait for the background mysql client to touch mysql_locked,
# i.e. for the read lock to be in place
while ! test -e $mysql_locked; do sleep 1; done

# take a snapshot of the filesystem while MySQL is held locked
zfs snapshot zpool/$dataset@$(date +"%Y-%m-%d_%H:%M")

# remove the flag file; the background mysql client exits,
# dropping its connection and releasing the lock
rm -f $mysql_locked

Here, the mysql command is run in the background; it takes the read lock, touches the flag file, and then waits for the file to disappear before exiting, which releases the lock. Meanwhile the main script waits until the file exists, takes the snapshot, and then deletes the file.

The file pointed to by $mysql_locked needs to be accessible to both machines, which should be easy enough since they can both access a common dataset (although they might mount it at different paths, and you should account for that).
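Once it has run, you can confirm the snapshot exists and, if you like, stream it to another box for an off-host copy (dataset, snapshot, and host names here are placeholders):

# list the snapshots of the dataset
zfs list -t snapshot -r zpool/mydataset

# optionally send one snapshot to another machine for safekeeping
zfs send zpool/mydataset@2015-06-01_03:00 | ssh backuphost zfs receive backuppool/mysql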