mysqldump & gzip commands to properly create a compressed file of a MySQL database using crontab

Solution 1:

First the mysqldump command is executed and its output is redirected through the pipe. The pipe sends mysqldump's standard output to the gzip command as standard input. Each output redirection operator (>) that follows then re-points gzip's output to the next filename, so the data is only saved in the last filename; the earlier files are created but left empty.

For example, this command will dump the database, run it through gzip, and the data will finally land only in three.gz:

mysqldump -u user -pupasswd my-database | gzip > one.gz > two.gz > three.gz

$> ls -l
-rw-r--r--  1 uname  grp     0 Mar  9 00:37 one.gz
-rw-r--r--  1 uname  grp  1246 Mar  9 00:37 three.gz
-rw-r--r--  1 uname  grp     0 Mar  9 00:37 two.gz

My original answer is an example of redirecting the database dump to several compressed files (without double compressing). (I skimmed the question and seriously missed the point - sorry about that.)
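
For completeness, if the goal is several identical compressed copies without compressing twice, a simple approach (not from the original answer; the filenames are just examples) is to compress once and copy the result:

mysqldump -u user -pupasswd my-database | gzip > one.gz
cp one.gz two.gz
cp one.gz three.gz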

This is an example of recompressing files:

mysqldump -u user -pupasswd my-database | gzip -c > one.gz; gzip -c one.gz > two.gz; gzip -c two.gz > three.gz

$> ls -l
-rw-r--r--  1 uname  grp  1246 Mar  9 00:44 one.gz
-rw-r--r--  1 uname  grp  1306 Mar  9 00:44 three.gz
-rw-r--r--  1 uname  grp  1276 Mar  9 00:44 two.gz

This is a good resource explaining I/O redirection: http://www.codecoffee.com/tipsforlinux/articles2/042.html
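
Since the question is about crontab, a minimal crontab entry for such a dump could look like this (schedule, credentials and paths are placeholders):

30 2 * * * /usr/bin/mysqldump -u user -pupasswd my-database | gzip > /home/user/backup/my-database.sql.gz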

Solution 2:

If you need to add a date and time to your backup file name (tested on CentOS 7), use the following:

/usr/bin/mysqldump -u USER -pPASSWD DBNAME | gzip > ~/backups/db.$(date +%F.%H%M%S).sql.gz

This will create a file such as: db.2017-11-17.231537.sql.gz
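
Since date-stamped files accumulate, you may also want to prune old backups from the same cron job; a sketch assuming GNU find (the 14-day retention is just an example):

find ~/backups -name 'db.*.sql.gz' -mtime +14 -delete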

Solution 3:

You can use the tee command to redirect output:

/usr/bin/mysqldump -u user -pupasswd my-database | \
tee >(gzip -9 -c > /home/user/backup/mydatabase-backup-`date +\%m\%d_\%Y`.sql.gz)  | \
gzip > /home/user/backup2/mydatabase-backup-`date +\%m\%d_\%Y`.sql.gz 2>&1

See the tee documentation here.
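
Note that the >( ... ) process substitution used above is a bash feature, while cron normally runs jobs with /bin/sh, so if this goes into a crontab you may need to select bash explicitly; a sketch with placeholder paths and schedule:

SHELL=/bin/bash
0 3 * * * /usr/bin/mysqldump -u user -pupasswd my-database | tee >(gzip -c > /home/user/backup/db1.sql.gz) | gzip -c > /home/user/backup2/db2.sql.gz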

Solution 4:

Besides m79lkm's solution, my 2 cents on this topic:

Don't pipe (|) the result directly into gzip; first dump it as a .sql file, and then gzip it.
So go for && gzip instead of | gzip if you have the free disk space.

The dump itself will be about twice as fast, but you will need a lot more free disk space. Your tables will be locked for a shorter time, so there is less downtime or slow responsiveness in your application. The end result will be exactly the same.

So it is very important to check the free disk space first with
df -h
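
If you want a rough idea of how much space the uncompressed dump will need, you can first ask MySQL for the size of the database (an approximation only - the actual .sql file will differ somewhat; 'my-database' is a placeholder):

mysql -u user -p -e "SELECT ROUND(SUM(data_length + index_length)/1024/1024, 1) AS size_mb FROM information_schema.tables WHERE table_schema = 'my-database';"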

And then execute your dump like this:

mysqldump -u user -p [database_name] > dumpfilename.sql && gzip dumpfilename.sql

Another good tip is to use the option --single-transaction. It prevents the tables from being locked but still results in a consistent backup, so you might consider using that option. See the docs here. And since it does not lock your tables for most queries, you can actually pipe the dump directly into gzip (in case you don't have the free disk space):

mysqldump --single-transaction -u user -p [database_name] | gzip > dumpfilename.sql.gz
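
Whichever variant you use, it can be worth verifying the compressed archive afterwards; a quick check (not part of the original answer):

gzip -t dumpfilename.sql.gz && gunzip -c dumpfilename.sql.gz | head -n 5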

Solution 5:

Personally, I created a file.sh (permissions 755) in the root directory, a file that does this job, and crontab runs it.

Crontab code:

10 2 * * * root /root/backupautomatique.sh
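
Note that the root field before the command means this line belongs in /etc/crontab (or a file under /etc/cron.d/); in a per-user crontab edited with crontab -e, the user field is left out:

10 2 * * * /root/backupautomatique.sh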

File.sh code:

rm -f /home/mordb-148-251-89-66.sql.gz #(To erase the old one)

mysqldump mor | gzip > /home/mordb-148-251-89-66.sql.gz # (what you have done)

scp -P2222 /home/mordb-148-251-89-66.sql.gz root@otherip:/home/mordbexternes/mordb-148-251-89-66.sql.gz

(to send a copy somewhere else in case the sending server crashes because it is too old, like me ;-))
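
For completeness, restoring from such a compressed dump on the destination server is the reverse operation (user and database name are placeholders):

gunzip < /home/mordbexternes/mordb-148-251-89-66.sql.gz | mysql -u user -p mor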