Transferring about 300 GB in files from one server to another
Solution 1:
Just to flesh out Simon's answer, rsync is the perfect tool for the job:
Rsync is a fast and extraordinarily versatile file copying tool. It can copy locally, to/from another host over any remote shell, or to/from a remote rsync daemon. It offers a large number of options that control every aspect of its behavior and permit very flexible specification of the set of files to be copied. It is famous for its delta-transfer algorithm, which reduces the amount of data sent over the network by sending only the differences between the source files and the existing files in the destination. Rsync is widely used for backups and mirroring and as an improved copy command for everyday use.
Assuming you have ssh access to the remote machine, you would want to do something like this:
rsync -hrtplu path/to/local/foo user@remote.server:/path/to/remote/bar
This will copy the directory path/to/local/foo to /path/to/remote/bar on the remote server. A new subdirectory named bar/foo will be created. If you only want to copy the contents of a directory, without creating a directory of that name on the target, add a trailing slash:
rsync -hrtplu path/to/local/foo/ user@remote.server:/path/to/remote/bar
This will copy the contents of foo/ into the remote directory bar/.
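To make the trailing-slash distinction concrete, here is a sketch with a hypothetical file foo/file.txt (host and filename are made up for illustration):
# without trailing slash: the directory itself is copied
rsync -hrtplu path/to/local/foo user@remote.server:/path/to/remote/bar
# result on the remote: /path/to/remote/bar/foo/file.txt
# with trailing slash: only the contents are copied
rsync -hrtplu path/to/local/foo/ user@remote.server:/path/to/remote/bar
# result on the remote: /path/to/remote/bar/file.txt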
A few relevant options:
-h, --human-readable output numbers in a human-readable format
-r, --recursive recurse into directories
-t, --times preserve modification times
-p, --perms preserve permissions
-l, --links copy symlinks as symlinks
-u, --update skip files that are newer on the receiver
--delete delete extraneous files from dest dirs
-z, --compress compress file data during the transfer
-C, --cvs-exclude auto-ignore files in the same way CVS does
--progress show progress during transfer
--stats give some file-transfer stats
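Putting a few of these together, a one-off 300 GB migration might look like the following sketch (host and paths are placeholders; --delete is only appropriate if the destination should exactly mirror the source):
rsync -hrtpluz --progress --stats path/to/local/foo/ user@remote.server:/path/to/remote/bar
You can also add -n (--dry-run) first to see what would be transferred without actually copying anything.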
Solution 2:
It depends on how fast it needs to be copied, and how much bandwidth is available.
For a poor network connection, consider the bandwidth of a truck filled with tapes. (Read: mail a 2.5-inch HDD, or just drive it there yourself. 300 GB drives should be easy to find.)
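As a rough back-of-the-envelope check: 300 GB is about 2,400 gigabits, so a sustained 1 Gbit/s link moves it in roughly 40 minutes, 100 Mbit/s takes around 6.7 hours, and 10 Mbit/s takes close to three days. At the low end, the courier really does win.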
If it is less time critical, or you have plenty of bandwidth, then rsync is great. If there is an error you can just continue without re-copying the earlier files.
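For example, if a transfer dies partway through, rerunning the same command skips everything already copied. Adding -P (shorthand for --partial --progress) also keeps partially transferred files, so a large file does not restart from zero. A sketch with placeholder paths:
rsync -hrtplu -P path/to/local/foo/ user@remote.server:/path/to/remote/bar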
[Edit] I forgot to add that you can run rsync several times if your data is being modified during the copy.
Example:
1) First rsync run while the data is in use: all data gets copied. This may take some time.
2) Run rsync again: only the files changed in the meantime get copied. This should be fast.
You can do this several times until there are no changes, or you can do it the smart/safe way by making the data read-only during the final copy. (E.g. if it lives on a network share, set that share to read-only; or rsync the data first, then at night set the share read-only while you run it a second time.)
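A sketch of that two-pass approach (host and paths are placeholders; --delete on the final pass removes files that disappeared from the source between the two runs):
# first pass, data still live; the bulk of the 300 GB moves here
rsync -hrtplu /srv/data/ user@remote.server:/srv/data/
# ... make the source read-only ...
# final pass: fast, and --delete brings the copy into exact sync
rsync -hrtplu --delete /srv/data/ user@remote.server:/srv/data/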
Solution 3:
I would go for rsync! I am using it to back up my server to an offsite server and it works fine. Usually there are only a few MB to copy, but some days it goes up to 20-30 GB, and it has always worked without a problem.
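For what it's worth, a minimal sketch of how such a nightly offsite backup could be scheduled via cron (user, host, and paths are made up for illustration):
# crontab entry: run every night at 02:00
0 2 * * * rsync -az --delete /var/www/ backupuser@offsite.example.com:/backups/www/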