How do I resume a copy of a large file in Linux? I have a huge file (several gigabytes) partially copied to a network drive. The copy took a long time and was mostly done before it stopped due to a network problem, which is now fixed. How do I resume the file copy? I don't want an inefficient script, and ecp didn't work (it doesn't seem to work for large files).


Solution 1:

I would try rsync -a /from/file /dest/file.
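Note that when both paths look local to rsync (a mounted network share counts as local), it defaults to copying whole files, so a plain -a may retransmit everything. A minimal sketch of a resume-friendlier invocation, assuming the partial file already exists at the destination:

rsync -a --inplace --no-whole-file --progress /from/file /dest/file

Here --inplace updates the existing destination file rather than writing to a temporary file and renaming, and --no-whole-file re-enables the delta-transfer algorithm that rsync normally skips for local copies.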

Solution 2:

If you know you simply need to append to the local file and do not want to use rsync (which could potentially take a long time calculating checksums), you can use curl. For example, if you have a large file on a slow removable USB stick mounted at /media/CORSAIR/somefile.dat and only half of it is in the current directory, to resume:

curl -C - -O "file:///media/CORSAIR/somefile.dat"
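curl -C - works out the resume offset from the size of the existing local file and appends from there. To sanity-check the result afterwards, a quick byte-for-byte comparison (same paths as above):

cmp /media/CORSAIR/somefile.dat somefile.dat && echo "files match"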

Solution 3:

The command you want is going to be something like

rsync -v --append /path/to/afile /mnt/server/dest/afile

unless you can access the server over SSH and run rsync that way, in which case the command FlyingFish gave is best.
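Be aware that --append trusts that the data already at the destination matches the source. If the partial copy might be corrupt, rsync 3.0+ also has --append-verify, which checksums the existing destination data before appending; a sketch with the same paths:

rsync -v --append-verify /path/to/afile /mnt/server/dest/afile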

Solution 4:

Yes, rsync is the way to go. We've transferred 100+ GB of data over rsync+ssh. If you're looking for a true backup copy, make sure you use the -a (archive) option to preserve file attributes (times, owners, perms, etc.).

host1> rsync -aP file user@host2:/path/to/new/dir/

It's also useful for copying large files that may change during the course of a migration. You can pre-load the data onto the destination and, once ready for the final copy, run rsync again; the second pass transfers only the differences and takes a fraction of the time. You can save on actual downtime by using rsync to its full potential.
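A sketch of that two-pass pattern (hypothetical paths; --delete removes files from the destination that disappeared from the source between passes):

host1> rsync -aP /data/ user@host2:/path/to/new/dir/             # pre-load while the source is still live
host1> rsync -aP --delete /data/ user@host2:/path/to/new/dir/    # final pass after stopping writers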

P.S. Using -v (verbose) can slow down a transfer of many files.