What's the best way to perform a parallel copy on Unix?
I routinely have to copy the contents of a folder on a network file system to my local computer. There are many files (1000s) in the remote folder that are all relatively small, but due to network overhead a regular copy

cp remote_folder/* ~/local_folder/

takes a very long time (10 mins).
I believe it's because the files are copied sequentially: each file waits until the previous one has finished before its copy begins.
What's the simplest way to increase the speed of this copy? (I assume it is to perform the copy in parallel.)
Zipping the files before copying will not necessarily speed things up, because they may all be stored on different disks on different servers.
As long as you limit the number of copy commands you're running, you could use a script like the one posted by Scrutinizer:
#!/bin/bash
# Usage: ./parallel_copy.sh SOURCEDIR TARGETDIR
# Assumes filenames contain no whitespace.
SOURCEDIR="$1"
TARGETDIR="$2"
MAX_PARALLEL=4
# Split the file list into MAX_PARALLEL roughly equal sets.
nroffiles=$(ls "$SOURCEDIR" | wc -l)
setsize=$(( nroffiles/MAX_PARALLEL + 1 ))
# xargs -n packs $setsize paths onto each line; each line becomes one
# background cp job. Process substitution keeps the loop in the current
# shell, so the final wait actually sees the background jobs.
while read workset; do
    cp -p $workset "$TARGETDIR" &   # unquoted on purpose: split into individual paths
done < <(ls -1 "$SOURCEDIR"/* | xargs -n "$setsize")
wait
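For example, assuming you save the script above as parallel_copy.sh (a name chosen here just for illustration) and make it executable, copying the folder from the question would look like this:

chmod +x parallel_copy.sh
./parallel_copy.sh remote_folder ~/local_folder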
If you have GNU Parallel (http://www.gnu.org/software/parallel/) installed, you can do this:
parallel -j10 cp {} destdir/ ::: *
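If the source files live in another directory, as in the question, you can also feed parallel a file list on stdin instead of using :::. A minimal sketch using the folder names from the question (find emits one path per line, and parallel runs one cp per line):

find remote_folder/ -maxdepth 1 -type f | parallel -j10 cp {} ~/local_folder/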
You can install GNU Parallel simply by:
$ (wget -O - pi.dk/3 || lynx -source pi.dk/3 || curl pi.dk/3/ || \
fetch -o - http://pi.dk/3 ) > install.sh
$ sha1sum install.sh | grep 883c667e01eed62f975ad28b6d50e22a
12345678 883c667e 01eed62f 975ad28b 6d50e22a
$ md5sum install.sh | grep cc21b4c943fd03e93ae1ae49e28573c0
cc21b4c9 43fd03e9 3ae1ae49 e28573c0
$ sha512sum install.sh | grep da012ec113b49a54e705f86d51e784ebced224fdf
79945d9d 250b42a4 2067bb00 99da012e c113b49a 54e705f8 6d51e784 ebced224
fdff3f52 ca588d64 e75f6033 61bd543f d631f592 2f87ceb2 ab034149 6df84a35
$ bash install.sh
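Alternatively, on many distributions GNU Parallel is available from the package manager, e.g. on Debian/Ubuntu (the packaged version may lag behind the latest release):

$ sudo apt-get install parallel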
Explanation of commands, arguments, and options
- parallel --- Fairly obvious; a call to the parallel command
- "build and execute shell command lines from standard input in parallel" (man parallel)
- -j10 ------- Run 10 jobs in parallel
- "Number of jobslots on each machine. Run up to N jobs in parallel. 0 means as many as possible. Default is 100% which will run one job per CPU on each machine." (man parallel)
- cp -------- The command to run in parallel
- {} --------- Replace received values here, i.e. the source_file argument to cp (see the short demo after this list)
- "This replacement string will be replaced by a full line read from the input source. The input source is normally stdin (standard input), but can also be given with -a, :::, or ::::. The replacement string {} can be changed with -I. If the command line contains no replacement strings then {} will be appended to the command line." (man parallel)
- destdir/ - The destination directory
- ::: -------- Tell parallel to use the following arguments as input instead of stdin
- "Use arguments from the command line as input source instead of stdin (standard input). Unlike other options for GNU parallel ::: is placed after the command and before the arguments." (man parallel)
- * ---------- All files in the current directory
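To see the {} substitution in isolation, here is a tiny (hypothetical) demo that prints one line per input value instead of copying anything; output order may vary because the jobs run in parallel:

$ parallel echo copying {} ::: a.txt b.txt c.txt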
Learn more
Your command line will love you for it.
See more examples: http://www.gnu.org/software/parallel/man.html
Watch the intro videos: https://www.youtube.com/playlist?list=PL284C9FF2488BC6D1
Walk through the tutorial: http://www.gnu.org/software/parallel/parallel_tutorial.html
Get the book 'GNU Parallel 2018' at http://www.lulu.com/shop/ole-tange/gnu-parallel-2018/paperback/product-23558902.html or download it at: https://doi.org/10.5281/zenodo.1146014 Read at least chapter 1+2. It should take you less than 20 minutes.
Print the cheat sheet: https://www.gnu.org/software/parallel/parallel_cheat.pdf
Sign up for the email list to get support: https://lists.gnu.org/mailman/listinfo/parallel
Honestly, the best tool is Google's gsutil. It handles parallel copies with directory recursion, which most of the other methods I've seen can't do. The docs don't specifically mention local-filesystem-to-local-filesystem copies, but it works like a charm.
It's another binary to install, but probably one you already run, considering all of the cloud-service adoption nowadays.
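A minimal sketch with the folder names from the question, assuming gsutil is installed (-m enables parallel, multi-threaded operation; -r recurses into subdirectories):

gsutil -m cp -r remote_folder/* ~/local_folder/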