Netcat file transfer over a private backend 10G network

I have to copy a lot of files (60000+) between two servers, which is approx 5TB of data.

I've tried mounting the backup server as a network share and copying the files that way, but I couldn't get the write permissions set up correctly.

So I thought of bonding the remaining 3 NIC ports, connecting them with crossover cables, and using cp/scp to copy everything. However, I have no experience with bonding NICs or transferring data that way.
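
From what I've read, setting up the bond with iproute2 would look roughly like this (untested on my end; eth1–eth3 and the address are just placeholders for the spare ports):

# Rough sketch of a round-robin bond using iproute2 (placeholder interface names and address)
ip link add bond0 type bond mode balance-rr
ip link set eth1 down && ip link set eth1 master bond0
ip link set eth2 down && ip link set eth2 master bond0
ip link set eth3 down && ip link set eth3 master bond0
ip addr add 10.0.0.1/24 dev bond0
ip link set bond0 up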

Would it be faster? Can anyone relate or give me some advice on better solutions? Would be much appreciated.


When copying large numbers of files I usually use these commands:

Target:

nc -q 1 -l 1234 | pv -pterb -s <filesize>G | tar xvf -

Source:

tar cvf - <DIR> | nc -q 1 <targetip> 1234

This streams all the data directly from source to target over port 1234, without much protocol overhead. In my experience this has been the fastest way to copy data on a local network. In addition, I put the pv command on the target so I can get a rough overview of how far along the transfer is.
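
One way to fill in <filesize> for pv's -s option is to take a rough total on the source beforehand, for example:

# Approximate total size in GB, to plug into pv -s <filesize>G (run on the source)
du -s --block-size=1G <DIR>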

For a more advanced, though possibly slower, transfer I'd recommend using rsync.
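
A typical invocation for that case (the paths and host here are just placeholders) would be something along the lines of:

# Archive mode, human-readable output, keep partial files so an interrupted run can resume
rsync -avh --partial --progress <DIR>/ user@<targetip>:/path/to/target/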