How should I copy my VM templates between vSphere datacenters?

How about using ovftool to copy the templates directly between hosts?

I have used this for VMs before, and it works pretty well. I'm not sure whether it also works for templates, but if not, you can temporarily convert the templates to VMs for the copy.

Instructions, with an example, are here.

You could also use ovftool to convert your templates to .ovf packages, which should be very compact, and then transfer the packages between datacenters with BITS or FTP or SCP or whatever protocol you want.
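For reference, both approaches look something like the commands below. This is only a sketch — the hostnames, template name, and export path are made up, and the exact `vi://` locator syntax depends on whether you're pointing at a standalone host or vCenter (check `ovftool --help locators`):

```
# Direct copy between hosts (convert the template to a VM first if ovftool
# won't take a template as the source):
ovftool vi://root@esxi-hq.example.com/MyTemplate vi://root@esxi-branch.example.com/

# Or export to a compact OVF/OVA package, then ship it with BITS/FTP/SCP:
ovftool vi://root@esxi-hq.example.com/MyTemplate C:\export\MyTemplate.ova
```

The export route has the nice side effect that the OVA only contains the blocks that actually exist, so you'd be shipping ~40 GiB (or less), not the provisioned size.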


Options:

The way I see it, I have three possible approaches, though I dearly hope I'm missing a better one that someone here can point me at. (Ideally one that has me only moving the 40 GiB of actual data, and in a resumable, "background" or speed-throttled method.)

  1. Copy the files between datastores through the vSphere client.
    • Advantage: Moving only ~40 GiB, not ~100 GiB.
    • Disadvantage: Everything else - not resumable, not background/speed-throttled, interface SUCKS.

  2. Copy the files between Windows guests using BITS
    • Advantage: Resumable, background transfer.
    • Disadvantage: Moving ~60 GiB of data that doesn't really exist.
    • Bonus: Uses PowerShell. <3
    • Double Secret Probation Bonus: PowerShell Remoting makes it possible to do this in one single command.
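That one-command version would look roughly like this — a sketch only, with hypothetical computer names, share, and paths. One caveat worth knowing: an asynchronous BITS job tied to a remoting session can get suspended when the session closes, so the synchronous default is actually safer here:

```powershell
# Kick off the BITS download on the remote spoke guest, in one command
Invoke-Command -ComputerName spoke-guest -ScriptBlock {
    Import-Module BitsTransfer
    Start-BitsTransfer -Source '\\hq-guest\templates\MyTemplate.zip' `
                       -Destination 'D:\Templates\' `
                       -Priority Low    # background priority; BITS throttles around foreground traffic
}
```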

  3. Copy the files between ESXi hosts via SCP
    • Advantage: Speed-throttled and potentially resumable.
    • Disadvantage: Moving ~60 GiB of data that doesn't really exist. Not background transfer.
    • Bonus: Neck beard. Extra neck-beard for resumability.
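With SSH enabled on both hosts, the speed-throttling part is just scp's `-l` flag (bandwidth cap in Kbit/s). Datastore paths and hostnames below are examples:

```
# From an SSH session on the source ESXi host; -l 8192 caps at ~1 MB/s
scp -l 8192 /vmfs/volumes/datastore1/MyTemplate/* \
    root@esxi-branch.example.com:/vmfs/volumes/datastore1/MyTemplate/
```

Note that plain scp won't resume a partial file — that's where the extra neck-beard comes in, since stock ESXi doesn't ship a resumable tool like rsync.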

  4. Better option suggested on Server Fault.
    • Advantage: Resumable, speed-throttled background transfer that only moves ~40 GiB of data that exists.
    • Disadvantage: Awarding a bounty costs rep.
    • Bonus: Learn something new, justify playing ServerFault at work.

Here's a somewhat interesting idea for you. It won't help with your initial seeding, but I wonder if using something like Crashplan's free product would help you with your templates.

https://www.code42.com/store/

It does dedupe and block-level differentials, so you could install it on one local server there at HQ as the "seeder", and on each spoke server (in a VM, I guess) as a "receiver". Set up the backups to include only the folder on the HQ server where the templates will be stored. It can also back up to multiple destinations (such as each "spoke"): https://support.code42.com/CrashPlan/Latest/Getting_Started/Choosing_Destinations

The steps (after setting up the Crashplan app on each side) would work something like:

  1. Copy the templates from the datastore(s) into the directory on the "seed" server that Crashplan is monitoring. On a gigabit network this might take a little time, but it shouldn't be too bad.
  2. Crashplan should monitor and start backing up the files to the spokes/receivers. This will obviously take quite a while.
  3. After the initial seeding/backups, when a template changes, copy it from the actual datastore(s) into the monitored directory on the "seed" server, overwriting the original copy. Crashplan will then dedupe and replicate only the block-level changes across to the spokes.
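Step 1 (and the later refreshes) could be scripted from PowerCLI rather than clicked through the datastore browser — a sketch, with placeholder server, datastore, and folder names:

```powershell
# Pull the template files out of the datastore into Crashplan's watched folder
Connect-VIServer vcenter-hq.example.com
Copy-DatastoreItem -Item 'vmstore:\HQ-Datacenter\datastore1\MyTemplate\*' `
                   -Destination 'D:\CrashplanSeed\MyTemplate\' -Force
```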

Just an idea...might be an interesting road to venture down and see if it works as a poor man's dedupe/block level replication for just these files.


I've done this type of move a number of ways, but given what you've described...

FedEx or UPS, with a twist...

I know that the servers in use are HP ProLiant and Dell PowerEdge servers. VMware does not have good support for removable devices (e.g. USB) as datastore targets. However, using a single-drive RAID 0 logical drive (in HP-speak) at the main site can work. You can add and remove locally-attached disks on HP and Dell systems and use that as a means to transport datastores.

Being templates, you can move/copy them to your local disk via vCenter. Ship the disks. Insert into the receiving standalone server. The array and datastore will be recognized via a storage system rescan. Copy data. Profit.

I've also used this as a means to seed copies for vSphere replication, as 24 hours of deltas is a lot easier to manage than multiple full syncs.


This is a method I use fairly often for this kind of scenario. It seems counter-intuitive because you are uploading files from inside a VM stored on the datastore, to the datastore itself. However, this gives you a lot more control over how the transfer is accomplished.

  • Use WinRAR or 7Zip to break your template into 1GB-2GB chunks.
  • Create a VM on the ESXi server at each remote site. Minimal resources are needed; this is just a staging area.
  • Attach a VMDK to each of these VMs that's big enough to hold the data you're transferring.
  • Install an OS and transfer tool of your choice (I use an SFTP server for this).
  • Upload the RAR'd template to the staging VM.
  • Uncompress the RAR'd template.
  • Use the vSphere client or web UI to upload the template from the staging VM to the ESXi datastore (this will be a FAST transfer, since it stays local to the site).
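The split-and-verify part of those steps might look like this on the HQ side (the 7-Zip install path, staging paths, and 1 GB volume size are just examples; `Get-FileHash` needs PowerShell 4 or later):

```powershell
# Split the template into 1 GB volumes for upload
& 'C:\Program Files\7-Zip\7z.exe' a -v1g 'D:\Stage\MyTemplate.7z' 'D:\Templates\MyTemplate\*'

# Hash each piece so the receiving side can spot a corrupted chunk
Get-ChildItem 'D:\Stage\MyTemplate.7z.*' | Get-FileHash -Algorithm SHA256 |
    Export-Csv 'D:\Stage\MyTemplate.hashes.csv' -NoTypeInformation
```

On the staging VM, re-check the hashes, re-pull only the pieces that don't match, then extract with `7z x MyTemplate.7z.001` (7-Zip finds the remaining volumes automatically).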

Pros:

By breaking the template into smaller pieces you reduce the risk of data corruption during transfer. (If a file gets corrupted, you only need to re-upload that piece of the RAR, rather than the entire 40GB file.)

You only transfer 40GB (probably less, as RAR'ing will compress it further).

You get your pick of transfer utilities as you're doing the transfer inside the OS of your choice.

Cons:

You have to create a staging VM. I make this easier by keeping a pre-created template that is <1GB, with just a bare OS install + SFTP server.

Compressing/decompressing a 40GB template will take ~4-6 hours depending on your CPU resources.