virsh save vm_name memdump followed by virsh restore memdump restores a (running) VM all right.

However, the VM is shut off after virsh save. I'm writing a "live" backup and restore script for KVM VMs, so in the backup part I obviously need the VM to keep running after the backup. It's not a problem to do virsh restore memdump right after the backup, but it strikes me as essentially unnecessary - I "should" be able to pause the VM, save its memory to a file and then simply resume/unsuspend it.

This is not really a problem with VMs that have little memory, but if a VM has a sizable working memory, it prolongs the backup unnecessarily.

Unfortunately, the VM is shut off even if I do virsh suspend first, before virsh save.

Is there a way to do this? (i.e. suspend, save, unsuspend)
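In other words, I'd like something along these lines to work (myvm and memdump are just placeholder names):

virsh suspend myvm       # pause the guest
virsh save myvm memdump  # saves the memory state, but also shuts the domain off
virsh resume myvm        # too late - the domain is no longer running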


First, I totally agree with @dyasny: it is hard to find a reasonable use case for a 'full VM state' backup (i.e. one that includes memory).

But if you really want 'virsh save vm_name memdump' without destroying the VM, you can try:

virsh snapshot-create-as ${domain} ${fake_snap} 'save vm while keeping it running' \
    --no-metadata --atomic --live \
    --memspec ${path_to_mem_dump_file},snapshot=external
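For what it's worth, the memory state file written by --memspec appears to be in the same format as a 'virsh save' image, so - assuming the disks are first rolled back to their snapshot-time state - it should be restorable with a plain virsh restore:

# ASSUMPTION: only safe if the disks still match their state at snapshot time
virsh restore ${path_to_mem_dump_file}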

Good luck :)

======== Update (too long to post as a reply) ========

Oh, maybe this is my loose wording: 'full VM state' == mem_state + disk_state, while 'mem_state' == 'vm physical memory' + 'vm cpu registers' + 'vm device state in the hypervisor'.

So it is safe to 'virsh save' and 'virsh restore', since there is nothing to lose; 'save/restore' works just like a laptop sleeping - usually your applications continue running after you 'restore' a VM.

It is a disaster if 'mem_state' and 'disk_state' get out of sync; that is why 'virsh save' enforces a 'destroy' after saving the memory.

My 'virsh save without destroy' is actually a 'full VM backup'; the disk snapshot is hidden inside the original qcow2, so you just see a 'mem_state'. :)


If the VM has lots of memory, saving it will in any case mean a large amount of time spent writing out the memory state.

If there is no hard requirement to back up the full VM state (and usually there isn't - it is redundant, you'll get errors when you restore because of time differences, and it might even lead to a crash), back up at the disk level instead.

Normally, VMs are backed up as follows:

  1. Quiesce the VM's filesystems
  2. Take a live snapshot of the VM's disk(s)
  3. Back up the disks and the VM's configuration (virsh dumpxml VM)
  4. Live-merge the disks so the snapshot is gone (see the sketch below)
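A minimal sketch of those four steps, assuming a domain named myvm with a single disk on target vda, illustrative paths, and the qemu-guest-agent running inside the guest (required for --quiesce):

# 1+2. Quiesce the filesystems and take a live, external, disk-only snapshot
#      (--quiesce freezes/thaws the filesystems through the guest agent)
virsh snapshot-create-as myvm backup-snap --disk-only --atomic --quiesce \
    --no-metadata --diskspec vda,file=/var/lib/libvirt/images/myvm.overlay.qcow2

# 3. New writes now go to the overlay, so the base image is stable;
#    copy it away together with the domain configuration
cp /var/lib/libvirt/images/myvm.qcow2 /backup/myvm.qcow2
virsh dumpxml myvm > /backup/myvm.xml

# 4. Live-merge the overlay back into the base image, pivot the domain
#    back to the base, then drop the now-unused overlay
virsh blockcommit myvm vda --active --pivot
rm /var/lib/libvirt/images/myvm.overlay.qcow2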

Now, the only part that might be tricky with KVM is the last one. It is kind of supported using blockpull in most current distros, but that will not merge the snapshot into the base image - it will do the opposite: pull the data from the base into the snapshot, so you can remove the base. The better command is blockcommit, which pushes the changed bits from the snapshot down into the base image; however, it is only available in the very bleeding-edge distributions. I hope it will make it into RHEL 7.1, we'll see.
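Side by side, with the same hypothetical myvm/vda as above:

# blockpull: copy the base image's data up into the overlay,
# after which the base is unused and can be removed
virsh blockpull myvm vda

# blockcommit: push the overlay's changes down into the base image
# and pivot the domain back to the base (newer libvirt/qemu only)
virsh blockcommit myvm vda --active --pivot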