ZFS inside a virtual machine

Basically any modern hypervisor (VMware, Xen, KVM, Hyper-V, even VirtualBox) supports barrier passing: when a VM explicitly flushes something to disk (by issuing a barrier/FUA request), the hypervisor passes the flush down to the host, so the writes the guest OS considers durable really do reach stable storage. In other words, no corruption is expected for important/durable writes (such as the ones the filesystem itself uses to update its metadata).

While most hypervisors can be configured to ignore flushes, doing so jeopardizes any filesystem: XFS, EXT4, etc. are exposed to serious corruption just as much as ZFS.
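As a concrete example, QEMU/KVM exposes this trade-off through its per-disk cache modes (the image path below is hypothetical):

    # cache=none honors guest flushes: the host page cache is bypassed and
    # every guest barrier/FUA is passed down to the physical disk.
    qemu-system-x86_64 -m 4096 \
        -drive file=/var/lib/images/guest.qcow2,format=qcow2,cache=none

    # cache=unsafe ignores guest flushes entirely: fast, but a host crash
    # or power loss can corrupt ANY guest filesystem (ZFS, XFS, EXT4 alike).
    qemu-system-x86_64 -m 4096 \
        -drive file=/var/lib/images/guest.qcow2,format=qcow2,cache=unsafe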

Back to the main question: it is perfectly safe to use ZFS inside a guest OS, and I have first-hand experience with similar setups. It enables the guest to use advanced features such as compression, snapshots, and send/receive. However, it can lead to somewhat lower guest performance (ZFS is not engineered to be a benchmark-winning filesystem). Moreover, since many SANs implement the very same features at the disk-image level, you should evaluate whether the performance impact of double copy-on-write (CoW on the storage side plus CoW in the guest) is worth the additional flexibility at the guest level.
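For illustration, here is roughly what that looks like from inside the guest (the device name, dataset names, and backup host are hypothetical):

    # Create a pool on the guest's virtual disk, with compression enabled
    # for all datasets created under it.
    zpool create -O compression=lz4 tank /dev/vdb
    zfs create tank/data

    # Snapshot and replicate to another machine, independently of whatever
    # the hypervisor or SAN does with the underlying disk image.
    zfs snapshot tank/data@nightly
    zfs send tank/data@nightly | ssh backuphost zfs recv backup/data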

For the reasons above, I generally use ZFS at the disk-image/hypervisor level, while using XFS or EXT4 inside the virtual machines themselves. However, in scenarios where I have no visibility into the underlying SAN/storage (and its snapshot/compression/replication policies), I sometimes use ZFS at the guest level.
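A sketch of that host-side arrangement, assuming a ZFS pool named tank on the hypervisor (names and sizes are hypothetical):

    # One thin-provisioned, compressed zvol per guest.
    zfs create -s -V 50G -o compression=lz4 tank/vms/web01

    # Hand it to the VM as a raw block device that still honors flushes.
    qemu-system-x86_64 -m 4096 \
        -drive file=/dev/zvol/tank/vms/web01,format=raw,cache=none

    # Compression, snapshots, and replication of the whole guest disk then
    # happen on the host; the guest just sees a plain disk for XFS/EXT4.
    zfs snapshot tank/vms/web01@pre-upgrade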

In these cases, the added features more than compensate for the performance impact versus, for example, a plain XFS setup, and I have had no stability/durability problems at all.

Side note: VT-d is only useful if you plan to pass raw disks (or other hardware devices) through to the guest itself. If you use file- or volume-based virtual disks, VT-d is not involved.
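To make the distinction concrete with QEMU (the PCI address and image path are hypothetical, and device passthrough additionally requires the host IOMMU to be enabled):

    # VT-d/IOMMU in use: the guest gets the whole storage controller,
    # and the raw disks behind it, via VFIO passthrough.
    qemu-system-x86_64 -m 4096 -device vfio-pci,host=0000:03:00.0

    # No VT-d involved: the guest sees an emulated/paravirtual disk
    # backed by a file on the host.
    qemu-system-x86_64 -m 4096 \
        -drive file=/var/lib/images/guest.qcow2,format=qcow2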