Windows Server virtual disk cache settings
In Windows running under Azure or Hyper-V, the disk policy write-cache setting for a virtual disk always appears enabled in the server configuration, regardless of the actual write-cache status on the underlying disk system. There is, however, a second, related setting whose effect is unclear for virtual disks, i.e. when the first setting is checked in the guest but write caching is actually disabled in the virtual disk hardware. The disk settings appear like this:
[x] Enable Write Caching on the device
[ ] Turn off Windows write-cache buffer flushing on the device
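For reference, what the storage stack actually reports for the device can be queried from the guest with the Storage module cmdlets (a rough sketch; on Azure/Hyper-V virtual disks the cmdlet may simply warn that the property cannot be retrieved, which itself illustrates the point):

    # Sketch: ask each physical disk what it reports about its own cache.
    # IsDeviceCacheEnabled / IsPowerProtected may come back empty or with a
    # warning for virtual disks, since there is no real device cache to query.
    Get-PhysicalDisk |
        Get-StorageAdvancedProperty |
        Select-Object FriendlyName, IsDeviceCacheEnabled, IsPowerProtected |
        Format-Table -AutoSize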
Intuitively, I would assume there is no reason to enable the second setting, since write caching is NOT actually on for the device; if this were a physical disk, the setting would be left disabled. But this article suggests that enabling it frees the OS and hardware from doing meaningless work, improving performance with no loss of data integrity, precisely for this case where write caching is enabled in the setting but actually disabled in hardware:
...since a virtual hard disk isn't really a disk at all, that setting has no meaning as far as virtual disks are concerned. But the second setting is different and does have meaning as it controls the cache flush on/off settings for the disk. When you select the second setting, cache flushes will essentially pretend to succeed--at least at the level of the software stack. ... when you select this setting in the guest OS for a virtual hard disk in a virtual machine, you might see some performance improvement for applications running in the virtual machine. But always remember that it's the host's disk cache settings that are the important ones as far as data integrity are concerned.
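(For context, the "cache flushes" the article refers to are the explicit flush-to-disk requests an application issues; a rough PowerShell/.NET illustration, with a made-up path:)

    # Hypothetical example: write some data and explicitly ask for a flush
    # through to the device. FileStream.Flush($true) maps to FlushFileBuffers;
    # with "Turn off Windows write-cache buffer flushing" checked, the article
    # says this request is acknowledged without reaching real hardware.
    $fs = [System.IO.FileStream]::new('C:\temp\flushtest.dat', [System.IO.FileMode]::Create)
    $data = [byte[]]::new(4096)
    $fs.Write($data, 0, $data.Length)
    $fs.Flush($true)    # request flush-to-disk, not just a buffer flush
    $fs.Dispose()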
Can anyone confirm whether this claim is true and whether enabling the setting is safe with respect to data integrity?
Solution 1:
I think those guest settings have no real effect if your data disks were originally configured with Read/None host caching. This article states that it is really a matter of host caching, which can be modified through the Service Management APIs or PowerShell commands.
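For example, with the current Az PowerShell module (rather than the older Service Management cmdlets the article had in mind), the host caching of a data disk can be changed roughly like this; the resource group, VM, and disk names are placeholders:

    # Sketch: set host-level caching (None / ReadOnly / ReadWrite) on an Azure data disk.
    $vm = Get-AzVM -ResourceGroupName 'myRG' -Name 'myVM'
    Set-AzVMDataDisk -VM $vm -Name 'datadisk1' -Caching ReadOnly
    Update-AzVM -ResourceGroupName 'myRG' -VM $vm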
Solution 2:
I would only enable the option "Turn off Windows write-cache buffer flushing on the device" on a VM if you have redundancy at the power/UPS or storage-system level. If you enable the option, you have to accept possible data loss in case of a power outage or hardware failure. For on-premises VMs, consider S2D, StarWind, or HPE VSA as software-defined storage that provides node-level redundancy, so you can safely use the write-caching feature.
In that case, the option "Turn off Windows write-cache buffer flushing on the device" boosts overall VM performance: RAM serves the hot data, and data is flushed to the device only when RAM fills up.
I would suggest disabling write caching if you are running a standalone Hyper-V host with its PSU connected to a single power line.
These recommendations apply only to on-premises virtualized hosts. Cloud vendors have their own mechanisms to provide redundancy for their instances; therefore, I use the write-caching feature only for my Amazon VMs.
EDIT: the point of the feature is to reduce the latency of I/O operations (a measurement sketch follows the list below).
- With the write-back cache disabled, I/O requests go directly to the underlying storage, which gives the highest latency.
- With "Enable Write Caching on the device" enabled, writes go to RAM first, and Windows flushes the data to storage once it becomes "cold", which reduces latency.
- With both write-back options checked, latency is lowest because both hot and cold data stay in RAM; data is flushed to storage only when RAM fills up.
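If you want to see the difference on your own storage rather than take my word for it, Microsoft's DiskSpd tool can show it; a rough sketch, assuming diskspd.exe is available at a path of your choosing and D:\ is a test volume you can write to:

    # Sketch: compare 4K write latency with and without caching.
    # -Sh disables software caching and requests hardware write-through (worst case);
    # leaving -S off keeps buffered, cached I/O (best case). -L reports latency stats.
    $diskspd = 'C:\Tools\diskspd.exe'   # placeholder path
    & $diskspd -c1G -d30 -w100 -b4K -o4 -t2 -L -Sh D:\cachetest.dat
    & $diskspd -c1G -d30 -w100 -b4K -o4 -t2 -L     D:\cachetest.dat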