Which scheduler to change on LVM to benefit virtual machines
Solution 1:
So, the answer turned out to be simple: the underlying device. Newer kernels show only 'none' in /sys/block/*/queue/scheduler
when there is no scheduler to configure on that device.
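To see which layer actually exposes a scheduler, you can read the sysfs files directly. A small sketch (the helper name is mine, not a standard tool); the active scheduler is shown in brackets, and a bare 'none' means there is nothing to tune on that device:

```shell
#!/bin/sh
# Print the scheduler line for every block device that exposes one.
show_schedulers() {
  for f in "$@"; do
    [ -r "$f" ] && printf '%s: %s\n' "$f" "$(cat "$f")"
  done
  return 0
}

show_schedulers /sys/block/*/queue/scheduler
```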
However, for a reason unknown to me, the devices on this server are created as multipath devices, so my earlier fiddling with the scheduler on /dev/sd[bc] never did anything. I have now set dm-0 and dm-1 to deadline with read_expire=100 and write_expire=1500 (much more stringent than the defaults), and the results seem very good.
This graph shows the effect on disk latency in one virtual machine, caused by another virtual machine running an hourly task:
You can clearly see the moment where I changed the scheduler parameters.
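A minimal sketch of the commands involved, assuming the device names dm-0/dm-1 from above (they are specific to this server; on blk-mq kernels the scheduler is named mq-deadline instead). The SYSFS variable is only there so the logic can be exercised outside a real /sys:

```shell
#!/bin/sh
# Switch a device-mapper device to deadline and tighten its expiries.
SYSFS=${SYSFS:-/sys}

set_deadline() {
  q=$SYSFS/block/$1/queue
  echo deadline > "$q/scheduler"
  echo 100  > "$q/iosched/read_expire"   # ms; the deadline default is 500
  echo 1500 > "$q/iosched/write_expire"  # ms; the deadline default is 5000
}

for dev in dm-0 dm-1; do
  if [ -d "$SYSFS/block/$dev/queue/iosched" ]; then
    set_deadline "$dev" || echo "could not tune $dev (root required?)"
  else
    echo "skipping $dev: no deadline tunables exposed"
  fi
done
```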
Solution 2:
Hmm, Debian...
Well, I can share how Redhat approaches this with their tuned framework. There are profiles for "virtual-host" and "virtual-guest". The profiles are described in detail here, and the following excerpt shows which devices are affected: the schedulers of the "dm-*" and "sdX" devices are changed.
# This is the I/O scheduler ktune will use. This will *not* override anything
# explicitly set on the kernel command line, nor will it change the scheduler
# for any block device that is using a non-default scheduler when ktune starts.
# You should probably leave this on "deadline", but "as", "cfq", and "noop" are
# also legal values. Comment this out to prevent ktune from changing I/O
# scheduler settings.
ELEVATOR="deadline"
# These are the devices, that should be tuned with the ELEVATOR
ELEVATOR_TUNE_DEVS="/sys/block/{sd,cciss,dm-,vd,zd}*/queue/scheduler"
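For illustration, a simplified bash sketch of what ktune effectively does with these two settings (the function name is mine, not ktune's actual code, and the real ktune also skips devices already running a non-default scheduler, as the comment above notes):

```shell
#!/bin/bash
# apply_elevator: write the chosen elevator to every matching scheduler file.
apply_elevator() {
  local elev=$1; shift
  local f
  for f in "$@"; do
    # Unmatched glob patterns arrive literally; -w filters them out
    # along with any read-only files.
    [ -w "$f" ] || continue
    echo "$elev" > "$f"
  done
  return 0
}

ELEVATOR="deadline"
# bash expands the {sd,cciss,dm-,vd,zd} braces before globbing:
apply_elevator "$ELEVATOR" /sys/block/{sd,cciss,dm-,vd,zd}*/queue/scheduler
```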
Also see:
CentOS Tuned Equivalent For Debian and Understanding RedHat's recommended tuned profiles