What does it mean when Linux has no I/O scheduler
I have some virtual machines running the Ubuntu 14.04.1 LTS cloud image. I wanted to see the I/O performance of different I/O schedulers on the VM, so I went to /sys/block/<drive>/queue/scheduler on the guest OS to change the I/O scheduler. Usually there should be cfq, deadline, or noop to choose from, but what I saw was none. Does this mean that Canonical has removed the I/O scheduler from the cloud image, or is the scheduler none here just the noop scheduler renamed? And what happens if we don't have an I/O scheduler in the system? Are all I/O requests sent directly to the host in FIFO order?
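For reference, this is how the scheduler can be inspected from inside the guest (a minimal sketch; vda is assumed as the virtio disk name in the cloud VM):

# List the available schedulers; the active one is shown in brackets.
cat /sys/block/vda/queue/scheduler
# On a legacy-block device this prints e.g.: noop [deadline] cfq
# On this VM it prints only:                 none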
Thanks for shedding some light!
Solution 1:
From the Debian Wiki:

Low-Latency IO-Scheduler

(This step is not necessary for SSDs using the NVMe protocol instead of SATA, which bypass the traditional I/O scheduler and use the blk-mq module instead.) The default I/O scheduler queues data to minimize seeks on HDDs, which is not necessary for SSDs. Thus, use the "deadline" scheduler, which just ensures that bulk transactions won't slow down small transactions: install sysfsutils and

echo "block/sdX/queue/scheduler = deadline" >> /etc/sysfs.conf

(adjust sdX to match your SSD), then reboot, or apply the change immediately with

echo deadline > /sys/block/sdX/queue/scheduler
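As a quick sanity check after switching (a minimal sketch, assuming the disk is still on the legacy block layer; replace sdX with your device):

# Change the scheduler at runtime; this is lost on reboot unless
# persisted via /etc/sysfs.conf as shown above.
echo deadline > /sys/block/sdX/queue/scheduler
# The active scheduler is shown in brackets:
cat /sys/block/sdX/queue/scheduler
# Expected output: noop [deadline] cfq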
So, the answer is: none is NOT an alias for noop. none means "the scheduler is not used".
Solution 2:
It seems that on kernels >= 3.13, none is no longer an alias for noop. It is shown when the blk-mq I/O framework is in use; this means a complete bypass of the old schedulers, as blk-mq has (right now) no schedulers at all to select from. On earlier kernels, none really is a poorly-documented alias for noop. See here for more details.
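A rough way to confirm that blk-mq is what is behind the none you are seeing (a sketch under assumptions; vda is a stand-in for your virtio disk name):

# blk-mq devices expose per-hardware-queue directories under 'mq';
# if this listing succeeds, the device bypasses the legacy schedulers.
ls /sys/block/vda/mq
# ...and the scheduler file then reports only:
cat /sys/block/vda/queue/scheduler    # -> none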
Solution 3:
None is not an alias for noop.
None is displayed because no scheduler is in use. SSDs using the NVMe protocol instead of SATA bypass the traditional I/O scheduler.
Solution 4:
Guest VMs have virtual I/O devices provided by the hypervisor. The actual I/O device scheduling is therefore performed by the hypervisor kernel, and guests pass all device I/O directly to the hypervisor without any scheduling.
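Putting that into practice, the scheduler is a host-side knob; a minimal sketch (the device names sda on the host and vda in the guest are assumptions) looks like this:

# On the hypervisor: pick a real scheduler for the physical backing device.
echo deadline > /sys/block/sda/queue/scheduler
# In the guest there is nothing useful to tune; on older guest kernels that
# still offer the legacy schedulers, noop gives the same FIFO pass-through.
echo noop > /sys/block/vda/queue/scheduler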