Is there really no asynchronous block I/O on Linux?
(2020) If you're using a 5.1 or above Linux kernel you can use the `io_uring` interface for file-like I/O and obtain excellent asynchronous operation. Compared to the existing `libaio`/KAIO interface, `io_uring` has the following advantages:
- Retains asynchronous behaviour when doing buffered I/O (and not just when doing direct I/O)
- Easier to use (especially when using the `liburing` helper library; see the sketch after these comparison lists)
- Can optionally work in a polled manner (but you'll need higher privileges to enable this mode)
- Less bookkeeping space overhead per I/O
- Lower CPU overhead due to fewer userspace/kernel syscall mode switches (a big deal these days due to the impact of Spectre/Meltdown mitigations)
- File descriptors and buffers can be pre-registered to save mapping/unmapping time
- Faster (can achieve higher aggregate throughput; I/Os have lower latency)
- "Linked mode" can express dependencies between I/Os (>=5.3 kernel)
- Can work with socket-based I/O (`recvmsg()`/`sendmsg()` are supported from >=5.3; see messages mentioning the word "support" in io_uring.c's git history)
- Supports attempted cancellation of queued I/O (>=5.5)
- Can request that I/O always be performed from asynchronous context, rather than the default of only falling back to punting I/O to an asynchronous context when the inline submission path triggers blocking (>=5.6 kernel)
- Growing support for performing asynchronous operations beyond `read`/`write` (e.g. `fsync` (>=5.1), `fallocate` (>=5.6), `splice` (>=5.7) and more)
- Higher development momentum
- Doesn't become blocking each time the stars aren't perfectly aligned
Compared to glibc's POSIX AIO, `io_uring` has the following advantages:
- Much faster and more efficient (the lower-overhead benefits from above apply even more here)
- The interface is kernel backed and DOESN'T use a userspace thread pool
- Fewer copies of the data are made when doing buffered I/O
- No wrestling with signals
- Glibc's POSIX AIO can't have more than one I/O in flight on a single file descriptor, whereas `io_uring` most certainly can (the sketch below queues two)!
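As a taste of the points above, here's a minimal `liburing` sketch (the file path and buffer sizes are just illustrative, and error handling is trimmed) that queues two reads against a single file descriptor, submits them with one syscall and reaps both completions:

```c
/* Minimal liburing sketch: two reads in flight on ONE file descriptor.
 * Build with: gcc demo.c -o demo -luring (needs liburing installed and a
 * >=5.1 kernel). Error handling is trimmed for brevity. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/uio.h>
#include <liburing.h>

int main(void)
{
    struct io_uring ring;
    io_uring_queue_init(8, &ring, 0);             /* rings with 8 entries */

    int fd = open("/etc/hostname", O_RDONLY);     /* any readable file */
    static char buf1[4096], buf2[4096];
    struct iovec iov1 = { buf1, sizeof(buf1) };
    struct iovec iov2 = { buf2, sizeof(buf2) };

    /* Queue two reads at different offsets on the same fd... */
    struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
    io_uring_prep_readv(sqe, fd, &iov1, 1, 0);
    sqe = io_uring_get_sqe(&ring);
    io_uring_prep_readv(sqe, fd, &iov2, 1, 4096);

    /* ...submit both with a single syscall... */
    io_uring_submit(&ring);

    /* ...then reap the two completions. */
    for (int i = 0; i < 2; i++) {
        struct io_uring_cqe *cqe;
        io_uring_wait_cqe(&ring, &cqe);
        printf("read returned %d\n", cqe->res);
        io_uring_cqe_seen(&ring, cqe);
    }

    io_uring_queue_exit(&ring);
    return 0;
}
```

(`io_uring_prep_readv()` is used rather than `io_uring_prep_read()` so the sketch stays within what a 5.1 kernel supports.)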
The Efficient IO with io_uring document goes into far more detail as to `io_uring`'s benefits and usage. The What's new with io_uring document describes new features added to `io_uring` since its inception, while The rapid growth of io_uring LWN article describes which features were available in each of the 5.1 - 5.5 kernels, with a forward glance to what was going to be in 5.6 (also see LWN's list of io_uring articles). There's also a "Faster IO through io_uring" videoed presentation (slides) from late 2019 by `io_uring` author Jens Axboe. Finally, the Lord of the io_uring tutorial gives an introduction to `io_uring` usage.

The `io_uring` community can be reached via the io_uring mailing list, and the io_uring mailing list archives show daily traffic at the start of 2021.
Re "support partial I/O in the sense of recv()
vs read()
": a patch went into the 5.3 kernel that will automatically retry io_uring
short reads and a further commit went into the 5.4 kernel that tweaks the behaviour to only automatically take care of short reads when working with "regular" files on requests that haven't set the REQ_F_NOWAIT
flag (it looks like you can request REQ_F_NOWAIT
via IOCB_NOWAIT
or by opening the file with O_NONBLOCK
). Thus you can get recv()
style- "short" I/O behaviour from io_uring
too.
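Illustratively, and with the caveat that this leans on the behaviour just described (i.e. it assumes a >=5.4 kernel and that `O_NONBLOCK` really does imply `REQ_F_NOWAIT`), the only change needed versus the earlier sketch is opening the file with `O_NONBLOCK`, after which a completion may legitimately carry a short count:

```c
/* Variation on the earlier sketch: ask for recv()-style short reads by
 * opening with O_NONBLOCK (assumes a >=5.4 kernel where O_NONBLOCK implies
 * REQ_F_NOWAIT, per the commits above). Error handling is trimmed. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/uio.h>
#include <liburing.h>

int main(void)
{
    struct io_uring ring;
    io_uring_queue_init(4, &ring, 0);

    int fd = open("/etc/hostname", O_RDONLY | O_NONBLOCK);

    static char buf[65536];
    struct iovec iov = { buf, sizeof(buf) };

    struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
    io_uring_prep_readv(sqe, fd, &iov, 1, 0);
    io_uring_submit(&ring);

    struct io_uring_cqe *cqe;
    io_uring_wait_cqe(&ring, &cqe);
    /* cqe->res may be a short count here (or -EAGAIN if the data wasn't
     * immediately available) instead of being automatically retried. */
    printf("asked for %zu, got %d\n", sizeof(buf), cqe->res);
    io_uring_cqe_seen(&ring, cqe);

    io_uring_queue_exit(&ring);
    return 0;
}
```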
Software/projects using io_uring
Though the interface is young (its first incarnation arrived in May 2019), some open-source software is using `io_uring` "in the wild":
- fio (which is also authored by Jens Axboe) has an io_uring ioengine backend (in fact it was introduced back in fio-3.13 from February 2019!). The "Improved Storage Performance Using the New Linux Kernel I/O Interface" SNIA presentation (slides) by two Intel engineers states they were able to get double the IOPS on one workload, and less than half the average latency at a queue depth of 1 on another workload, when comparing the `io_uring` ioengine to the `libaio` ioengine on an Optane device.
- The SPDK project added support for using io_uring (!) for block device access in its v19.04 release (though obviously this isn't the backend you'd typically use SPDK for, other than benchmarking). More recently, they also seem to have added support for using it with sockets in v20.04...
- Ceph committed an io_uring backend in Dec 2019, which was part of its 15.1.0 release. The commit author posted a github comment showing the io_uring backend has some wins and losses versus the libaio backend (in terms of IOPS, bandwidth and latency) depending on the workload.
- RocksDB committed an `io_uring` backend for MultiRead in Dec 2019, which was part of its 6.7.3 release. Jens states `io_uring` helped to dramatically cut latency.
- libev released 4.31 with an initial `io_uring` backend in Dec 2019. While some of the author's original points were addressed in newer kernels, at the time of writing (mid 2021) libev's author has some choice words about `io_uring`'s maturity and is taking a wait-and-see approach before implementing further improvements.
- QEMU committed an io_uring backend in Jan 2020, which was part of the QEMU 5.0 release. In the "io_uring in QEMU: high-performance disk IO for Linux" PDF presentation, Julia Suvorova shows the `io_uring` backend outperforming the `threads` and `aio` backends on one workload of random 16K blocks.
- Samba merged an `io_uring` VFS backend in Feb 2020, which was part of the Samba 4.12 release. In the "Linux io_uring VFS backend" Samba mailing list thread, Stefan Metzmacher (the commit author) says the `io_uring` module was able to push roughly 19% more throughput (compared to some unspecified backend) in a synthetic test. You can also read the "Async VFS Future" PDF presentation by Stefan for some of the motivation behind the changes.
- Facebook's experimental C++ libunifex uses it (but you will also need a 5.6+ kernel)
- The rust folk have been writing wrappers to make `io_uring` more accessible to pure rust. rio is one library talked about a bit, and its author says they achieved higher throughput compared to using sync calls wrapped in threads. The author gave a presentation about his database and library at FOSDEM 2020 which included a section extolling the virtues of `io_uring`.
- The rust library glommio exclusively uses `io_uring`. The author (Glauber Costa) published a document called "Modern storage is plenty fast. It is the APIs that are bad" showing that, with careful tuning, glommio could get over 2.5 times the performance of regular (non-`io_uring`) syscalls when performing sequential I/O on an Optane device.
- Gluster merged an io_uring posix xlator in Oct 2020, which was part of the Gluster 9.0 release. The commit author mentions performance was "not any worse than regular pwrite/pread syscalls".
Software investigating using io_uring
- PostgreSQL developer Andres Freund has been one of the driving forces behind `io_uring` improvements (e.g. the workaround to reduce filesystem inode contention). There is a presentation, "Asynchronous IO for PostgreSQL" (be aware the video is broken until the 5 minute mark) (PDF), motivating the need for PostgreSQL changes and demonstrating some experimental results. He has expressed hope of getting his optional `io_uring` support into PostgreSQL 14, and seems acutely aware of what does and doesn't work, even down to the kernel level. In December 2020, Andres further discusses his PostgreSQL `io_uring` work in the "Blocking I/O, async I/O and io_uring" pgsql-hackers mailing list thread, and mentions that the work in progress can be seen over at https://github.com/anarazel/postgres/tree/aio .
- The Netty project has an incubator repo working on `io_uring` support, which needs a 5.9 kernel
- libuv has a pull request against it adding `io_uring` support, but its progress into the project has been slow
- SwiftNIO added `io_uring` support for eventing (but not syscalls) in April 2020, and the "Linux: full io_uring I/O" issue outlines plans to integrate it further
- The Tokio Rust project has developed a proof of concept, tokio-uring
Linux distribution support for io_uring
- (Late 2020) Ubuntu 18.04's latest hardware enablement (HWE) kernel is 5.4, so the `io_uring` syscalls can be used. This distro doesn't pre-package the `liburing` helper library but you can build it for yourself.
- Ubuntu 20.04's initial kernel is 5.4, so the `io_uring` syscalls can be used. As above, the distro doesn't pre-package `liburing`.
- Fedora 32's initial kernel is 5.6 and it has a packaged `liburing`, so `io_uring` is usable.
- SLES 15 SP2 has a 5.3 kernel, so the `io_uring` syscalls can be used. This distro doesn't pre-package the `liburing` helper library but you can build it for yourself.
- (Mid 2021) RHEL 8's default kernel does not support `io_uring` (a previous version of this answer mistakenly said it did). According to the "Add io_uring support" Red Hat knowledge base article (content is behind a subscriber paywall), backporting of `io_uring` to the default RHEL 8 kernel is in progress.
Hopefully `io_uring` will usher in a better asynchronous file-like I/O story for Linux.

(To add a thin veneer of credibility to this answer: at some point in the past, Jens Axboe (Linux kernel block layer maintainer and inventor of `io_uring`) thought this answer might be worth upvoting :-)
The real answer, which was indirectly pointed to by Peter Teoh, is based on io_setup() and io_submit(). Specifically, the "aio_" functions indicated by Peter are part of the glibc user-level emulation based on threads, which is not an efficient implementation. The real answer is in:
io_submit(2)
io_setup(2)
io_cancel(2)
io_destroy(2)
io_getevents(2)
Note that the man page (http://man7.org/linux/man-pages/man7/aio.7.html), dated 2012-08, says that this implementation has not yet matured to the point where it can replace the glibc user-space emulation:

> this implementation hasn't yet matured to the point where the POSIX AIO implementation can be completely reimplemented using the kernel system calls.

So, according to the latest kernel documentation I can find, Linux does not yet have a mature, kernel-based asynchronous I/O model. And even if I assume that the documented model is actually mature, it still doesn't support partial I/O in the sense of recv() vs read().
As explained in:
http://code.google.com/p/kernel/wiki/AIOUserGuide
and here:
http://www.ibm.com/developerworks/library/l-async/
Linux does provide async block I/O at the kernel level, with APIs as follows:
- `aio_read`: Request an asynchronous read operation
- `aio_error`: Check the status of an asynchronous request
- `aio_return`: Get the return status of a completed asynchronous request
- `aio_write`: Request an asynchronous write operation
- `aio_suspend`: Suspend the calling process until one or more asynchronous requests have completed (or failed)
- `aio_cancel`: Cancel an asynchronous I/O request
- `lio_listio`: Initiate a list of I/O operations
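To make the list concrete, here is a minimal sketch of these functions in use (this is glibc's POSIX AIO from <aio.h>; link with -lrt on older glibc; the file path is just an arbitrary example and error handling is trimmed):

```c
#include <aio.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/etc/hostname", O_RDONLY);   /* any readable file */
    static char buf[4096];

    /* Describe the request in an AIO control block. */
    struct aiocb cb;
    memset(&cb, 0, sizeof(cb));
    cb.aio_fildes = fd;
    cb.aio_buf = buf;
    cb.aio_nbytes = sizeof(buf);
    cb.aio_offset = 0;

    aio_read(&cb);                          /* request an asynchronous read */

    const struct aiocb *list[1] = { &cb };
    aio_suspend(list, 1, NULL);             /* wait until it completes */

    if (aio_error(&cb) == 0)                /* check the request's status */
        printf("read %zd bytes\n", aio_return(&cb));  /* fetch the result */

    close(fd);
    return 0;
}
```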
And if you ask who the users of these APIs are, it is the kernel itself; just a small subset is shown here:
./drivers/net/tun.c (for network tunnelling):
static ssize_t tun_chr_aio_read(struct kiocb *iocb, const struct iovec *iv,
./drivers/usb/gadget/inode.c:
ep_aio_read(struct kiocb *iocb, const struct iovec *iov,
./net/socket.c (general socket programming):
static ssize_t sock_aio_read(struct kiocb *iocb, const struct iovec *iov,
./mm/filemap.c (mmap of files):
generic_file_aio_read(struct kiocb *iocb, const struct iovec *iov,
./mm/shmem.c:
static ssize_t shmem_file_aio_read(struct kiocb *iocb,
etc.
At the userspace level, there is also the io_submit() etc. API (provided by the libaio library rather than by glibc itself), but the following article offers an alternative to using that library:
http://www.fsl.cs.sunysb.edu/~vass/linux-aio.txt
It directly implements the API for functions like io_setup() as direct syscalls (bypassing library dependencies); a kernel mapping via the same "__NR_io_setup" signature should exist. Upon searching the kernel source at
http://lxr.free-electrons.com/source/include/linux/syscalls.h#L474 (the URL is applicable for version 3.13), you are greeted with the direct implementation of these io_*() APIs in the kernel:
474 asmlinkage long sys_io_setup(unsigned nr_reqs, aio_context_t __user *ctx);
475 asmlinkage long sys_io_destroy(aio_context_t ctx);
476 asmlinkage long sys_io_getevents(aio_context_t ctx_id,
477                                  long min_nr,
478                                  long nr,
479                                  struct io_event __user *events,
480                                  struct timespec __user *timeout);
481 asmlinkage long sys_io_submit(aio_context_t, long,
482                               struct iocb __user * __user *);
483 asmlinkage long sys_io_cancel(aio_context_t ctx_id, struct iocb __user *iocb,
484                               struct io_event __user *result);
A later version of glibc might make this use of syscall() to invoke sys_io_setup() unnecessary, but in the meantime you can always make these calls yourself if you are running a newer kernel that provides sys_io_setup() and friends.
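For instance, here is a hedged sketch of doing exactly that, wrapping the raw syscall numbers yourself with syscall(2). The filename is illustrative, error handling is trimmed, and note that kernel AIO is generally only truly asynchronous when the file is opened with O_DIRECT and the buffer is suitably aligned:

```c
/* Sketch: calling the kernel AIO syscalls directly via syscall(2), with no
 * libaio wrapper, as the linked article describes. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/aio_abi.h>   /* aio_context_t, struct iocb, struct io_event */

static long io_setup(unsigned nr, aio_context_t *ctx) {
    return syscall(__NR_io_setup, nr, ctx);
}
static long io_submit(aio_context_t ctx, long n, struct iocb **iocbs) {
    return syscall(__NR_io_submit, ctx, n, iocbs);
}
static long io_getevents(aio_context_t ctx, long min_nr, long nr,
                         struct io_event *events, struct timespec *timeout) {
    return syscall(__NR_io_getevents, ctx, min_nr, nr, events, timeout);
}
static long io_destroy(aio_context_t ctx) {
    return syscall(__NR_io_destroy, ctx);
}

int main(void)
{
    aio_context_t ctx = 0;
    io_setup(8, &ctx);                               /* create the context */

    int fd = open("testfile", O_RDONLY | O_DIRECT);  /* example path */
    void *buf;
    posix_memalign(&buf, 4096, 4096);                /* O_DIRECT alignment */

    struct iocb cb;
    memset(&cb, 0, sizeof(cb));
    cb.aio_fildes = fd;
    cb.aio_lio_opcode = IOCB_CMD_PREAD;
    cb.aio_buf = (unsigned long)buf;     /* the ABI carries a 64-bit field */
    cb.aio_nbytes = 4096;
    cb.aio_offset = 0;

    struct iocb *list[1] = { &cb };
    io_submit(ctx, 1, list);             /* kick off the read */

    struct io_event ev;
    io_getevents(ctx, 1, 1, &ev, NULL);  /* wait for the completion */
    printf("res = %lld\n", (long long)ev.res);

    io_destroy(ctx);
    return 0;
}
```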
Of course, there are other userspace options for asynchronous I/O (e.g. using signals):
http://personal.denison.edu/~bressoud/cs375-s13/supplements/linux_altIO.pdf
or perhaps:
What is the status of POSIX asynchronous I/O (AIO)?
"io_submit" and friends are still not available in glibc (see io_submit manpages), which I have verified in my Ubuntu 14.04, but this API is linux-specific.
Others like libuv, libev, and libevent are also asynchronous APIs:
http://nikhilm.github.io/uvbook/filesystem.html#reading-writing-files
http://software.schmorp.de/pkg/libev.html
http://libevent.org/
All these APIs aim to be portable across BSD, Linux, MacOSX, and even Windows.
In terms of performance I have not seen any numbers, but I suspect libuv may be the fastest, due to its light weight:
https://ghc.haskell.org/trac/ghc/ticket/8400