How much data does Linux read on average boot?

Install one system, boot it, and check the block layer statistics in /sys/block/${DEV}/stat, e.g. /sys/block/sda/stat.
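For example, to dump the raw counters (on a virtio guest such as the one tested below the device is vda rather than sda):

    # All fields are cumulative since the device was registered at boot.
    cat /sys/block/sda/stat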

Quoting from the kernel documentation (Documentation/block/stat.txt):

The stat file consists of a single line of text containing 11 decimal values separated by whitespace. The fields are summarized in the following table, and described in more detail below:

Name            units         description
----            -----         -----------
read I/Os       requests      number of read I/Os processed
read merges     requests      number of read I/Os merged with in-queue I/O
read sectors    sectors       number of sectors read
read ticks      milliseconds  total wait time for read requests
write I/Os      requests      number of write I/Os processed
write merges    requests      number of write I/Os merged with in-queue I/O
write sectors   sectors       number of sectors written
write ticks     milliseconds  total wait time for write requests
in_flight       requests      number of I/Os currently in flight
io_ticks        milliseconds  total time this block device has been active
time_in_queue   milliseconds  total wait time for all requests

read sectors, write sectors

These values count the number of sectors read from or written to this block device. The "sectors" in question are the standard UNIX 512-byte sectors, not any device- or filesystem-specific block size. The counters are incremented when the I/O completes.

Since the counters start at zero when the device is registered during boot, reading them shortly after the system comes up gives you the boot totals. You can use this one-liner to convert the sector counts to bytes more easily:

awk '{printf("read %d bytes, wrote %d bytes\n", $3*512, $7*512)}' /sys/block/vda/stat
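If you are not sure which disk backs the root filesystem, a quick (if rough) variant is to print the totals for every block device; the loop below is just a wrapper around the same counters:

    # Note: this also lists loop, ram and other pseudo devices.
    for f in /sys/block/*/stat; do
        dev=$(basename "$(dirname "$f")")
        awk -v dev="$dev" '{printf("%s: read %d bytes, wrote %d bytes\n", dev, $3*512, $7*512)}' "$f"
    done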

Results for Scientific Linux 6.1 i386

I tested this on a KVM/qemu virtual machine running Scientific Linux 6.1 i386 (which is similar to RHEL). The following services were enabled: acpid, auditd, crond, network, postfix, rsyslog, sshd and udev-post. The swap is on a separate disk, so it's not taken into account.

The stats for 85 boots, taken remotely over SSH a couple of seconds after the login prompt appeared, were (the rows marked >> are MiB values derived from the sector counts):

    Name            Median   Average   Stdev
    -------------   ------   -------   -----
    read I/Os       1920     1920.2    2.6
    read merges     1158     1158.4    1.8
    read sectors    85322    85330.9   31.9
 >> read MiBytes    41.661   41.665    0.016
    read ticks      1165     1177.2    94.1
    write I/Os      33       32.6      1.7
    write merges    64       59.6      7.4
    write sectors   762      715.2     70.9
 >> write MiBytes   0.372    0.349     0.035
    write ticks     51       59.0      17.4
    in_flight       0        0.0       0.0
    io_ticks        895      909.9     57.8
    time_in_queue   1217     1235.2    98.5

The boot time was around 20 seconds.
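For what it's worth, here is a minimal sketch of how numbers like these can be collected over many boots. The host name, credentials and fixed sleep are placeholders, not necessarily what was used here; in practice you would wait for the login prompt rather than a fixed delay:

    # Reboot the guest repeatedly and append its counters to a log.
    # "testvm" and the 60 s delay are assumptions; adjust for your setup.
    for i in $(seq 1 85); do
        ssh root@testvm reboot
        sleep 60            # crude: wait long enough for sshd to come back
        ssh root@testvm cat /sys/block/vda/stat >> boot-stats.txt
    done

    # Example: average read sectors (field 3) over all recorded boots.
    awk '{ s += $3; n++ } END { printf("average read sectors: %.1f\n", s / n) }' boot-stats.txt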


You say in your comments that you're evaluating a netboot / network root environment.

The first thing you must realize is that there is no such thing as "vanilla" - you're not going to run CentOS 5.10 right out of the box with zero changes (if you think you are, you're deluding yourself: NFS Root is already at least Strawberry, verging on Pistachio).

If you want an answer for your specific environment (which is what really counts), you're going to need to set up an NFS server and a client machine, boot the client, and measure:

  1. The transfer (quantity)
  2. The throughput (rate)

Both values will be critically important for performance. You'll probably also want to set up several clients at some point and simulate normal use, to see what kind of steady-state load they put on your NFS server and network when people are using the systems in their everyday work.
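One crude way to get both numbers, measured on the NFS server itself, is to read the kernel's per-interface byte counters around a client boot. eth0 is an assumption here; use whichever interface faces your clients, and note that this only works if the clients are the dominant traffic on that interface during the measurement:

    # Run on the NFS server.
    IFACE=eth0
    tx_before=$(cat /sys/class/net/$IFACE/statistics/tx_bytes)
    start=$(date +%s)

    # ... netboot the client and wait until it reaches the login prompt ...

    tx_after=$(cat /sys/class/net/$IFACE/statistics/tx_bytes)
    end=$(date +%s)
    echo "transfer:   $(( tx_after - tx_before )) bytes"
    echo "throughput: $(( (tx_after - tx_before) / (end - start) )) bytes/s"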

See also: Our series on Capacity Planning - we don't talk specifically about NFS, but the general principles of "Build it, Test it, Stress it" apply.