Performance of Loopback Filesystems

Has anyone done any performance/benchmarking tests on Linux loopback file systems? What has your experience been so far? Is there any serious degradation in performance? How about robustness?

http://freshmeat.net/articles/virtual-filesystem-building-a-linux-filesystem-from-an-ordinary-file


Solution 1:

I've done a bit of benchmarking with write operations in a loopback device. Here's the conclusion:

  • If you sync after every write, then a loopback device performs significantly worse (almost twice as slow).
  • If you allow the disk cache and the IO scheduler to do their job, then there is hardly any difference between using a loopback device and direct disk access (see the example commands after this list).
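
To make the two modes concrete, here is a minimal sketch of the commands being compared (the path is a placeholder, not one of the files used in the benchmarks below):

# sync after every write: oflag=sync makes every write synchronous
dd if=/dev/zero bs=1M count=1000 of=/path/to/loopback-mount/testfile oflag=sync

# no per-write sync: let the page cache and IO scheduler batch the writes, then flush once and time the whole run
time (dd if=/dev/zero bs=1M count=1000 of=/path/to/loopback-mount/testfile; sync)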

Benchmark results

First, I ran a benchmark on an 8 GB loopback device in tmpfs, and then on a loopback device nested within that loopback device (with sync after every write operation):

ext4 in tmpfs:

Measured speed: 557, 567, 563, 558, 560, 559, 556, 556, 554, 557
Average speed : 558.7 MB/s  (min 554  max 567)

ext4 in ext4 in tmpfs:

Measured speed: 296, 298, 295, 295, 299, 297, 294, 295, 296, 296
Average speed : 296.1 MB/s  (min 294  max 299)

Clearly, there is a noticeable performance penalty for the extra loopback layer when syncing after every write.

Then I repeated the same test on my HDD.

ext4 (HDD, 1000 MB, 3 times):

Measured speed: 24.1, 23.6, 23.0
Average speed : 23.5 MB/s  (min 23.0  max 24.1)

ext4 in ext4 (HDD, 945MB):

Measured speed: 12.9, 13.0, 12.7
Average speed : 12.8 MB/s  (min 12.7  max 13.0)

The same benchmark on the HDD, now without syncing after every write: time (dd if=/dev/zero bs=1M count=1000 of=file; sync), with the speed measured as <size>/<time in seconds>.

ext4 (HDD, 1000 MB):

Measured speed: 84.3, 86.1, 83.9, 86.1, 87.7
Average speed : 85.6 MB/s  (min 83.9  max 87.7)

ext4 in ext4 (HDD, 945MB):

Measured speed: 89.9, 97.2, 82.9, 84.0, 82.7
Average speed : 87.3 MB/s  (min 82.7  max 97.2)

(Surprisingly, the nested-loopback benchmark looks better than the raw-disk benchmark here, presumably because the loopback device is smaller than the whole disk, so less time is spent on the final sync-to-disk.)
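
For completeness, the averages and min/max values above can be reproduced from the measured speeds with a one-liner like this (just a sketch, not necessarily how they were computed originally):

echo "84.3 86.1 83.9 86.1 87.7" | tr ' ' '\n' \
  | awk 'NR==1 {min=max=$1} {sum+=$1; if ($1<min) min=$1; if ($1>max) max=$1} END {printf "avg %.1f  min %.1f  max %.1f\n", sum/NR, min, max}'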

Benchmark setup

First, I created an 8 GB loopback filesystem in my /tmp (tmpfs):

truncate /tmp/file -s 8G
mkfs.ext4 /tmp/file
sudo mount /tmp/file /mnt/
sudo chown $USER /mnt/
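
Note that mount attaches the image file to a loop device behind the scenes (depending on your util-linux version you may need to pass -o loop explicitly); the attached loop device can be inspected with:

losetup -a
findmnt /mnt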

Then I established a baseline by filling the mounted loopback filesystem with data:

$ dd if=/dev/zero bs=1M of=/mnt/bigfile oflag=sync
dd: error writing '/mnt/bigfile': No space left on device
7492+0 records in
7491+0 records out
7855763456 bytes (7.9 GB) copied, 14.0959 s, 557 MB/s

After doing that, I created another loopback filesystem inside the previous one:

mkdir /tmp/mountpoint
mkfs.ext4 /mnt/bigfile
sudo mount /mnt/bigfile /tmp/mountpoint
sudo chown $USER /tmp/mountpoint
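
At this point there should be two loop devices attached, one backed by /tmp/file and one backed by /mnt/bigfile, which can be verified with something like:

losetup -a
findmnt /tmp/mountpoint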

And ran the benchmark again, ten times:

$ dd if=/dev/zero bs=1M of=/tmp/mountpoint/file oflag=sync
...
7171379200 bytes (7.2 GB) copied, 27.0111 s, 265 MB/s
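
One way to automate the ten runs and pull the reported speed out of dd's output (a sketch, not the literal commands I used):

for i in $(seq 10); do
    # each run fills the nested filesystem completely, with a sync per write
    dd if=/dev/zero bs=1M of=/tmp/mountpoint/file oflag=sync 2>&1 | awk '/copied/ {print $(NF-1), $NF}'
    rm /tmp/mountpoint/file
done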

and then I unmounted the test filesystems and removed the test files:

sudo umount /tmp/mountpoint
sudo umount /mnt
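
When mount set up the loop devices automatically, they are released again on umount; if you attached them by hand, detach them explicitly:

losetup -d /dev/loopX   # use the device name reported by losetup -a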

(similarly for the test on the HDD, except that I also added count=1000 to prevent the test from filling my whole disk)
(and for the test without sync-on-write, I timed the combined dd and sync operation, as shown below)
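
Spelled out, the timed no-sync run from above is simply (with of=file pointing into whichever filesystem is under test):

# write 1000 MB through the page cache, then flush; the reported wall-clock time includes the final sync
time (dd if=/dev/zero bs=1M count=1000 of=file; sync)
# speed in MB/s = 1000 / (time in seconds)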