Why don't ext filesystems fill the entire device?
I've just noticed that every ext{2,3,4} filesystem I try to create on a 500G HDD doesn't use all the available space (466G). I've also tried reiser3, xfs, jfs, btrfs, and even vfat. All of them create a filesystem of size 466G (as shown by df -h). However, ext* creates a 459G filesystem. Disabling reserved blocks increases the space available to the user, but the size of the fs is still 459G.
The same holds for a 1TB HDD: 932G for reiserfs, 917G for ext4.
So, what is this 1.5% difference? Why does it happen, and is there a way to make ext fill the whole volume?
UPD: All tests were done on the same machine, on the same HDD, etc. It doesn't matter how 466G differs from the marketed 500G. The problem is that it differs between filesystems.
About df: it shows the total FS size, used space, and free space. In this case I have:
for reiserfs:
/dev/sda1 466G 33M 466G 1% /mnt
for ext4:
/dev/sda1 459G 198M 435G 1% /mnt
If I turn the root block reservation off, 435G changes to 459G, the full size of the fs (minus the 198M used). But the fs itself is still 459G for ext4 versus 466G for reiserfs!
UPD2: Filling volumes with real data via dd:
reiserfs:
fs:~# dd if=/dev/zero of=/mnt/1
dd: writing to '/mnt/1': No space left on device
975702649+0 records in
975702648+0 records out
499559755776 bytes (500 GB) copied, 8705.61 s, 57.4 MB/s
ext2 with block reservation turned off (mke2fs -m 0):
fs:~# dd if=/dev/zero of=/mnt/1
dd: writing to '/mnt/1': No space left on device
960356153+0 records in
960356152+0 records out
491702349824 bytes (492 GB) copied, 8870.01 s, 55.4 MB/s
(The dd output above is translated from Russian; I ran it in my default locale and rerunning takes too long. It doesn't matter, the dd output is obvious either way.)
So it turns out that mke2fs really does create a smaller filesystem than the other mkfs tools.
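For reference, the gap between those two runs comes to about 7.3 GiB (a quick bc check on the byte counts above):
fs:~# echo '(499559755776 - 491702349824) / 2^30' | bc -l
7.31777954101562500000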
Solution 1:
There are two reasons this is true.
First, for some reason or another OS writers still report free space in terms of a base-2 system, while hard drive manufacturers report free space in terms of a base-10 system. For example, an OS writer will call 1024 bytes (2^10 bytes) a kilobyte, and a hard drive manufacturer will call 1000 bytes a kilobyte. This difference is pretty minor for kilobytes, but once you get up to terabytes, it's pretty significant. An OS writer will call 1099511627776 bytes (2^40 bytes) a terabyte, and a hard drive manufacturer will call 1000000000000 bytes a terabyte.
These two different ways of talking about sizes frequently lead to a lot of confusion.
There is a spottily supported IEC standard of prefixes for binary sizes. User interfaces designed with the new prefixes in mind will show TiB, GiB (or more generally XiB) when showing sizes with a base-2 prefix system.
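This is exactly where your 466G figure comes from: a marketed 500 GB is 500 * 10^9 bytes, which in base-2 units is about 465.7 GiB, and df -h rounds that to 466G:
$ echo '500 * 10^9 / 2^30' | bc -l
465.66128730773925781250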
Secondly, df -h reports how much space is available for your use. All filesystems have to write housekeeping information to keep track of things for you. This information takes up some of the space on your drive. Not generally very much, but some. That also accounts for some of the seeming loss you're seeing.
Now that you've edited your post to make it clear that neither of those points actually answers your question, I'll take a stab at the real question...
Different filesystems use different amounts of space for housekeeping information and report that space usage in different ways.
For example, ext2 divides the disk up into block groups (its analogue of BSD's cylinder groups). It then pre-allocates space in each block group for inodes and free-space maps. ext3 does the same thing, since it's basically ext2 + journaling. And ext4 also does the exact same thing, since it's a fairly straightforward (and almost backwards-compatible) modification of ext3. Since this meta-data overhead is fixed at filesystem creation or resize, it's not reported as 'used' space. I suspect this is also because the block group meta-data sits at fixed places on the disk, and so is simply implied as being used and hence not marked off or accounted for in the free-space maps.
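As a rough sanity check (assuming the common mke2fs defaults of one 256-byte inode per 16384 bytes of disk, i.e. the inode_size and inode_ratio values typically found in /etc/mke2fs.conf; these vary by distribution and version), the pre-allocated inode tables on a 500 GB drive alone come to:
$ echo '500 * 10^9 * 256 / 16384' | bc
7812500000
That's roughly 7.3 GiB, which is right around the 466G vs 459G gap in the question.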
But reiserfs does not pre-allocate meta-data of any kind. It has no inode limit fixed at filesystem creation because it allocates all of its inodes on-the-fly, just like data blocks. At most, it needs some structures describing the root directory and a free-space map of some sort. So it uses much less space when it has nothing in it.
But this means that reiserfs will take up more space as you add files because it will be allocating meta-data (like inodes) as well as the actual data space for the file.
I do not know exactly how jfs and btrfs track meta-data space usage, but I suspect they track it more like reiserfs does. vfat in particular has no inode concept at all. Its free-space map, whose size is fixed at filesystem creation (the infamous FAT), stores much of the data an inode would, and the directory entry (which is dynamically allocated) stores the rest.
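One way to observe the fixed-versus-dynamic difference from userspace: df -i shows a hard inode total on ext filesystems, chosen at mkfs time, whereas filesystems that allocate inodes on-the-fly report no such fixed ceiling there:
$ df -i /mnt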
Solution 2:
As well as the issues that Omnifarious mentions, with ext2/3/4 a certain amount of space is reserved for root - this reserved space does not show in the output of df.
For instance, creating a small filesystem (~100MB) with default options, using ext2 rather than 3 or 4 in order to ignore the space that would otherwise be taken by the journal:
swann:/tmp# dd if=/dev/zero of=./loop.fs bs=10240 count=10240
swann:/tmp# mkfs.ext2 loop.fs
swann:/tmp# mkdir loop
swann:/tmp# mount -text2 -oloop loop.fs loop
swann:/tmp# df loop
Filesystem 1K-blocks Used Available Use% Mounted on
/tmp/loop.fs 99150 1550 92480 2% /tmp/loop
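The Available figure already reflects the default 5% root reserve: 5% of this 102400-block filesystem is 5120 blocks, and indeed:
swann:/tmp# echo '99150 - 1550 - (102400 * 5 / 100)' | bc
92480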
Tweaking the reserved blocks option (tune2fs's -m option sets the reserved blocks as a percentage, and the -r option sets the reserved blocks as a straight number of blocks):
swann:/tmp# umount loop
swann:/tmp# tune2fs -m 25 loop.fs
swann:/tmp# mount -text2 -oloop loop.fs loop
swann:/tmp# df loop
Filesystem 1K-blocks Used Available Use% Mounted on
/tmp/loop.fs 99150 1550 72000 3% /tmp/loop
swann:/tmp# umount loop
swann:/tmp# tune2fs -m 0 loop.fs
swann:/tmp# mount -text2 -oloop loop.fs loop
swann:/tmp# df loop
Filesystem 1K-blocks Used Available Use% Mounted on
/tmp/loop.fs 99150 1550 97600 2% /tmp/loop
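The same arithmetic checks out for the tweaked values:
swann:/tmp# echo '99150 - 1550 - (102400 * 25 / 100)' | bc
72000
swann:/tmp# echo '99150 - 1550' | bc
97600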
As you can see in the example above, even when logged in as root df doesn't show the reserved space in the "Available" count. The reserved space does not show in the "Used" count either, whether logged in as root or as a less privileged user. This can sometimes cause confusion when a filesystem is close to full, if you are not expecting these two facts.
Also note that tune2fs, despite its name, is relevant for ext3 and ext4 filesystems as well as ext2 ones.
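And if you just want to inspect the current reservation rather than change it, tune2fs -l prints it along with the rest of the superblock fields:
swann:/tmp# tune2fs -l loop.fs | grep 'Reserved block count'   # reports 0 after the -m 0 above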