ext4 partition size / free space discrepancies

There are a few things going on here. gparted reports the actual used/free space. The kernel reduces the available count by the reserved space. After you removed the reserved space, the free count did not change because the reserved blocks already were free; it is just that non-root users are not allowed to invade that space, to prevent them from causing trouble by filling up the disk. The GNOME numbers are a little flaky because of a bug: instead of reporting the used space that the kernel reports (and df shows), it computes it by subtracting the free space from the total, which causes it to show the reserved space as used.
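
If you want to see this on your own filesystem, comparing the kernel's numbers with the reserved-block setting makes the difference obvious. A rough sketch (/dev/sda1 is just a placeholder, substitute your own partition):

df -B4096 /dev/sda1                                         # size/used/available in 4 KiB blocks
sudo tune2fs -l /dev/sda1 | grep -i 'reserved block count'  # blocks held back for root
# used + available + reserved should come close to the total block count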

The missing 4 GB actually is used: it is the filesystem overhead for ext4. NTFS initially allocates only a small amount of space for the MFT and grows it as needed. The ext series of filesystems, however, allocate space for the inode table (the rough equivalent of the MFT) at format time, and it cannot grow. The space missing from the reported total space is the inode table. The remaining bit of used space comes from the journal (usually 128 MB) and the resize inodes.
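
As a rough sketch of how you could estimate that fixed overhead yourself, using the fields that dumpe2fs prints (/dev/sda1 again a placeholder):

sudo dumpe2fs -h /dev/sda1 2>/dev/null | egrep 'Inode count|Inode size|Journal length|Block size'
# inode table bytes ~= "Inode count" * "Inode size"
# journal bytes     ~= "Journal length" * "Block size"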


First of all, reserved blocks are not blocks used for filesystem internal management.

Reserved blocks are simply reserved for root, so as to ensure that services using files on that partition cannot be starved of space by some non-admin user filling up all the space.

Even with no reserved blocks (-m 0), part of the space is always used for filesystem internal management; I cannot say exactly how much, as I don't have that deep a knowledge of the internals.
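
For what it's worth, you can query (and change) the reserved-blocks setting without reformatting; a minimal sketch, with /dev/sdXN as a placeholder partition:

sudo tune2fs -l /dev/sdXN | grep 'Reserved block count'   # how many blocks are reserved for root
sudo tune2fs -m 0 /dev/sdXN                               # set the reserved percentage to 0%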

Also, GParted is executed as root, so it sees reserved blocks as free. Nautilus, executed as a regular user, sees them as not free.

OK, @psusi's answer is very clear; I have nothing to add.


After I partitioned my brand new 8 TB disk with gparted, it reported:

  Size: 7.28 TiB
  Used: 59.76 GiB   <-- Huh?
Unused: 7.22 TiB

Which is why I ended up here. Now let the investigation begin.

Running sudo fdisk /dev/sdc (where /dev/sdc is my new disk) reveals:

Disk /dev/sdc: 7,3 TiB, 8001563222016 bytes, 15628053168 sectors
Units: sectors of 1 * 512 = 512 bytes
...
Disklabel type: gpt

Note that 15628053168 * 512 = 8001563222016.

From now on, let's work in numbers of SECTORS (which are 512 bytes each) and exclusively in hexadecimal notation. This gives us:

fdisk (real values)
Disk size: 3a3812ab0
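
(If you want to follow along, the decimal/hexadecimal conversions can be done straight from the shell, for example:)

printf '%x\n' 15628053168    # -> 3a3812ab0
echo $((0x3a3812ab0))        # -> 15628053168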

Furthermore, fdisk gives us the partition table:

Device     Start         End     Sectors  Size Type
/dev/sdc1   2048 15628052479 15628050432  7,3T Linux filesystem

Let's translate that into hex too (it is already in sectors):

/dev/sdc1    800   3a38127ff   3a3812000  7.277378082275390625 TiB

(That TiB value is exact, just written in decimal; it shows why 7.3 was printed.)
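
(That exact value can be reproduced with bc, assuming bc is installed: the partition size in sectors, times 512 bytes per sector, divided by 2^40 bytes per TiB.)

echo 'scale=20; 15628050432 * 512 / 1024^4' | bc    # -> 7.27737808227539062500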

The first 0x800 sectors are reserved for the Master Boot Record (MBR) and the partition table (type gpt, since it was created by gparted and I chose to use that type there).

The End sector is inclusive, so indeed

3a38127ff + 1 - 800 = 3a3812000

But why was this chosen? Well, because gparted rounded everything off to 1 MiB boundaries (it said so), which happens to be 0x800 sectors (1024 * 1024 / 512 = 2048 = 0x800).

Nevertheless, why didn't it pick 3a3812fff as the last sector? Well, because that sector doesn't exist; the total disk size is only 3a3812ab0 sectors, as we saw before.

Ok, so we need a little space at the start for the MBR and partition table, but we only want to start and end partitions at 0x800 boundaries; therefore the first sector is 800 and the last one is 3a38127ff. That leads to a total partition size of 3a3812000 sectors, or the 7.28 TiB reported by gparted (8001561821184 bytes in decimal).

The type of the filesystem on it is ext4.

Let's start by mounting it, and let's now work in sectors in DECIMAL:

sudo mount -t ext4 /dev/sdc1 /mnt/newdisk
df -B512 | grep sdc1
/dev/sdc1 15502817864 102728 14721279848  1% /mnt/newdisk

So df reports, in sectors (with conversions to bytes and TiB):

df Size: 15502817864 sectors (= 7937442746368 bytes = 7.219... TiB).
df Used: 102728 sectors (= 52596736 bytes = 50.16 MiB).
df Available: 14721279848 sectors (= 7537295282176 bytes = 6.855... TiB).

Hence, the reported size is 8001561821184 - 7937442746368 = 64119074816 = 59.716 GiB less than the partition size reported by fdisk!
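
(Again just shell arithmetic and bc, to double-check that difference:)

echo $((15628050432*512 - 15502817864*512))    # -> 64119074816 bytes
echo 'scale=4; 64119074816 / 1024^3' | bc      # -> 59.7155 (GiB)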

Ok, so how does ext4 work?

We can quickly get a lot of information by running

sudo dumpe2fs -h /dev/sdc1

The most important output is

Inode count:              244191232
Block count:              1953506304
Reserved block count:     97675315
Free blocks:              1937839392
Free inodes:              244191221
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      558
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         4096
Inode blocks per group:   256
Flex block group size:    16
First inode:              11
Inode size:               256
Required extra isize:     32
Desired extra isize:      32
Journal size:             1024M
Journal length:           262144

Note that the Block count shows the full partition size. One block being 4096 bytes, we have 1953506304 * 4096 = 8001561821184.

So clearly we're looking for blocks that are not available to us. Going by what df reports as Available (7537295282176 / 4096 = 1840159981 blocks available), that leaves 113346323 blocks that are not available.

The journal consists of 262144 blocks, so... 113346323 - 262144 = 113084179 blocks to go.

We have 558 reserved GDT blocks... 113083621 blocks to go.

The number of "groups" on the fs is 'total number of inodes' / 'inodes per group' = 244191232 / 4096 = 59617.

The inodes being 256 bytes in size account for 244191232 * 256 / 4096 = 15261952 blocks, so 113083621 - 15261952 = 97821669 blocks to go.

We're out of options here; apparently the Reserved block count isn't available either, which is 97675315... so that leaves 97821669 - 97675315 = 146354 unavailable blocks that we haven't explained yet. That is still 571.7 MiB, or ~2.455 blocks per group, but not THAT much compared to the 59.76 GiB that we had to explain.
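
The whole accounting exercise above fits in a few lines of shell arithmetic, using the numbers from dumpe2fs (this is just a recap of the reasoning, not an exact model of ext4's on-disk layout):

total=1953506304                          # Block count
avail=$((7537295282176 / 4096))           # df Available, in blocks -> 1840159981
echo $((total - avail))                   # -> 113346323 blocks unavailable
echo $((113346323 - 262144))              # minus journal             -> 113084179
echo $((113084179 - 558))                 # minus reserved GDT blocks -> 113083621
echo $((113083621 - 244191232*256/4096))  # minus inode tables        -> 97821669
echo $((97821669 - 97675315))             # minus reserved blocks     -> 146354 left unexplained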

Running the following command:

cat /proc/fs/ext4/sdc1/mb_groups | sed -e 's/^#.*://' | sort | uniq -c | sort -rn | sed -e 's/^\(..............\).*/\1/' | grep -v free

We get the number (first column) of groups that have N blocks free (second column):

  55860  32768
   3724  28640
     22  31743
      8  0    
      1  8958 
      1  28639
      1  27609

Clearly the maximum of free blocks per group is 32768 (most of them), which is also what dumpe2fs reported (Blocks per group).

So, let me convert this table to 'Used' in bytes by subtracting the second column from 32768 and multiplying the result by 4096 bytes. Then I get

   3724  16908288
     22  4198400
      8  134217728
      1  97525760
      1  16912384
      1  21131264

and 3724 * 16908288 + 22 * 4198400 + 8 * 134217728 + 97525760 + 16912384 + 21131264 = 64268140544 or 59.85 GiB.
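
(The same sum, as a quick shell check:)

echo $((3724*16908288 + 22*4198400 + 8*134217728 + 97525760 + 16912384 + 21131264))
# -> 64268140544 bytes, i.e. ~59.85 GiB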

SUMMARY

Unavailable blocks      Reason
97675315                Reserved block count (5%)
15261952                inodes (0.78%)
262144                  journal (0.013%)
146354                  Unexplained (0.007%)
558                     Reserved GDT blocks (0%)

Let's start by changing the reserved block count to 0, because this HDD is for long-term storage and I really don't care what happens if it runs full (I do, but my system will still function perfectly):

sudo umount /dev/sdc1
sudo tune2fs -r 0 /dev/sdc1
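
(Note that tune2fs can also express this as a percentage; setting the percentage to zero should have the same effect as -r 0:)

sudo tune2fs -m 0 /dev/sdc1     # reserve 0% of the blocks for root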

dumpe2fs now reports:

Reserved block count:     0

but more importantly,

df -B512 | grep sdc1
/dev/sdc1 15502817864 102728 15502682368   1% /mnt/newdisk

aka

df Available: 15502682368 sectors (= 7937373372416 bytes = 7.22... TiB)!

We can also reduce the number of inodes, but that is only smart if you are sure that you won't need them; for example, when you will only store large files on the disk. The number of inodes is roughly the number of files + directories you can store on the disk. So, having 244191232 inodes allows me to store files with an average size of 32 kB (32504 bytes) on this disk. Instead I intend to store mostly files of roughly 1 to 2 GB on it... So yeah, I think I can safely reduce the number of inodes by, say, a factor of 10.
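
In other words, you can pick the mkfs.ext4 -i value (bytes per inode) from the average file size you expect. A back-of-the-envelope sketch with the value I ended up using (mkfs rounds the inodes-per-group up, so the final count comes out a bit higher):

echo $((8001561821184 / 325040))    # -> 24617160; the filesystem ends up with 24800672 inodes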

So, I decided to reformat my partition (screw gparted, all I needed was a partition table, not a filesystem):

sudo mkfs.ext4 -b 4096 -e remount-ro -i 325040 -j -L '/opt/verylarge' -m 0 -t ext4 -T big -U 48c6a937-aea3-42a0-a69c-c24d0dc65179 /dev/sdc1

After which I ended up with

Inode count:              24800672
Block count:              1953506304
Reserved block count:     0
Free blocks:              1951529866
Free inodes:              24800661
First block:              0
Block size:               4096
Fragment size:            4096
Group descriptor size:    64
Reserved GDT blocks:      1024
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         416
Inode blocks per group:   26
Flex block group size:    16
First inode:              11
Inode size:               256
Required extra isize:     32
Desired extra isize:      32
Journal size:             1024M
Journal length:           262144

df -H
Filesystem  Size  Used Avail Use% Mounted on
/dev/sdc1   8,0T   97M  8,0T   1% /mnt/newdisk

I have NO idea where that 1% comes from, but I'm happy with my 8,0 TB :)
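
(For what it's worth, the 8,0T is most likely just df -H reporting in powers of 1000 rather than 1024, and df seems to round any non-zero Use% up to a whole percent. A quick check with the new numbers:)

echo 'scale=2; 1951529866 * 4096 / 1000^4' | bc    # free blocks in decimal TB -> 7.99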


Try reducing the partition size by a few megabytes using gparted, then increasing it again to its original size. This may cause other applications to read the sizes correctly. I recently corrected a 50 GB error this way!