raidz1 6x4TB = 16.9T?
I am running Ubuntu Server with the latest version of zfs-utils. I installed 6x4TB disks (lsblk -b shows partition 1 on every disk as size=4000787030016) and created a raidz1 pool from all 6 disks. The raidz calculator website said I should see about 20TB "usable". When I run "zpool list" I see 21.8T "FREE". When I run "zfs list" I see 16.9T "AVAIL". When I run "df -h" I see 17T "Size". I am quite surprised that I started with approximately 24T of disks and after raidz1 I am left with only 17T. It is my understanding that raidz1 is "similar" to RAID5, so I expected to lose about 4T to parity, but where did the other 3T go? The quick math behind my expectation is shown below.
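For reference, this is how I arrived at those numbers (just illustrative python3 one-liners run from a shell, multiplying the partition size lsblk -b reports; these are my own calculations, not ZFS output):

$ python3 -c "print(6 * 4000787030016)"          # raw bytes across all 6 disks
24004722180096
$ python3 -c "print((6 - 1) * 4000787030016)"    # what I expected usable, minus one disk for parity
20003935150080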
2021/02/20 update - I have deleted and recreated /tank a few times and may have posted stats from a previous build. Below are the stats when using the 6x4TB disks. Apologies for the messy output; I am not yet familiar with how to format things properly on this forum.
root@bignas1:~# zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
tank   805K  16.9T   153K  /tank
root@bignas1:~# zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
tank   21.8T  1.12M  21.8T        -         -     0%     0%  1.00x  ONLINE  -
root@bignas1:~# zpool list -v
NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
tank       21.8T  1.12M  21.8T        -         -     0%     0%  1.00x  ONLINE  -
  raidz1   21.8T  1.12M  21.8T        -         -     0%  0.00%      -  ONLINE
    sda        -      -      -        -         -      -      -      -  ONLINE
    sdb        -      -      -        -         -      -      -      -  ONLINE
    sdc        -      -      -        -         -      -      -      -  ONLINE
    sdd        -      -      -        -         -      -      -      -  ONLINE
    sde        -      -      -        -         -      -      -      -  ONLINE
    sdf        -      -      -        -         -      -      -      -  ONLINE
root@bignas1:~# zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
tank   21.8T   912K  21.8T        -         -     0%     0%  1.00x  ONLINE  -
root@bignas1:~# zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
tank   594K  16.9T   153K  /tank
root@bignas1:~# zfs version
zfs-0.8.3-1ubuntu12.6
zfs-kmod-0.8.3-1ubuntu12.5
2021/02/20 update 2
root@bignas1:~# cat /sys/module/zfs/parameters/spa_slop_shift
5
root@bignas1:~# lsblk | grep sd
sda 8:0 0 3.7T 0 disk
├─sda1 8:1 0 3.7T 0 part
└─sda9 8:9 0 8M 0 part
sdb 8:16 0 3.7T 0 disk
├─sdb1 8:17 0 3.7T 0 part
└─sdb9 8:25 0 8M 0 part
sdc 8:32 0 3.7T 0 disk
├─sdc1 8:33 0 3.7T 0 part
└─sdc9 8:41 0 8M 0 part
sdd 8:48 0 3.7T 0 disk
├─sdd1 8:49 0 3.7T 0 part
└─sdd9 8:57 0 8M 0 part
sde 8:64 0 3.7T 0 disk
├─sde1 8:65 0 3.7T 0 part
└─sde9 8:73 0 8M 0 part
sdf 8:80 0 3.7T 0 disk
├─sdf1 8:81 0 3.7T 0 part
└─sdf9 8:89 0 8M 0 part
First, you need to consider that a 4 TB disk really is a 3.64 TiB one (due to the decimal/binary conversion, i.e. one TB is 10^12 bytes while one TiB is 2^40 bytes).
So, 3.64 TiB * 5 data disks = 18.2 TiB. Note: by design, zpool list reports the total raw space without accounting for parity overhead, which is why it shows 21.8 TiB (3.64 TiB * 6).
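As a quick sanity check of the conversion (plain python3 one-liners from the shell, using the 4000787030016-byte partition size reported above; my own arithmetic, not ZFS output):

$ python3 -c "print(round(4000787030016 / 2**40, 2))"       # one 4 TB disk in TiB
3.64
$ python3 -c "print(round(6 * 4000787030016 / 2**40, 2))"   # raw pool size, matches zpool list
21.83
$ python3 -c "print(round(5 * 4000787030016 / 2**40, 2))"   # 5 data disks only
18.19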
From that, you need to subtract the space ZFS reserves for itself (the "slop space"). By default it is 1/2^spa_slop_shift = 1/32 of the pool size, roughly 3.1% (tunable via the spa_slop_shift module parameter).
18.2 TiB - 3.1% ≈ 17.6 TiB
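Putting it together (again just illustrative arithmetic, assuming the partition size above and the default spa_slop_shift of 5, i.e. a 1/32 reservation):

$ python3 -c "d = 4000787030016 / 2**40; print(round(5 * d * (1 - 1/32), 2))"
17.62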
Curiously, you lose an additional ~0.7 TiB that I cannot account for. On a test machine of mine configured with 6x 3.64 TiB virtual disks, zfs list reports the expected 17.6 TiB as AVAIL.
Can you show the output of cat /sys/module/zfs/parameters/spa_slop_shift and lsblk?