Btrfs: increase RAID capacity by replacing disks (not by adding disks!)

Yes, the capacity will grow in btrfs when you replace the drives with bigger ones. But make sure you always have backups! While the RAID0/1 code in btrfs is not nearly as buggy as the RAID5/6 code (as of 07/2016), your device replacement would not be the first one to go horribly wrong.


It should work as you have described it. However, an additional step may be necessary.

For example, if you put four drives of 3 GB each in a RAID1 configuration, you'll end up with a capacity of 6 GB. Replacing two of those drives with 4 GB drives should give you 7 GB of capacity (btrfs disk usage calculator).
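
To see why, recall that btrfs RAID1 keeps exactly two copies of every chunk on two different devices, so the usable capacity is roughly half the raw capacity (as long as no single device is larger than all the others combined). A quick back-of-the-envelope calculation for the drive sizes used here:

    4 x 3 GiB              -> (3 + 3 + 3 + 3) / 2 = 6 GiB usable
    2 x 3 GiB + 2 x 4 GiB  -> (3 + 3 + 4 + 4) / 2 = 7 GiB usable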

Step 1: Create BTRFS RAID1 volume with 4x 3G = 6G capacity:

# mkfs.btrfs -f -draid1 -mraid1 /dev/sdb /dev/sdc /dev/sdd /dev/sde >/dev/null 
# mount /dev/sdb BTRFS/
# btrfs fi show BTRFS/
Label: none  uuid: e6dc6a95-ae5e-49c4-bded-77001b445ac7
    Total devices 4 FS bytes used 192.00KiB
    devid    1 size 3.00GiB used 331.12MiB path /dev/sdb
    devid    2 size 3.00GiB used 0.00B path /dev/sdc
    devid    3 size 3.00GiB used 0.00B path /dev/sdd
    devid    4 size 3.00GiB used 0.00B path /dev/sde

# parted -s /dev/sdb print | grep Disk
Disk /dev/sdb: 3221MB
Disk Flags: 
# parted -s /dev/sdc print | grep Disk
Disk /dev/sdc: 3221MB
Disk Flags: 
# parted -s /dev/sdd print | grep Disk
Disk /dev/sdd: 3221MB
Disk Flags: 
# parted -s /dev/sde print | grep Disk
Disk /dev/sde: 3221MB
Disk Flags: 
# df -h BTRFS/
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb        6.0G   17M  5.3G   1% /mnt/BTRFS
# btrfs fi df BTRFS/
Data, RAID1: total=1.00GiB, used=320.00KiB
Data, single: total=1.00GiB, used=0.00B
System, RAID1: total=32.00MiB, used=16.00KiB
Metadata, RAID1: total=256.00MiB, used=112.00KiB
GlobalReserve, single: total=16.00MiB, used=0.00B
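
Side note: the btrfs fi df output above still shows an empty Data, single chunk, most likely left over from mkfs on this btrfs-progs version. It is harmless, but if you want every chunk stored as RAID1, a balance with a convert filter should clean it up; a minimal sketch, assuming a btrfs-progs version that supports the soft modifier:

# btrfs balance start -dconvert=raid1,soft -mconvert=raid1,soft BTRFS/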

Step 2: Replace two of the 3G drives (the 3rd and the 4th) with 4G drives:

# parted -s /dev/sdf print | grep Disk
Disk /dev/sdf: 4295MB
Disk Flags: 
# parted -s /dev/sdg print | grep Disk
Disk /dev/sdg: 4295MB
Disk Flags: 
# btrfs replace start -f 3 /dev/sdf BTRFS/
# btrfs replace start -f 4 /dev/sdg BTRFS/
# btrfs fi show BTRFS/
Label: none  uuid: e6dc6a95-ae5e-49c4-bded-77001b445ac7
    Total devices 4 FS bytes used 512.00KiB
    devid    1 size 3.00GiB used 1.28GiB path /dev/sdb
    devid    2 size 3.00GiB used 1.25GiB path /dev/sdc
    devid    3 size 3.00GiB used 1.06GiB path /dev/sdf
    devid    4 size 3.00GiB used 544.00MiB path /dev/sdg

# df -h BTRFS/
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb        6.0G   17M  5.2G   1% /mnt/BTRFS
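
By the way, btrfs replace start returns immediately and copies the data in the background; while a replacement is still running, its progress can be checked with:

# btrfs replace status BTRFS/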

The RAID1 filesystem should have a capacity of 7 GB, but it only has 6 GB.

Solution

The filesystem needs to be resized to use all of the available space (a balance won't help here). The resize has to be done for every device that has been replaced, i.e. for devices #3 and #4 in this example. Note that btrfs fi resize acts on devid 1 when no devid is given, which is why a plain "resize max" is not enough.

# df -h BTRFS/
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb        6.0G   17M  5.8G   1% /mnt/BTRFS
# btrfs fi show BTRFS/
Label: none  uuid: e71b4996-5f7c-4b08-b8d8-87163430b643
    Total devices 4 FS bytes used 448.00KiB
    devid    1 size 3.00GiB used 1.00GiB path /dev/sdb
    devid    2 size 3.00GiB used 1.00GiB path /dev/sdc
    devid    3 size 3.00GiB used 288.00MiB path /dev/sdf
    devid    4 size 3.00GiB used 288.00MiB path /dev/sdg

# btrfs fi resize 3:max BTRFS/
Resize 'BTRFS/' of '3:max'
# btrfs fi resize 4:max BTRFS/
Resize 'BTRFS/' of '4:max'
# df -h BTRFS/
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb        7.0G   17M  6.8G   1% /mnt/BTRFS

The filesystem now has its expected capacity of 7 GB.
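
Given the warning about backups at the top, it does not hurt to verify the copied data once the replacements are done. This is optional and not part of the procedure above, but a scrub will re-read everything and check the checksums:

# btrfs scrub start BTRFS/
# btrfs scrub status BTRFS/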

Step 2 (alternative): Add and remove drives (the old way, not recommended)

Before the replace command was added, the only way to swap drives was to add a new drive and then remove the old one. This usually takes more time, and it has the drawback of leaving a devid hole: the removed device's id is never reused, so the device ids no longer match their respective positions in the RAID array.

# df -h BTRFS/
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb        6.0G   17M  5.3G   1% /mnt/BTRFS
# btrfs dev add -f /dev/sdf BTRFS/
# btrfs dev add -f /dev/sdg BTRFS/
# btrfs fi show BTRFS/
Label: none  uuid: ac40a98a-ac3b-4563-9ec9-6135332e5cdc
    Total devices 6 FS bytes used 448.00KiB
    devid    1 size 3.00GiB used 1.03GiB path /dev/sdb
    devid    2 size 3.00GiB used 1.25GiB path /dev/sdc
    devid    3 size 3.00GiB used 1.03GiB path /dev/sdd
    devid    4 size 3.00GiB used 256.00MiB path /dev/sde
    devid    5 size 4.00GiB used 0.00B path /dev/sdf
    devid    6 size 4.00GiB used 0.00B path /dev/sdg

# btrfs dev rem /dev/sdd BTRFS/
# btrfs dev rem /dev/sde BTRFS/
# df -h BTRFS/
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb        7.0G   17M  6.8G   1% /mnt/BTRFS
# btrfs fi show BTRFS/
Label: none  uuid: efc5d80a-54c6-4bb9-ba8f-f9d392415d3f
    Total devices 4 FS bytes used 640.00KiB
    devid    1 size 3.00GiB used 1.00GiB path /dev/sdb
    devid    2 size 3.00GiB used 1.00GiB path /dev/sdc
    devid    5 size 4.00GiB used 1.03GiB path /dev/sdf
    devid    6 size 4.00GiB used 1.03GiB path /dev/sdg

When using add/remove, it is not necessary to manually grow the volume.

Note that, when using add/remove, the 3rd drive in the RAID array ends up with devid 5 instead of 3, which may be confusing when you need to identify a drive based on its slot in your rack.


This was done with btrfs version 4.4. Future versions may behave differently.
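
To check which versions you are running (userspace tools and kernel):

# btrfs --version
# uname -r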