Multi-device BTRFS filesystem with disks of different sizes

I have an existing BTRFS filesystem composed of one 500GB disk, and I just bought a 2TB disk to increase the storage capacity of my home server. I want to add the new disk to the existing filesystem. From what I have read, it seems like no BTRFS setup can handle disks of different sizes without wasting the difference in size between the larger and the smaller disk, but I'm new to BTRFS and I might have missed something. Is there a setup that allows me to combine the two disks in one filesystem without wasting space?


Solution 1:

Btrfs can use different RAID levels for data and metadata:

The default for a multi-device filesystem is RAID1 for the metadata (directories etc.) and RAID0 for the data (on a single disk, the metadata is duplicated on that one disk instead).

If you did not change this, then you will likely have no problem adding the second disk and running a re-balance, because only the metadata will be copied to both disks (you can see your metadata size with btrfs filesystem df /). Just be aware that if either of your disks fails, you lose data.
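
The add-and-rebalance step looks roughly like this; just a sketch, assuming the new disk is /dev/sdb and the filesystem is mounted at /mnt (adjust both to your setup):

btrfs device add /dev/sdb /mnt
btrfs balance start /mnt

(On older btrfs-progs the second command is spelled btrfs filesystem balance /mnt.)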

Because the 2TB disk is so much bigger than the 500GB one, it would perhaps give you better odds to add the new one and then remove the old one (the odds of one specific drive failing are a lot lower than the odds of either of the drives failing).
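
Removing the old disk migrates its data onto the remaining device before detaching it; again only a sketch, with the old device name and mount point assumed:

btrfs device delete /dev/sda /mnt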

If you plan on having a RAID array later (with more similarly sized drives), you may want to re-create the filesystem on the new drive with RAID1 for both data and metadata and then copy everything over. Then later, when you have more money, buy the second 2TB drive.
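
As a minimal sketch of that end state, once the second 2TB drive is there (device names are placeholders):

mkfs.btrfs -m raid1 -d raid1 /dev/sdb /dev/sdc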

PS: using RAID1 on a single drive means the data will be stored in two locations on that one drive (to protect against corruption) and will halve your usable space (it's a really, really good idea for the metadata).

PPS: seriously, don't be tempted to skip RAID1 for the metadata.

PPPS: there is a very good chance that btrfs will gain the ability to change RAID levels dynamically.
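
That ability is exposed through the balance "convert" filters (the same mechanism Solution 2 below uses to convert to the single profile); a sketch, assuming the filesystem is mounted at /mnt:

btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt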

Solution 2:

It depends on what profile you use for the data blocks of the multi-device Btrfs filesystem.

  • When you use RAID0 (the default for data blocks), each disk can only be filled up to the capacity of the smallest disk in the array.

  • When you use the "single" profile for the data blocks, each disk will be filled up to its full capacity, e.g. mkfs.btrfs -d single /dev/sda /dev/sdb

I have a file server with a 2TB and a 3TB disk. It boots Ubuntu 12.10 from a USB flash drive. First I created the Btrfs filesystem without the -d single option:

mkfs.btrfs /dev/sda /dev/sdb

The result was that I could only store about 4TB (3.45 TiB of file data).

# btrfs fi show
Label: none  uuid: 3a63a407-dd3c-46b6-8902-ede4b2b79465
 Total devices 2 FS bytes used 3.22TB
 devid    2 size 2.73TB used 1.82TB path /dev/sdb
 devid    1 size 1.82TB used 1.82TB path /dev/sda
# btrfs fi df /mnt/btrfs1/
Data, RAID0: total=3.45TB, used=3.22TB
Data: total=8.00MB, used=0.00
System, RAID1: total=8.00MB, used=264.00KB
System: total=4.00MB, used=0.00
Metadata, RAID1: total=94.00GB, used=4.29GB
Metadata: total=8.00MB, used=0.00
# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb        4.6T  3.3T  241G  94% /mnt/btrfs1

Note the used 1.82TB for the 3TB drive: with RAID0, data chunks are striped across both disks, so allocation stops once the smaller 2TB disk is full.

Then I used the "balance" command to convert the data blocks from RAID0 to the "single" profile:

btrfs balance start -dconvert=single /mnt/btrfs1

It took a very long time (about 30 hours) to balance the 4TB of data. But after it completed, I could use the full 5TB (4.36 TiB of file data).

# btrfs fi show
Label: none  uuid: 3a63a407-dd3c-46b6-8902-ede4b2b79465
 Total devices 2 FS bytes used 4.34TB
 devid    2 size 2.73TB used 2.73TB path /dev/sdb
 devid    1 size 1.82TB used 1.82TB path /dev/sda
# btrfs fi df /mnt/btrfs1/
Data: total=4.36TB, used=4.34TB
System, RAID1: total=40.00MB, used=500.00KB
System: total=4.00MB, used=0.00
Metadata, RAID1: total=94.00GB, used=4.01GB
Metadata: total=8.00MB, used=0.00
# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb        4.6T  4.4T   27G 100% /mnt/btrfs1
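
If you run a similar conversion, the long-running balance can be checked from another shell (provided your btrfs-progs has the balance subcommands); the mount point here is the one from this example:

btrfs balance status /mnt/btrfs1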

Solution 3:

I have used multiple devices with btrfs in Ubuntu, and it has worked just fine. Keep in mind that btrfs does not actually implement standard RAID levels. It implements optional striping and mirroring, but not true RAID.

Solution 4:

It is possible to combine drives of different sizes in btrfs.
But currently btrfs does not handle ENOSPC (No space left on device) very well.

E.g. I installed 3 drives in a RAID0 (striped) array: 1x500GB, 1x250GB and 1x160GB.
You would assume that would give you somewhere between 800 and 900GB of usable disk space.
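
For reference, an array like that would typically be created with something along these lines (the device names are placeholders):

mkfs.btrfs -d raid0 /dev/sdd /dev/sde /dev/sdf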

This is what df -h shows:
/dev/sdf 848G 615G 234G 73% /media/btrfs

But I'm not able to store any more data on the array. (No space left)

btrfs filesystem df /media/btrfs shows me this:
Data: total=612.51GB, used=612.51GB
Metadata: total=1.62GB, used=990.73MB
System: total=12.00MB, used=48.00KB

Even rebalancing didn't help.

On a mailing list I saw this calculation:
size of smallest drive * number of drives in the array
(although I have some more space: 612GB instead of 160GB*3=480GB)

So in the current state of development, chances are that you will not be able to use all the space you have, even though btrfs does support different sizes in one array.

I'm using Ubuntu 10.10 with the 2.6.35-22-generic kernel.