BTRFS: deleting a volume

A week ago, I created a BTRFS pool using two flash drives (32GB each) with this command: /sbin/mkfs.btrfs -d single /dev/sda /dev/sdb. Then I realized that I should have used the partitions /dev/sda1 and /dev/sdb1, instead of the disks /dev/sda and /dev/sdb, so I recreated the volumes using /dev/sd[ab]1.

My problem is that now I have two volumes:

$ sudo btrfs fi show
Label: none  uuid: ba0b48ce-c729-4793-bd99-90764888851f
        Total devices 2 FS bytes used 28.00KB
        devid    2 size 29.28GB used 1.01GB path /dev/sdb1
        devid    1 size 28.98GB used 1.03GB path /dev/sda1

Label: none  uuid: 17020004-8832-42fe-8243-c145879a3d6a
        Total devices 2 FS bytes used 288.00KB
        devid    1 size 29.28GB used 1.03GB path /dev/sdb
        devid    2 size 28.98GB used 1.01GB path /dev/sda

I've tried different options to delete the second volume (uuid ending in c145879a3d6a), e.g. btrfs device delete. I then ran mkfs.btrfs, unmounted the devices, and even used fdisk to repartition, hoping to recreate the whole array from scratch, but no matter what I do, btrfs fi show still lists both volumes. How can I completely remove these volumes from my system and start over? Nothing I try removes them, e.g.:

$ sudo btrfs device delete /dev/sda /media/flashdrive/
ERROR: error removing the device '/dev/sda' - Inappropriate ioctl for device

I'm running kernel 3.12.21 with btrfs-progs v0.19.


I've run into similar issues myself using BTRFS.

First things first -- btrfs ("butter") doesn't need to be in a partition, so unless there was some unmentioned reason you wanted it on /dev/sda1 and /dev/sdb1, you did exactly what I did and hit exactly the same problem.

After digging around for a clean way to fix it, wipefs looks like your best option -- newer versions can reportedly remove all traces. When I ran into this, though, I ended up just using dd to write zeros over the entire device, something like:

dd if=/dev/zero of=/dev/sdX bs=4M

It's the 9000-pound gorilla of solutions, but it will put your thumb drives back into a fresh state.
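If zeroing the whole device is too slow, zeroing just the primary superblock is usually enough to make blkid and mkfs stop recognizing the old filesystem, since btrfs keeps its superblock 64 KiB into the device (magic string "_BHRfS_M" at offset 0x40 within it). A sketch of that, rehearsed on a scratch image file rather than a real /dev/sdX so nothing is harmed; scratch.img is just a stand-in name:

```shell
# Sketch: kill the primary btrfs superblock (64 KiB into the device)
# instead of zeroing the whole drive. Demonstrated on a scratch image
# file -- substitute your real device (carefully!) for scratch.img.
truncate -s 8M scratch.img
# Plant the btrfs magic ("_BHRfS_M" at superblock offset 0x40) to
# simulate a device that previously held btrfs:
printf '_BHRfS_M' | dd of=scratch.img bs=1 seek=$((64*1024 + 64)) conv=notrunc 2>/dev/null
# Zero the 4 KiB block holding the superblock (64 KiB = 16 * 4 KiB):
dd if=/dev/zero of=scratch.img bs=4K seek=16 count=1 conv=notrunc 2>/dev/null
```

Note that btrfs also keeps backup superblock copies (at 64 MiB and 256 GiB on devices large enough to hold them), so recovery tools can still find those; for a truly clean slate, wipefs --all or full zeroing is the safer bet.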

SSD warning: this can be harmful to the performance of an SSD (depending on the manufacturer) and should really only be done on thumb drives. See this question, which offers some alternatives (blkdiscard) that may be faster/safer/better for SSDs. This question also has good answers on achieving the equivalent without zeroing (the ATA secure-erase feature).
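For flash media, the blkdiscard route mentioned above is much faster than dd, because it asks the device's controller to discard every block instead of physically writing zeros. A sketch, assuming a throwaway device at the placeholder /dev/sdX:

```shell
# blkdiscard (util-linux) asks the device to discard all blocks --
# near-instant on most SSDs/flash drives, and gentler on the flash
# than writing zeros. /dev/sdX is a placeholder: triple-check it!
sudo blkdiscard /dev/sdX

# Verify nothing is detected any more; without --all, wipefs only
# *lists* signatures, it does not erase anything:
sudo wipefs /dev/sdX
```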


sudo wipefs --all -t btrfs /dev/sda /dev/sdb

worked for me. I had to add --all to get sudo btrfs fi show to come up empty.

-a, --all

Erase all available signatures. The set of erased signatures can be restricted with the -t option.
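wipefs also operates on plain files, so you can rehearse what --all does on a scratch image before pointing it at a real disk. In this sketch, demo.img is an illustrative file into which we plant the btrfs magic string at the offset where blkid looks for it (64 KiB + 0x40):

```shell
# Rehearse wipefs on a scratch image instead of a real disk.
truncate -s 8M demo.img
# Plant the btrfs magic string where blkid expects it (64 KiB + 0x40):
printf '_BHRfS_M' | dd of=demo.img bs=1 seek=65600 conv=notrunc 2>/dev/null
wipefs demo.img        # read-only: lists the btrfs signature
wipefs --all demo.img  # erases every detected signature
wipefs demo.img        # prints nothing -- the signature is gone
```

Only once the read-only listing comes up empty on the real devices should you expect btrfs fi show to stop reporting the stale volume.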

Array/Btrfs was created with sudo mkfs.btrfs --label btrfs_6TB_RAID1 --metadata raid1 --data raid1 /dev/sda /dev/sdb --force

See the wipefs documentation.