How can I shrink a zfs volume on ubuntu 18.04?
Solution 1:
Actually, it is possible, because ZFS can remove a top-level vdev from a pool if the remaining space permits. So the trick is to:
1. add a temporary disk/file vdev to the pool that is smaller, but still large enough to hold all existing data (including snapshots, etc.)
2. remove the old vdev
3. re-partition the old vdev to a smaller size, or replace it with a smaller disk
4. add it back
5. remove the temporary disk (steps 3-5 are sketched at the end of the walkthrough below).
All files should remain intact.
To illustrate the steps:
- Allocate the files
# fallocate -l 3G 3G_1.img
# fallocate -l 3G 3G_2.img
# fallocate -l 2G 2G_1.img
# fallocate -l 2G 2G_2.img
- Create a zpool as a mirror of the two 3G files (my directory is /var/snap/lxd/common/lxd/disks):
# zpool create test3g mirror /var/snap/lxd/common/lxd/disks/3G_1.img /var/snap/lxd/common/lxd/disks/3G_2.img
# zpool status test3g
  pool: test3g
 state: ONLINE
  scan: none requested
config:

        NAME                                         STATE     READ WRITE CKSUM
        test3g                                       ONLINE       0     0     0
          mirror-0                                   ONLINE       0     0     0
            /var/snap/lxd/common/lxd/disks/3G_1.img  ONLINE       0     0     0
            /var/snap/lxd/common/lxd/disks/3G_2.img  ONLINE       0     0     0

errors: No known data errors
# zpool list test3g
NAME     SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
test3g  2.75G   111K  2.75G        -         -     0%     0%  1.00x  ONLINE  -
You can clearly see the test3g pool has 2.75G usable, and it is mirrored.
- Let's create a dummy file inside to simulate your existing data, so you can verify it is intact after the exercise.
# echo test > /test3g/test.txt
# cat /test3g/test.txt
test
#
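- For real data, you might also record checksums before the migration so you can verify integrity afterwards. A minimal sketch (the dummy.bin file and the checksum path are just examples, not part of the original exercise):
# dd if=/dev/urandom of=/test3g/dummy.bin bs=1M count=100
# sha256sum /test3g/test.txt /test3g/dummy.bin > /root/test3g.sha256
After the migration, verify with:
# sha256sum -c /root/test3g.sha256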
- Now add another mirror of a smaller size (2G). (Note this is zpool add, which creates a new top-level vdev, not zpool attach.)
# zpool add test3g mirror /var/snap/lxd/common/lxd/disks/2G_1.img /var/snap/lxd/common/lxd/disks/2G_2.img
# zpool status test3g
  pool: test3g
 state: ONLINE
  scan: none requested
config:

        NAME                                         STATE     READ WRITE CKSUM
        test3g                                       ONLINE       0     0     0
          mirror-0                                   ONLINE       0     0     0
            /var/snap/lxd/common/lxd/disks/3G_1.img  ONLINE       0     0     0
            /var/snap/lxd/common/lxd/disks/3G_2.img  ONLINE       0     0     0
          mirror-1                                   ONLINE       0     0     0
            /var/snap/lxd/common/lxd/disks/2G_1.img  ONLINE       0     0     0
            /var/snap/lxd/common/lxd/disks/2G_2.img  ONLINE       0     0     0

errors: No known data errors
# zpool list test3g
NAME     SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
test3g  4.62G   156K  4.62G        -         -     0%     0%  1.00x  ONLINE  -
Now we have a stripe of two mirrors with 4.62G usable.
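Before removing the old vdev, it is worth confirming that the pool's used space actually fits on the new, smaller vdev; if it does not, the removal will fail for lack of space. A quick check (the column selection is just one way to do it):
# zfs list -o name,used,avail test3g
USED must be comfortably below the size of the new vdev (2G here), keeping in mind that snapshots count too.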
- Let's remove the previous 3G mirror (mirror-0).
# zpool remove test3g mirror-0
# zpool status test3g
  pool: test3g
 state: ONLINE
  scan: none requested
remove: Removal of vdev 0 copied 102K in 0h0m, completed on Fri May 21 15:33:19 2021
        72 memory used for removed device mappings
config:

        NAME                                         STATE     READ WRITE CKSUM
        test3g                                       ONLINE       0     0     0
          mirror-1                                   ONLINE       0     0     0
            /var/snap/lxd/common/lxd/disks/2G_1.img  ONLINE       0     0     0
            /var/snap/lxd/common/lxd/disks/2G_2.img  ONLINE       0     0     0

errors: No known data errors
# zpool list test3g
NAME     SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
test3g  1.88G   142K  1.87G        -         -     0%     0%  1.00x  ONLINE  -
The pool size is down to 1.88G. Note that the removal may take a long time depending on your data size, but it will complete successfully. Check the message above:
Removal of vdev 0 copied 102K in 0h0m, completed on Fri May 21 15:33:19 2021
You have effectively reduced the zpool size from 3G to 2G, with no downtime and no data loss.
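- On a pool with real data, you can watch the copy while it runs, and even cancel it:
# zpool status test3g
shows the evacuation progress while data is being copied, and
# zpool remove -s test3g
cancels a removal that has not yet completed.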
- Let's verify the data is still there
# cat /test3g/test.txt
test
#
Yes of course.
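- The walkthrough above covers steps 1 and 2. A sketch of the remaining steps 3-5, reusing the same paths (untested; since these vdevs are file-backed, the files are simply recreated at 2G instead of re-partitioning a disk):
# rm /var/snap/lxd/common/lxd/disks/3G_1.img /var/snap/lxd/common/lxd/disks/3G_2.img
# fallocate -l 2G /var/snap/lxd/common/lxd/disks/3G_1.img
# fallocate -l 2G /var/snap/lxd/common/lxd/disks/3G_2.img
# zpool add test3g mirror /var/snap/lxd/common/lxd/disks/3G_1.img /var/snap/lxd/common/lxd/disks/3G_2.img
# zpool remove test3g mirror-1
This moves the data back onto the (now 2G) original files and removes the temporary mirror, completing the shrink.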
Note that only recent versions of ZFS support this: top-level vdev removal was added in ZFS on Linux 0.8, while Ubuntu 18.04 ships 0.7.x by default, so you may need a newer ZFS. Your mileage may vary.
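To check whether your ZFS supports device removal (the pool name here is the example pool from above):
# zfs version
(this subcommand only exists on 0.8 and later; "command not found" suggests you are still on 0.7.x)
# zpool get feature@device_removal test3g
(should report "enabled" or "active")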
Solution 2:
Once you add a disk to a ZFS pool, by default ZFS allocates the whole disk and formats it with an EFI (GPT) label containing a single, large slice. This is the recommended way.
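As an illustration, on a Linux system you can see the layout ZFS created on a whole-disk vdev (sdb is just an example device name):
# lsblk -o NAME,SIZE,FSTYPE /dev/sdb
Here sdb1 would show as a zfs_member partition covering nearly the whole disk, with sdb9 as a small (8 MiB) reserved partition.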
It is not possible to shrink the volume if you have allocated the whole disk. You could technically force it to shrink with gparted or some other partitioning tool, but that will corrupt the ZFS pool and you will lose your data, so it is not recommended at all.
You can reduce the size of a particular ZFS pool and create a new pool from the freed space. The only way to free up space is to take a disk offline (this assumes you have multiple disks in the ZFS pool), re-partition it so that some of the space goes to your desired partition and the rest to a ZFS partition (for example c150d0), and then add the ZFS partition back to the pool. This is not the recommended way, but it can be counted as a workaround.
Do it at your own risk.