ZFS Equivalent of lvdisplay snap_percent
Solution 1:
ZFS snapshot space is reflected in the filesystem's space consumption, so you can derive what you're asking for by monitoring the most appropriate of the properties shown below.
Ultimately, you'll be watching your filesystem's "avail" space. Notice how "used" + "avail" adds up to less than "size":
root@deore:~# df -h /volumes/vol1/LA_Specialty
Filesystem         size  used  avail  capacity  Mounted on
vol1/LA_Specialty  800G  391G   254G       61%  /volumes/vol1/LA_Specialty
I've filtered the output of zfs get all pool/filesystem below to show the relevant properties. This is an 800GB filesystem (the quota) with 545GB used: 391GB is referenced, meaning that's the size of the live data, and 154GB is used by snapshots. That snapshot space is exactly the gap in the df output above (800G - 391G used - 254G avail is roughly 155G).
root@deore:/volumes# zfs get all vol1/LA_Specialty
NAME               PROPERTY              VALUE                       SOURCE
vol1/LA_Specialty  type                  filesystem                  -
vol1/LA_Specialty  creation              Sat Sep 24 18:44 2011       -
vol1/LA_Specialty  used                  545G                        -
vol1/LA_Specialty  available             255G                        -
vol1/LA_Specialty  referenced            391G                        -
vol1/LA_Specialty  compressratio         2.96x                       -
vol1/LA_Specialty  quota                 800G                        local
vol1/LA_Specialty  reservation           none                        default
vol1/LA_Specialty  recordsize            16K                         local
vol1/LA_Specialty  mountpoint            /volumes/vol1/LA_Specialty  inherited from vol1
vol1/LA_Specialty  usedbysnapshots       154G                        -
vol1/LA_Specialty  usedbydataset         391G                        -
vol1/LA_Specialty  usedbychildren        0                           -
vol1/LA_Specialty  usedbyrefreservation  0                           -
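If you want a single number comparable to lvdisplay's snap_percent, you can derive one from the used and usedbysnapshots properties above. A minimal sketch, assuming your zfs get supports -H (no headers) and -p (exact byte values); substitute your own dataset name:

# fraction of this dataset's total "used" space that is held by snapshots
snap=$(zfs get -Hp -o value usedbysnapshots vol1/LA_Specialty)
used=$(zfs get -Hp -o value used vol1/LA_Specialty)
echo "snapshots hold $(( snap * 100 / used ))% of used space"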
Then, looking at the snapshots, you can see each snapshot's individual (unique) size and the total amount of data it references.
root@deore:/volumes# zfs list -t snapshot
NAME                                               USED  AVAIL  REFER  MOUNTPOINT
vol1/LA_Specialty@snap-daily-1-2013-09-07-020003  57.6G      -   389G  -
vol1/LA_Specialty@snap-daily-1-2013-09-08-020003  1.95G      -   391G  -
vol1/LA_Specialty@snap-daily-1-2013-09-09-020008  3.42G      -   392G  -
vol1/LA_Specialty@snap-daily-1-2013-09-10-020003  3.05G      -   391G  -
vol1/LA_Specialty@snap-daily-1-2013-09-11-020003  2.81G      -   391G  -
vol1/LA_Specialty@snap-daily-1-2013-09-12-020004  2.65G      -   391G  -
vol1/LA_Specialty@snap-daily-1-2013-09-13-020003  2.70G      -   391G  -
vol1/LA_Specialty@snap-daily-1-2013-09-14-020003    25K      -   391G  -
vol1/LA_Specialty@snap-daily-1-latest                25K      -   391G  -
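If what you're really after is spotting which snapshots hold the most unique (reclaimable) space, you can ask zfs list to sort for you. A small sketch using standard list options; adjust the dataset name for your system:

# snapshots of this dataset only, largest unique space first
zfs list -r -t snapshot -o name,used,refer -S used vol1/LA_Specialty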
And a du listing of the snapshot directory...
root@deore:/volumes/vol1/LA_Specialty/.zfs/snapshot# du -skh *
389G snap-daily-1-2013-09-07-020003
391G snap-daily-1-2013-09-08-020003
392G snap-daily-1-2013-09-09-020008
391G snap-daily-1-2013-09-10-020003
391G snap-daily-1-2013-09-11-020003
391G snap-daily-1-2013-09-12-020004
391G snap-daily-1-2013-09-13-020003
391G snap-daily-1-2013-09-14-020003
391G snap-daily-1-latest
Solution 2:
ZFS snapshots hold a lot of space in ways that aren't obvious from the usual listings. Generally I would refer you to
zfs list -ro space
which shows output similar to:
NAME                           AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
rootpool/export/home           6.37G  11.7G     2.80G   8.87G              0          0
rootpool/export/home@<snap1>       -   134M         -       -              -          -
rootpool/export/home@<snap2>       -   320M         -       -              -          -
rootpool/export/home@<snap3>       -   251M         -       -              -          -
rootpool/export/home@<snap4>       -  1.02M         -       -              -          -
rootpool/export/home@<snap5>       -  1.04M         -       -              -          -
rootpool/export/home@<snap6>       -   850K         -       -              -          -
rootpool/export/home@<snap7>       -   747K         -       -              -          -
rootpool/export/home@<snap8>       -   326K         -       -              -          -
rootpool/export/home@<snap9>       -   454K         -       -              -          -
rootpool/export/home@<snap10>      -   319K         -       -              -          -
This tells you that I am using a TOTAL of 11.7G on this particular dataset, of which 2.80G is used by snapshots and 8.87G by the actual filesystem (active data). However, the USED size next to each snapshot is very misleading.
If you add up all of the numbers in the USED column for the snapshots, you will see that they come nowhere near the USEDSNAP total. That's because the USED value is only the unique space each snapshot holds.
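If you want a snap_percent-style figure for every dataset in a pool at once, here is a rough sketch built on those same properties; it assumes your zfs list supports -H and -p for script-friendly output, and the pool name is just an example:

# percentage of each dataset's USED space that is held by snapshots
zfs list -Hp -r -o name,used,usedbysnapshots rootpool | \
  awk '$2 > 0 { printf "%-40s %5.1f%%\n", $1, 100 * $3 / $2 }'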
For example:
If I have a pool named "newpool" containing two 1G files (fileA and fileB):
NAME           AVAIL  USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
newpool        11.0G  2.0G     0.00G    2.0G              0          0
Now I snap that:
NAME           AVAIL  USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
newpool        11.0G  2.0G     0.00G    2.0G              0          0
newpool@snap1      -  0.0G         -       -              -          -
Now I delete 1 of the 1G files (fileA):
NAME           AVAIL  USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
newpool        11.0G  2.0G     1.00G    1.0G              0          0
newpool@snap1      -  1.0G         -       -              -          -
Now I create a new 1G file (fileC):
NAME           AVAIL  USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
newpool        10.0G  3.0G     1.00G    2.0G              0          0
newpool@snap1      -  1.0G         -       -              -          -
Now I snap it again:
NAME           AVAIL  USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
newpool        10.0G  3.0G     1.00G    2.0G              0          0
newpool@snap1      -  1.0G         -       -              -          -
newpool@snap2      -  0.0G         -       -              -          -
Now I delete fileB (which is in both snapshots):
NAME           AVAIL  USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
newpool        10.0G  3.0G     2.00G    1.0G              0          0
newpool@snap1      -  1.0G         -       -              -          -
newpool@snap2      -  0.0G         -       -              -          -
Notice how the snapshots' USED column did not reflect the change? That's because fileB was referenced by both snapshots, and since it is not unique to either one it does not appear in either snapshot's USED count. The USEDSNAP column reflects that the space is used by snapshots, but it doesn't associate it with any particular one.
Now if you were to remove snap1:
NAME           AVAIL  USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
newpool        11.0G  2.0G     1.00G    1.0G              0          0
newpool@snap2      -  1.0G         -       -              -          -
snap2 now shows that it has 1.0G used because that data is now unique to that snapshot.
The USED column will show you how much space you can reclaim if you delete that individual snapshot, but it doesn't truly show how much space that snapshot is holding.
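If you want to know how much space deleting snapshots would actually free, a dry-run destroy is more trustworthy than adding up USED values. This is only a sketch: -n (no-op) and -v (verbose) are standard destroy flags, while the percent-sign range syntax only exists on newer ZFS releases:

# dry run: report how much space deleting a single snapshot would reclaim
zfs destroy -nv newpool@snap1

# dry run over a range of snapshots (newer ZFS only)
zfs destroy -nv newpool@snap1%snap2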
So now that I have said all of that -
If you are planning on only keeping one snapshot of any particular dataset then the zfs list -ro space command should give you what you are looking for.
If you are going to have multiple snapshots at the same time, this data can be misleading. Don't do what comes naturally and assume the USED column means much when dealing with multiple snapshots. Also, du is a poor choice on the snapshot directories, since it just shows you what data the snapshot references, not what space the snapshot is actually using.
The zfs manpage goes through some of this, but it isn't great at showing those relationships.