ZFS on Linux: Which mountpoint option when mounting manually per script?
I want to create a zpool with ZFS on Linux (0.7.13) on Debian Buster. The problem is that the pool will be created on top of LUKS-encrypted drives (not the root drive, only external ones). These drives are decrypted and activated during boot by a script I created, since they pull a key file from an external source.
To avoid timing issues where the system tries to mount the zpool before the drives are decrypted and available, I would like to also mount the pool manually as part of the script.
Now I am wondering which mountpoint option I should choose when creating the pool: none or legacy.
The man page does not really explain what the actual difference is:
If a file system's mount point is set to legacy, ZFS makes no attempt to manage the file system, and the administrator is responsible for mounting and unmounting the file system. Because pools must be imported before a legacy mount can succeed, administrators should ensure that legacy mounts are only attempted after the zpool import process finishes at boot time. For example, on machines using systemd, the mount option […]
Does anybody know the real difference, and what the best way is to mount the pool manually later from a script?
I've been using ZFS pools on LUKS encrypted volumes for the better part of a decade. It works fine.
There is no reason to mount the pool manually to attempt to defeat nonexistent timing problems. Just create your pool normally and enjoy.
To avoid any problems in the future when creating and managing the pool, use the names beginning with luks- in the /dev/mapper directory to refer to the devices. For example, the devices:
lrwxrwxrwx. 1 root root 10 Jul 26 22:22 luks-ata-WDC_WD2000FYYZ-01UL1B1_WD-WCC1P1046856 -> ../../dm-5
lrwxrwxrwx. 1 root root 10 Jul 26 22:22 luks-ata-WDC_WD2000FYYZ-01UL1B1_WD-WCC1P1145175 -> ../../dm-4
lrwxrwxrwx. 1 root root 10 Jul 26 22:22 luks-ata-WDC_WD2000FYYZ-01UL1B1_WD-WCC1P1165144 -> ../../dm-2
lrwxrwxrwx. 1 root root 10 Jul 26 22:22 luks-ata-WDC_WD2000FYYZ-01UL1B1_WD-WMC1P0DHH53R -> ../../dm-3
correspond to:
pool: srv
state: ONLINE
scan: scrub repaired 0B in 0h42m with 0 errors on Tue Jul 30 14:42:04 2019
config:
NAME                                                 STATE     READ WRITE CKSUM
srv                                                  ONLINE       0     0     0
  mirror-0                                           ONLINE       0     0     0
    luks-ata-WDC_WD2000FYYZ-01UL1B1_WD-WCC1P1046856  ONLINE       0     0     0
    luks-ata-WDC_WD2000FYYZ-01UL1B1_WD-WCC1P1145175  ONLINE       0     0     0
  mirror-1                                           ONLINE       0     0     0
    luks-ata-WDC_WD2000FYYZ-01UL1B1_WD-WCC1P1165144  ONLINE       0     0     0
    luks-ata-WDC_WD2000FYYZ-01UL1B1_WD-WMC1P0DHH53R  ONLINE       0     0     0
This pool was created with:
zpool create -o ashift=12 srv \
mirror \
luks-ata-WDC_WD2000FYYZ-01UL1B1_WD-WCC1P1046856 \
luks-ata-WDC_WD2000FYYZ-01UL1B1_WD-WCC1P1145175 \
mirror \
luks-ata-WDC_WD2000FYYZ-01UL1B1_WD-WCC1P1165144 \
luks-ata-WDC_WD2000FYYZ-01UL1B1_WD-WMC1P0DHH53R
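Since the drives are unlocked by a script that fetches a key file, a minimal sketch of opening such mappings and then importing the pool could look like the following (the key file path is an assumption, and the device IDs and pool name are the ones from my setup above, not from the questioner's machine):

#!/bin/sh
# Hypothetical unlock script: open each LUKS container under a stable
# luks-<disk-id> name, then import the pool from /dev/mapper.
KEYFILE=/run/keys/zfs.key   # assumed location of the fetched key file

for id in \
    ata-WDC_WD2000FYYZ-01UL1B1_WD-WCC1P1046856 \
    ata-WDC_WD2000FYYZ-01UL1B1_WD-WCC1P1145175 \
    ata-WDC_WD2000FYYZ-01UL1B1_WD-WCC1P1165144 \
    ata-WDC_WD2000FYYZ-01UL1B1_WD-WMC1P0DHH53R
do
    cryptsetup open "/dev/disk/by-id/$id" "luks-$id" --key-file "$KEYFILE"
done

# Import the pool once all mappings exist; ZFS then mounts the
# datasets at their configured mountpoints.
zpool import -d /dev/mapper srv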
I use ZFS on LUKS with a portable drive. I have never had problems with the standard mountpoints. Mounting doesn't happen until after the pool is imported, and the import doesn't happen until after the LUKS volume is unlocked. So I think you may be overthinking this.
Anyway, if you really want to, go with legacy for manual mounting.
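For reference, a legacy mountpoint simply hands mounting over to the normal mount tools; a minimal sketch (the pool name srv and the mount path /srv are just examples) would be:

# Tell ZFS not to manage mounting for this dataset itself.
zfs set mountpoint=legacy srv

# Importing no longer mounts anything for legacy datasets, so mount
# it explicitly, e.g. from the unlock script or via /etc/fstab.
zpool import srv
mount -t zfs srv /srv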
You don't need to use none or legacy as the mountpoint, even if there is some delay with the LUKS drives and you decrypt them with a script, as long as you add zpool import <poolname> after the drives have been unlocked. This is necessary because, at the time ZFS tries to import its pools during boot, the devices may not be available yet, which shows up as a message in syslog.
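A minimal sketch of that last step, appended to the end of the decryption script (the pool name srv is just an example), could be:

# Once the LUKS mappings are available, import the pool if it is not
# imported yet; the import also mounts the datasets automatically.
if ! zpool list srv >/dev/null 2>&1; then
    zpool import srv
fi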