Are ZFS pools or LVM volume groups more reliable for utilizing many partitions?
When configuring new file servers, what needs to be considered when deciding between LVM volume groups and ZFS pools? Is there a "better" choice for a multi-purpose file server? Unlike this previous question, I don't want to layer the two technologies.
Scenario:
- RHEL / CentOS 6 x64 servers
- many available, identical DAS and SAN LUNs
Choice:
I am personally quite familiar with LVM, so am comfortable using it if it is the better option. However, ZFS looks pretty promising, and learning new technology is always good.
Given that we want to share out a fairly large store (multiple TB) to different departments, and that they need to access it over both CIFS and NFS, should we use ZFS or LVM as the underlying pool manager?
I know that using a product like FreeNAS is possible, but for a variety of reasons, I need to be able to roll out onto "typical" Linux servers.
I use both, but prefer ZFS. ZFS on Linux has been very good to me, but isn't the "fix all" for every situation.
A typical server will look like this:
(Remember, I usually use hardware RAID and mostly use ZFS as a flexible volume manager)
- Hardware RAID with logical volumes comprised of the underlying disks. The array is carved into a small OS volume (presented as a block device), which is then partitioned (/, /usr, /var, and so on).
- The remaining space will present another block device to be used as a ZFS zpool.
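A minimal sketch of that layout, assuming the second logical drive appears under the WWN shown in the pool listing below (the exact device path and the property values are illustrative, not prescriptive):

```shell
# Create a single-device zpool on the hardware RAID logical drive.
# Prefer the persistent /dev/disk/by-id name over /dev/sdX.
zpool create -o ashift=12 vol1 \
    /dev/disk/by-id/wwn-0x600508b1001c4d9ea960806c1becebeb

# Pool-wide defaults, inherited by all child filesystems.
zfs set compression=lz4 vol1
zfs set atime=off vol1
```

Redundancy is handled by the RAID controller here, which is why the pool is a single vdev rather than a ZFS mirror or raidz.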
Smart Array P420i in Slot 0 (Embedded)    (sn: 001438029619AC0)

   array A (SAS, Unused Space: 1238353 MB)

      logicaldrive 1 (72.0 GB, RAID 1+0, OK)
      logicaldrive 2 (800.0 GB, RAID 1+0, OK)

      physicaldrive 1I:2:1 (port 1I:box 2:bay 1, SAS, 900.1 GB, OK)
      physicaldrive 1I:2:2 (port 1I:box 2:bay 2, SAS, 900.1 GB, OK)
      physicaldrive 1I:2:3 (port 1I:box 2:bay 3, SAS, 900.1 GB, OK)
      physicaldrive 1I:2:4 (port 1I:box 2:bay 4, SAS, 900.1 GB, OK)
I then take the zpool and create the additional ZFS filesystems (or mountpoints) and zvols as necessary.
# zpool list -v vol1
NAME                                       SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
vol1                                       796G   245G   551G    30%  1.00x  ONLINE  -
  wwn-0x600508b1001c4d9ea960806c1becebeb   796G   245G   551G      -
And the filesystems...
# zfs list
NAME           USED  AVAIL  REFER  MOUNTPOINT
vol1           245G   539G   136K  /vol1
vol1/images    245G   539G   245G  /images
vol1/mdmarra   100G   539G   100G  /growlr_pix
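For reference, a layout like the one above could be created with something along these lines (dataset names are taken from the listing; the quota value matches what the listing shows, and the NFS export is just one option):

```shell
# Filesystems with custom mountpoints, carved from the pool.
zfs create -o mountpoint=/images vol1/images
zfs create -o mountpoint=/growlr_pix vol1/mdmarra

# Per-dataset attributes: a quota and an NFS export, no fstab needed.
zfs set quota=100G vol1/mdmarra
zfs set sharenfs=on vol1/images
```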
So, using ZFS for the data partitions is extremely nice because it lets you address a single pool of storage, set quotas, and manage attributes at per-mountpoint granularity. LVM still requires separate filesystem tools for those tasks and is a bit more rigid.
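For contrast, the equivalent per-share workflow under LVM needs a separate filesystem tool at each step (the volume group name `vg0` and the sizes here are assumptions for illustration):

```shell
# LVM: each share is a logical volume plus a separately managed filesystem.
lvcreate -L 100G -n mdmarra vg0
mkfs.ext4 /dev/vg0/mdmarra
mkdir -p /growlr_pix
mount /dev/vg0/mdmarra /growlr_pix   # plus a matching /etc/fstab entry

# Growing the share later is a two-step operation:
lvextend -L +50G /dev/vg0/mdmarra
resize2fs /dev/vg0/mdmarra
```

With ZFS, the equivalent of all of the above is a single `zfs create` or `zfs set quota=...`, which is what makes it feel less rigid as a volume manager.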