Why use LVM? It creates more borders (less freedom)
I have a Linux server running in a VM. The hypervisor is VMware.
This setup was done by a former admin:
server:~ # pvs
  PV         VG     Fmt  Attr PSize   PFree
  /dev/sda2  system lvm2 a--  119,84g    0
server:~ # vgs
  VG     #PV #LV #SN Attr   VSize   VFree
  system   1   3   0 wz--n- 119,84g    0
server:~ # lvs
  LV   VG     Attr      LSize  Pool Origin Data%  Move Log Copy%  Convert
  home system -wi-ao--- 97,84g
  root system -wi-ao--- 20,00g
  swap system -wi-ao---  2,00g
I ask myself: Why?
It is great that you can do a lot of interesting things with LVM. But why?
Why not create one block device/partition/filesystem?
Swapping could be done in a file.
One partition/filesystem would mean fewer block devices, so the directories in the file system have more room to grow.
With one block device and one filesystem, it is less likely that I run out of disk space.
Example: if the files of the "root" LV need more than 20 GByte, and "home" has space left, then everything is still fine.
Here is a simplified ASCII art of the LVM setup:
+--------------------+
|                    |
|     Filesystem     |
|                    |
+--------------------+
|                    |
|   Logical Volume   |
|                    |
+--------------------+
|                    |
|    Volume Group    |
|                    |
+--------------------+
|                    |
|  Physical Volume   |
|                    |
+--------------------+
|                    |
|    Block device    |
|                    |
+--------------------+
Background: this is not a highly available system; a reboot at night is always possible.
Using LVM in general enables several features. Just a few:
- Extending a volume is one step and done online: lvextend --resizefs.
- Partitioning is not required. Without LVM, the common task of resizing the root fs requires downtime and probably editing partition tables from a second system.
- Snapshots are always available, even without such a feature on the storage system or hypervisor.
- Extreme use cases may require a volume backed by more than one disk (LUN). That is rare these days, but you might as well standardize on the more flexible LVM.
(Some of the details are specific to Linux LVM, but LVM in general is implemented on many operating systems; on the UNIX side, AIX and HP-UX predate Linux and have similar LVMs.)
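As a concrete sketch of the first points, using the volume names from the question (the sizes are made up, and these commands need root plus free extents in the VG, so they cannot run outside the real system):

```shell
# Grow home by 10 GiB and resize its filesystem in one online step.
# (Only works with free space in the VG -- here VFree is 0, so the disk
# would first need enlarging, followed by pvresize /dev/sda2.)
lvextend --resizefs --size +10G /dev/system/home

# Take a snapshot of root before a risky change ...
lvcreate --snapshot --size 5G --name root_pre_upgrade /dev/system/root

# ... then either drop it when all went well:
lvremove /dev/system/root_pre_upgrade
# ... or roll the origin back to the snapshot state:
lvconvert --merge /dev/system/root_pre_upgrade
```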
That particular allocation resembles the defaults on distros like Red Hat, reflecting recommendations they have given for a long time:
To store user data separately from system data, create a dedicated partition within a volume group for the /home directory. This will enable you to upgrade or reinstall Red Hat Enterprise Linux without erasing user data files.
Paging space could be in a more convenient file, yes, though that requires support from the filesystem. Swap directly on a block device works regardless of the filesystem, and it is obvious that the space is not available for regular files. Further, you could move this paging-space volume to fast, less durable storage, such as a single local SSD.
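For comparison, a sketch of both approaches (paths and sizes are examples; all of this requires root):

```shell
# Swap in a regular file -- needs filesystem support
# (fine on ext4/xfs; btrfs needs extra steps)
fallocate -l 2G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile

# Swap directly on the LV from the question -- independent of any filesystem
mkswap /dev/system/swap
swapon /dev/system/swap
```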
Some administrators go further and isolate, say, /var on its own volume and /tmp on tmpfs. That way log files and the like will not fill up the space required for software in /usr.
Personally, absent size requirements, I would start /home off smaller, say 20 GB. That leaves 70 GB free in the VG for whatever needs the next lvextend. This, too, has been a suggestion for a long time. I suspect it isn't the auto-partition default because user data tends to grow unbounded, and they want the space available in case capacity planning is never revisited.
Leave Excess Capacity Unallocated, and only assign storage capacity to those partitions you require immediately. You may allocate free space at any time, to meet needs as they occur.
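Allocating from that unassigned capacity later is then a short, online operation, along these lines (the volume name, size, and mount point are hypothetical, and the commands assume free space in the VG and root access):

```shell
vgs system                               # check VFree before allocating
lvcreate --size 20G --name data system   # carve a new volume out of free space
mkfs.xfs /dev/system/data                # any filesystem you like
mount /dev/system/data /srv/data
```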
In many cases a single partition, or no partitioning at all, is not detrimental, especially in a virtualized environment where you can easily increase the primary disk and root file system if and when you run out of space.
Until you can't...
Then you find out the hard way that hard partitioning is a design choice you are stuck with for the lifetime of your server.
(Not completely, but most solutions will require downtime and/or service interruptions.)
If you have only cattle: that is not an issue, your servers are short-lived anyway. If you have more pets than you'd care to admit though...
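For what it's worth, the non-LVM resize in a VM usually looks something like this (device and partition names assumed; growpart comes from the cloud-utils package, and everything needs root):

```shell
# After enlarging the virtual disk in the hypervisor:
growpart /dev/sda 2      # grow the partition in the partition table
resize2fs /dev/sda2      # grow an ext4 filesystem online
# (xfs_growfs / for XFS; a swap partition behind the root partition
#  makes this considerably harder)
```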
The performance overhead you pay for using LVM is negligible, and even if you can't see an immediate need now, LVM will potentially save your bacon in the future by allowing online changes to your storage that would otherwise be either outright impossible or would at least require (significant) downtime.
The answer is: if you cannot identify any operational reason for using LVM, then there is no reason to use it, and in your scenario of a hypervisor and a storage area network it is easy to dismiss. But LVM is not a protocol like SMB or iSCSI, nor a filesystem like ext4 or NTFS; it is not a JBOD array or any kind of RAID, not a type of disk like SSD or SAS, and not a storage provider like a VMware datastore or Ceph. So, why use LVM? To present logical volumes to the OS independently of all that.
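The whole stack from the ASCII art in the question comes down to a handful of commands, e.g. with a hypothetical second disk /dev/sdb (requires root):

```shell
pvcreate /dev/sdb                      # block device  -> physical volume
vgcreate data /dev/sdb                 # physical volume(s) -> volume group
lvcreate --size 50G --name app data    # volume group  -> logical volume
mkfs.ext4 /dev/data/app                # logical volume -> filesystem
```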
Some distros set up their partitioning with LVM/device mapper by default. As previously stated, the overhead of using LVM is small (though not totally zero), and it increases flexibility, so this isn't a bad choice.
With regard to separate filesystems: a separate /home from / is often rationalised as a poor man's quota system. By keeping them separate, filling up /home doesn't render a tool that only needs to create files in /tmp (like sort) unable to do its job. Once again, some partitioning tools (Ubuntu's?) guide you towards this style of setup.
Maybe the previous admin was simply following the defaults? If you have different needs and are already in a position to start again, then of course you can make different choices. However, if the system is OK and not running out of space (20 GByte for root might mean you have over 50% of space left, or might not be nearly enough, depending on the situation), I doubt it's worth the effort to tear it all down and rebuild just to switch.