How bad is it really to install Linux on one big partition?

We will be running CentOS 7 on our new server. We have 6 x 300GB drives in raid6 internal to the server. (Storage is largely external in the form of a 40TB raid box.) The internal volume comes to about 1.3TB if formatted as a single volume. Our sysadmin thinks it is a really bad idea to install the OS on one big 1.3TB partition.

I am a biologist. We constantly install new software to run and test, most of which lands in /usr/local. However, because we have about 12 non-computer-savvy biologists using the system, we also collect a lot of cruft in /home. Our last server had a 200GB partition for /, and after 2.5 years it was 90% full. I don't want that to happen again, but I also don't want to go against expert advice!

How can we best use the 1.3TB available to make sure that space is available when and where it's needed but not create a maintenance nightmare for the sysadmin??


Solution 1:

The primary (historical) reasons for partitioning are:

  • to separate the operating system from your user and application data. Until the release of RHEL 7 there was no supported upgrade path: a major version upgrade required a re-install. Having, for instance, /home and other (application) data on separate partitions (or LVM volumes) lets you easily preserve the user and application data while wiping the OS partition(s).

  • Users can't log in properly and your system starts to fail in interesting ways when you completely run out of disk space. Multiple partitions allow you to assign hard reserved disk space for the OS and keep that separate from the areas where users and/or specific applications are allowed to write (e.g. /home, /tmp, /var/tmp, /var/spool, /oradata, etc.), mitigating the operational risk of badly behaved users and/or applications.

  • Quota. Disk quotas allow the administrator to prevent an individual user from using up all available space and disrupting service to the other users of the system. Disk quotas are assigned per file-system, so a single partition, and thus a single file-system, means only one set of quotas. Multiple (LVM) partitions mean multiple file-systems, allowing for more granular quota management. Depending on your usage scenario you may, for instance, want to allow each user 10 GB in their home directory, 2 TB in the /data directory on the external storage array, and set up a large shared scratch area where anyone can dump datasets too large for their home directory, with the policy being "full is full" (but when that happens nothing else breaks). A minimal quota sketch follows after this list.

  • Providing dedicated IO paths. You may have a combination of SSDs and spinning disks and would do well to address them differently. Not so much an issue in a general-purpose server, but it is quite common in database setups to assign certain spindles (disks) to different purposes to prevent IO contention, e.g. a separate disk for the transaction logs, separate disks for the actual database data, and separate disks for temp space.

  • Boot. You may need a separate /boot partition. Historically this addressed BIOS problems with booting beyond the 1024-cylinder limit; nowadays it is more often a requirement to support encrypted volumes, certain RAID controllers, HBAs that don't support booting from SAN, or file-systems not immediately supported by the installer, etc.

  • Tuning. You may need different tuning options or even completely different file-systems.
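
As promised in the quota point above, here is a minimal sketch of per-file-system user quotas. It assumes an ext4 /home that already has the usrquota option in /etc/fstab; the user name and limits below are examples only:

mount -o remount /home                          # pick up the usrquota option from /etc/fstab
quotacheck -cum /home                           # create/refresh the quota files
quotaon /home                                   # enable quota enforcement
setquota -u alice 10485760 11534336 0 0 /home   # ~10 GB soft / ~11 GB hard limit (1K blocks)
repquota /home                                  # report per-user usage on this file-system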

If you use hard partitions, you more or less have to get it right at install time. In that case a single large partition isn't the worst choice, but it does come with some of the restrictions above.

Typically I recommend partitioning your main volume as a single large Linux LVM physical volume, then creating logical volumes that fit your current needs and leaving the remainder of your disk space unassigned until needed.
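
As a sketch of that layout (the device and volume-group names here are assumptions, and the sizes are only examples): turn the internal RAID volume into a physical volume, create a volume group on it, and only carve out what you need now.

pvcreate /dev/md0                     # the internal RAID6 volume (name assumed)
vgcreate vg_sys /dev/md0              # one volume group spanning it
lvcreate -n root -L 50G vg_sys        # modest OS volume
lvcreate -n home -L 200G vg_sys       # user data
lvcreate -n usr_local -L 100G vg_sys  # locally installed software
# the rest of the ~1.3TB stays unallocated in vg_sys until it is needed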

You can then expand those volumes and their file-systems as needed (a trivial operation that can be done on a live system), or create additional ones.
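
For example, growing /home by 100 GB on a live system is a one-liner (volume names as assumed above; the -r/--resizefs flag also grows the file-system, which works online for ext4 and XFS):

lvextend -r -L +100G /dev/vg_sys/home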

Shrinking LVM volumes is trivial, but shrinking the file-systems on them is often not supported very well and should probably be avoided.

Solution 2:

The idea behind using multiple partitions is that a full one in the wrong place will not cause the whole system to behave unexpectedly.

Consider a process on the machine filling up a logfile very fast, to the point that there is no free space left. On a single-partition machine this could then, for example, prevent the system from writing new data to /tmp. Another process that wants to write to /tmp would then probably exit with an error, causing unexpected behaviour.

This can be prevented by using separate partitions for the places where users or processes normally write (/home, /var, /tmp).
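
A hypothetical /etc/fstab excerpt for such a layout (the logical-volume names are assumptions) would look something like:

/dev/vg_sys/root   /       xfs   defaults               0 0
/dev/vg_sys/home   /home   xfs   defaults               0 0
/dev/vg_sys/var    /var    xfs   defaults               0 0
/dev/vg_sys/tmp    /tmp    xfs   defaults,nodev,nosuid  0 0

With this in place a runaway logfile can only fill /var, while /tmp and / keep working.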

I would recommend checking on your old server which directories tend to get big. You can do that on the command line with:

du -h -d 1 / 2> /dev/null

You'll see where most of the data accumulates and can design your next system accordingly. The "-d 1" limits the output to one level of directory depth, making it more readable.
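
If the output is long, piping it through sort makes the biggest directories easy to spot (sort -h understands the human-readable sizes that du -h prints):

du -h -d 1 / 2> /dev/null | sort -h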

Solution 3:

The main problem with having a single large partition is that if the filesystem fills up, it may no longer be possible to log in at all.

This is why the root user has its home directory (/root) outside of /home. If the filesystem is filled up, under some circumstances even root cannot log in and therefore cannot repair the system.

This is why you normally create separate mount points for /var, /tmp and /home: so that you can at least log in as root and repair the system when one of the other partitions fills up.
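
When that situation does occur, the quickest way to see which mount point is actually full (either out of space or out of inodes) is:

df -h     # space usage per mounted file-system
df -i     # inode usage per mounted file-system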