Sell partitioning to me

Solution 1:

  • Faster fsck. Let's say your system fails for some reason, and when it reboots it needs to run an fsck. With a really large partition that fsck can take forever, and nothing on the system will work until the fsck of the entire system is done. If you partition the system so the root partition is pretty small, you may be able to get the system up and some of the basic services running while you wait for the fsck of the larger volumes to complete (see the fstab sketch after this list).
    • If your system has small drives, or there is only one service on the system, this may not really matter.
    • With journaled filesystems this matters less most of the time, but occasionally even with a journaled filesystem you have to run a full fsck.
  • Improved security, because you can mount a filesystem read-only.
    • For example, nobody should need to write to /usr during normal usage, so why not mount that filesystem read-only? Keeping filesystems read-only when they don't need to be written to will block some script-kiddie attacks, and may keep you from destroying things when you don't mean to.
    • This may make maintaining the system more difficult, since you'll need to remount it read-write when you need to apply updates (see the remount example after this list).
  • Improved performance or functionality for a specific service or usage.
    • Some filesystems are more appropriate for specific services or applications, or they let you configure the filesystem so it operates better in some cases. Maybe you have a filesystem with lots of small files and you need more inodes. Or maybe you need to store a few large files, such as virtual disk images (see the mkfs sketch after this list).
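
As a minimal sketch of the fsck point, assuming hypothetical device names: the sixth field in /etc/fstab controls fsck ordering, so a small root gets checked first, and on systemd systems marking the big data volume nofail lets boot continue even if that volume isn't ready yet.

    # /etc/fstab -- sixth field sets fsck order: 1 for root, 2 for others, 0 skips
    # Device names here are hypothetical.
    /dev/sda1  /     ext4  defaults         0 1
    /dev/sdb1  /srv  ext4  defaults,nofail  0 2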
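
For the read-only /usr idea, here is a sketch of the maintenance workflow, again with a hypothetical device name:

    # /etc/fstab entry mounting /usr read-only:
    /dev/sda3  /usr  ext4  ro,nodev  0 2

    # Temporarily flip it read-write for updates, then back:
    mount -o remount,rw /usr
    # ... apply updates ...
    mount -o remount,ro /usr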
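
And for tuning a filesystem to its workload, a sketch using ext4's mkfs options (device names hypothetical; the -T usage types come from /etc/mke2fs.conf):

    # Lots of small files: lower bytes-per-inode so you don't run out of inodes
    mkfs.ext4 -i 4096 /dev/sdc1

    # A few huge files (e.g. virtual disk images): far fewer inodes
    mkfs.ext4 -T largefile4 /dev/sdd1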

I don't think setting up lots of partitions is something you should do for every system. Personally, on most of my Linux servers I just set up one big partition, since most of my systems have smallish drives, are single purpose, and serve some infrastructure role (DNS, DHCP, firewall, router, etc.). On my file servers I do set up partitions to separate the data from the system.

Could it be argued that partitioning can accelerate hardware failure because of the thrashing a disk does when moving or copying data between partitions on the same disk?

I highly doubt a well-partitioned system would have any increased likelihood of failure.

Solution 2:

One reason to keep /home separate is that you can reinstall the operating system and never worry about losing user data. Beyond that, there's a lot of security to be had in mounting everything either read-only or noexec. If users can't execute code anywhere they can write it, that's one less attack vector. A sketch of those mount options is below.
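
As a minimal sketch, assuming hypothetical devices and an ext4 /home: the noexec, nodev, and nosuid mount options make the user-writable areas non-executable.

    # Users can write here, but nothing written here can be executed,
    # act as a device node, or be setuid. Device names are hypothetical.
    /dev/sdb1  /home  ext4   defaults,noexec,nodev,nosuid  0 2
    tmpfs      /tmp   tmpfs  noexec,nodev,nosuid            0 0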

I'd only bother with that on a public machine though, as running out of disk space in one partition while having plenty free in another is a serious annoyance. There are ways to work around this, like software RAID or ZFS, where you should be able to resize volumes dynamically, but I have no experience with them.
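
For the ZFS route, a sketch (pool name and devices are hypothetical): datasets share one pool, and quotas act like partitions you can resize on the fly.

    # Mirrored pool, one dataset per role, all drawing from the same free space
    zpool create tank mirror /dev/sdb /dev/sdc
    zfs create -o quota=50G tank/home

    # "Resize" later without touching any partition table
    zfs set quota=100G tank/home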

Solution 3:

  • Simplifying backup

You can make backups (via dump(8) or similar) of things you want, not things you don't. dump(8) is a better backup system than tar(1).
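
A minimal sketch of per-filesystem backups with dump and restore (paths hypothetical):

    # Full (level 0) dump of just the /home filesystem; -u records it in /etc/dumpdates
    dump -0u -f /backup/home.dump /home

    # Browse the archive and pull files back out interactively
    restore -i -f /backup/home.dump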

  • Filling up partitions

That's an argument for partitioning as well. Users filling up their home directories doesn't wreck the server: it doesn't take the web server down, keep logs from being written, or keep root from logging in.

It also allows you to more transparently move a section of your data (say, /home) onto another disk: copy it over, mount it. If you're using something that supports shadow copies / snapshots / whatever, you can even do that live.
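
A sketch of that move, assuming a hypothetical new disk /dev/sdb1:

    # Prepare the new disk and copy /home over, preserving everything
    mkfs.ext4 /dev/sdb1
    mount /dev/sdb1 /mnt/newhome
    rsync -aHAX /home/ /mnt/newhome/

    # Swap it in (add a matching /etc/fstab entry to make it permanent)
    umount /mnt/newhome
    mount /dev/sdb1 /home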

Solution 4:

I have always been taught to keep /var on a separate partition, so if you get an out-of-control log file you clog up a single partition, not the entire drive. If it's on the same partition as the rest of the system and you fill your entire disk 100%, the system can crash and make for a nasty restore. A one-line sketch is below.
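
As a minimal fstab sketch (device name hypothetical), giving /var its own partition so runaway logs stop at its boundary:

    # Runaway logs fill /var and stop there; / stays writable
    /dev/sda5  /var  ext4  defaults  0 2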