Is it bad to have a very full hard drive on a high traffic database server?

We're running an Ubuntu server with MySQL as a high-traffic production database server. Nothing else runs on the machine except the MySQL instance.

We store daily database backups on the DB server. Is there any performance hit, or any other reason, why we should keep the hard disk relatively empty? If the disk is filled to 86%+ with the database and all of the backups, does it hurt performance at all?

In other words, would the DB server perform any worse running at 86-90%+ disk capacity than it would with only a 10% full disk?

The total disk size on the server is over 1 TB, so even 10% of the disk should be enough for basic O/S swapping and such.


First of all, you DO NOT want to keep your database backups on the same physical drive or RAID group as your database. The reason is that a disk failure (if you are running without any RAID protection) or a catastrophic RAID failure (if you are using RAID-1 or RAID-5) will cause you to lose both your database and your database backups.

As for your question about disk performance: how much a full disk drive hurts depends on how the data on the disk is accessed. For spinning disks, there are two physical factors that affect I/O performance:

  • seek time - the time it takes the disk drive to move the head from its current track to the track that contains the requested data

  • rotational latency - the average time it takes for the desired data to rotate under the read head - half a revolution on average, which for a 15K RPM drive is 2 ms (milliseconds)
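For intuition, average rotational latency falls straight out of the spindle speed, since on average the desired sector is half a revolution away:

```shell
# Average rotational latency = time for half a revolution:
#   latency_ms = (60 s/min / RPM) / 2 * 1000
for rpm in 7200 10000 15000; do
    awk -v rpm="$rpm" 'BEGIN { printf "%5d RPM: %.1f ms\n", rpm, 60 / rpm / 2 * 1000 }'
done
# 7200 RPM ~ 4.2 ms, 10000 RPM ~ 3.0 ms, 15000 RPM ~ 2.0 ms
```

This is a floor on latency per random I/O that no amount of free space changes; fullness only affects the seek-time component.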

How full your drive is can affect the average seek time your server's I/Os experience. For example, if your drive is full and you have database tables physically located at extreme opposite ends of the disk's platters, then I/Os that alternate between those tables will experience the maximum seek time of the drive.

That being said, if your drive is full but your application accesses only a small fraction of the data stored on it, and all of that data is located contiguously on the drive, then those I/Os will be minimally impacted by seek time.

Unfortunately, the answer to this question is "your mileage will vary": how your application accesses the data, and where that data is located on the drive, will determine your I/O performance.

Also, as mentioned by @gravyface, it is best practice to separate your operating system storage from your database. Again, this helps minimize head movement on the disk surface: with both on the same drive, the disk may seek constantly between the operating system and database areas as both the operating system and database software make I/O requests.


There are two angles to consider here: Performance and Robustness.

Performance-wise, it is generally recommended to have separate disk spindles (or RAID groups / drive sets) for:

  1. The OS stuff (binaries, logs, home directories, etc.)
  2. Swap space (which may be combined with (1) if you don't expect to use swap)
  3. The Production DB
  4. The Production DB's transaction logs (if used)
  5. Database Dumps/backups

The reasoning behind this is pretty straightforward: you don't want DB performance impacted by "other stuff" contending for the disk (e.g. if the machine starts swapping heavily and the swap partition is on the other side of the disk from the DB data, you have long disk seeks to contend with).
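As a rough sketch of points 3-5 above, MySQL can be pointed at dedicated mounts through its config file. The mount-point paths below are made-up examples; only `datadir`, `log_bin`, and `tmpdir` are actual MySQL options:

```ini
# /etc/mysql/my.cnf (excerpt) -- mount points below are hypothetical
[mysqld]
datadir = /srv/mysql-data        # DB files on their own spindle/RAID group
log_bin = /srv/mysql-binlog/bin  # binary/transaction logs on a separate spindle
tmpdir  = /srv/mysql-tmp         # scratch space for large sorts/temp tables
```

With a layout like this, a burst of log flushes or a big filesort doesn't drag the data-file spindle's heads around.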


From a robustness standpoint you want the same sort of breakdown, but for a different reason: As others have pointed out you don't want a failed disk to take out both your DB and its backups (though realistically you should be copying the backups off the server anyway in the event of a catastrophic failure).

You also want to avoid any configuration with a monolithic / partition that contains everything -- an unfortunate, tragic, and alarmingly common mistake in the Linux world that the other Unix-like systems don't share.
As @gravyface mentioned in his comment, if you somehow manage to fill up /, your system will almost certainly crash, and cleanup/recovery can be time-consuming and costly if the system has a single / partition rather than a well-structured hierarchy of mount points.
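A mount-point hierarchy for a box like this might look something like the following `/etc/fstab` sketch (device names and mount points are illustrative, not a recommendation for your exact hardware):

```
# /etc/fstab (sketch) -- devices and mount points are illustrative
/dev/sda1  /             ext4  defaults  0 1   # OS binaries, logs, /home
/dev/sda2  none          swap  sw        0 0   # swap on the OS drive
/dev/sdb1  /srv/mysql    ext4  noatime   0 2   # production DB on its own spindle
/dev/sdc1  /srv/backups  ext4  noatime   0 2   # dumps, pruned and shipped off-site
```

A runaway backup or log can then fill its own filesystem without taking / (and the whole system) down with it.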


I'd recommend moving the database and temporary (see below) backups to a different partition than root (/).

Also, come up with a sensible rotation/retention scheme for your (presumably compressed) database dump backups. There's usually no reason to keep that many copies on the local disk: they do nothing for disaster recovery, and once a backup has been moved off-site it should be removed from the disk.
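A minimal retention prune might look like the following. The directory layout and `db-*.sql.gz` naming scheme are assumptions, and the `touch` lines just simulate an old and a fresh dump so the sketch is self-contained; in practice `BACKUP_DIR` would be your real backup mount and the dumps would come from `mysqldump` piped through `gzip`:

```shell
#!/bin/sh
# Hypothetical layout: daily dumps named db-*.sql.gz in one backup directory.
BACKUP_DIR=$(mktemp -d)   # stand-in for e.g. /srv/backups
RETAIN_DAYS=7

# Simulate one dump older than the retention window and one fresh dump.
touch -d '10 days ago' "$BACKUP_DIR/db-old.sql.gz"
touch "$BACKUP_DIR/db-new.sql.gz"

# After dumps have been shipped off-site, prune local copies past retention.
find "$BACKUP_DIR" -name 'db-*.sql.gz' -mtime +"$RETAIN_DAYS" -delete

ls "$BACKUP_DIR"   # only db-new.sql.gz remains
```

Run from cron right after the off-site copy succeeds, this keeps local disk usage bounded without sacrificing any recoverability.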

That's pretty much standard operating procedure.