Best Practice: Add an Additional Disk to Expand a Logical Volume, or Expand the Existing Disk?

We host a client’s Oracle VMs on our Nutanix platform. To date, whenever their VMs require more space they have us add an additional vDisk, which they then add to the VG in order to expand the required LV. The reason they’re doing it this way is that they don’t know how to expand a disk and its partitions inside Linux without rebooting the OS.

Of course, it is completely possible to expand a disk in Linux and grow the partition, LV and filesystem while the OS is running, and in my opinion this is the preferred method in terms of keeping things simple and linear. However, I don’t know enough about LVM on a pooled storage backend to justify this from a performance perspective.

So my question is:

How would multiple vDisks backing a single LV impact I/O performance for an Oracle DB, compared to using a single large vDisk, on a virtualisation platform where storage from multiple physical disks is pooled together?


I would suggest you avoid using partitions, because partitions a) are awkward to resize and can require a reboot, and b) can trigger a geometry change when you resize past a limit of about 500GB. Multiple disks also cause confusion about which guest device corresponds to which disk in the hypervisor.

What I've tended to do is have a SYS disk and DATA disk(s). The SYS disk I treat like a regular disk (with some partitioning), but the DATA disk (say /dev/sdb) I leave unpartitioned and just use it directly as a Physical Volume (PV).
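
For illustration, the initial setup of such a whole-disk DATA layout might look like this (a sketch only; the device /dev/sdb, the VG name DATA and the LV name srv are just the example names used below):

    # Use the whole disk as an LVM Physical Volume (no partition table)
    pvcreate /dev/sdb
    vgcreate DATA /dev/sdb

    # Create an LV, format it and mount it at /srv
    lvcreate --name srv --extents 100%FREE DATA
    mkfs.ext4 /dev/DATA/srv
    mkdir -p /srv
    mount /dev/DATA/srv /srv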

Let's say you have a directory /srv that is mounted as ext4 from /dev/mapper/DATA-srv (the LV 'srv' in the VG 'DATA').
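
Before resizing, it's worth confirming the layout, for example (assuming the example names above):

    lsblk /dev/sdb    # shows sdb with the DATA-srv LV directly on top, no partitions
    lvs DATA          # current LV sizes in the DATA volume group
    df -h /srv        # current filesystem size and usage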

If I want to add 100GB to that logical volume, I tend to do the following (with no reboot needed):

  1. Resize the underlying disk (sdb) in the hypervisor
  2. echo 1 > /sys/block/sdb/device/rescan to cause the kernel to rescan that SCSI device's capacity. (I'm not sure whether this is needed for devices such as /dev/vd*)
  3. dmesg | tail should show the kernel has picked up the capacity change.
  4. pvresize /dev/sdb will cause the PV to adjust its size automatically (it will say something like '1 physical volume(s) resized / 0 physical volume(s) not resized').
  5. vgs DATA will now show that it has some free space.
  6. Resize the LV ('srv' in my example) you want, using lvresize --extents +100%FREE --resizefs DATA/srv

(I typed all these from memory, so if I got that all correct, it's a testament to how repeatable the process is.)
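
Put together, the in-guest part of the procedure is only a handful of commands (a sketch, again assuming the example names /dev/sdb, DATA and srv):

    # After growing the vDisk in the hypervisor:
    echo 1 > /sys/block/sdb/device/rescan               # tell the kernel the disk grew
    dmesg | tail                                        # confirm the capacity change
    pvresize /dev/sdb                                   # grow the PV to fill the disk
    vgs DATA                                            # free extents should now appear
    lvresize --extents +100%FREE --resizefs DATA/srv    # grow the LV and the filesystem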

Note that I said --resizefs, which assumes the filesystem you're using is capable of online resizing (e.g. ext4 and XFS can both be grown while mounted).
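
If you'd rather grow the filesystem as a separate step (or --resizefs isn't available), the equivalent manual commands would be along these lines (again using the example names):

    lvresize --extents +100%FREE DATA/srv   # grow the LV only
    resize2fs /dev/DATA/srv                 # ext4: grow the filesystem online
    # or, for XFS, pass the mount point instead of the device:
    # xfs_growfs /srv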

I should say that this is perhaps not COMMON practice... but in my experience it was MUCH BETTER than what we used before with partitions. The problematic servicing tends to be when the SYS (partitioned) disk needs work; in this design, I told our engineers that should be taken as a sign to create a DATA disk and refactor the storage (something that does tend to need an outage to cut over). When I was specifying this practice, I was trying to streamline our operations towards an experience closer to what you see in cloud services.

Beware: 'Best Practice' in this case is overly informed by physical kit, and in terms of VMs is well due for a rethink. It's perhaps a bit too early to know what 'Best Practice' is in these new paradigms; I would settle for 'Good Consistent Local Practice' that makes it easy to meet your servicing requirements.

I should also warn you that the disk may appear empty to the likes of 'fdisk'. Use 'lsblk' to see how storage is actually laid out (and install it by default). This is one reason Oracle recommend against using ASM on unpartitioned disks; but perhaps fdisk/parted is smart enough these days to recognise an unpartitioned disk that is a physical volume (I don't know myself).
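
For example, on a whole-disk PV a quick sanity check might look like this (assuming the example names; fdisk would report no partition table on sdb):

    lsblk -f /dev/sdb   # shows LVM2_member on sdb and the DATA-srv LV mounted at /srv
    pvs /dev/sdb        # confirms sdb is a PV in the DATA volume group
    blkid /dev/sdb      # reports TYPE="LVM2_member" rather than a partition table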

With regard to performance, when we moved our Oracle Database workloads into VMs we had some discussion with our VMware admins about this. In that case they had some dedicated storage for it (with other optimisations; I can't remember exactly what, although I do recall they disabled snapshots). They did have some concern about putting a bunch of virtual SCSI devices on the same virtual bus; but I don't know to what extent that matters today.

In short, from a performance point of view, if Nutanix don't have a reference architecture for your version of Oracle DB and your version and configuration of the storage and virtual infrastructure, then you'd have to benchmark and compare. Tools such as bonnie++ may (still?) be useful. You should also care more about whether ASM will be used, or whether the database will sit on regular files in a filesystem.
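
If you do end up benchmarking the two layouts yourself, a rough comparison could be as simple as running the same bonnie++ invocation against a filesystem on a single large vDisk and against an LV spanning several vDisks (the mount points and the oracle user here are just placeholders; pick a file size of at least twice the VM's RAM so the page cache doesn't mask disk behaviour):

    # Filesystem on a single large vDisk
    bonnie++ -d /srv/single -s 16384 -n 0 -u oracle

    # Filesystem on an LV spanning multiple vDisks
    bonnie++ -d /srv/multi -s 16384 -n 0 -u oracle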