Are there any notable advantages (or disadvantages) to using EFI firmware and GPT boot disks in an ESXi environment?
My basic question is, as the title asks: are there any notable advantages (or disadvantages) to using EFI firmware and GPT boot disks in an ESXi environment? By "notable," I mean anything other than the well-known 2 TB limit for MBR disks, and the restriction that BIOS boot firmware must use MBR disks to boot from.
The specific VM option I'm asking about is the firmware setting (BIOS vs. EFI) under the VM's boot options, shown in the screenshot below.
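For reference, the same setting can also be changed outside the UI by editing the VM's .vmx file while the VM is powered off. A minimal sketch, using VMware's documented `firmware` key (a VM with no `firmware` entry defaults to BIOS):

```
firmware = "efi"
```

Switching this on an already-installed guest won't convert the OS install itself; a guest installed in BIOS/MBR mode generally won't boot after flipping the firmware to EFI, and vice versa.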
In case it makes a difference, some background and specifics on my particular environment are below, though I'm interested in the general case as well as anything that would relate specifically or only to a Windows environment.
As a result of some recent projects, where I have succeeded in dragging my corporate overlords at $[day_job] into the current decade, I'll be replacing a lot of our home office systems. These systems, as well as their replacements, are primarily Windows Server OSes virtualized on ESXi 5.5 (Update 1 now, soon to be Update 2, with VMFS5, so large-volume support). The VMs, as well as all the storage they access, are on a SAN (EMC VNX 5400), which is presented to the ESXi hosts over NFS. Everything is thin-provisioned.
For the most part, I'll simply be upgrading a bunch of large, complicated, PITA systems to newer platforms. For example, our multi-TB file servers that currently run on Server 2003 R2 and do not use DFS will be upgraded to Server 2012 R2, put into DFS namespaces, set up with DFS replication, and switched to Server 2012 Data Deduplication. Our SharePoint system, which currently runs on Server 2003 R2 and SQL Server 2005, will be upgraded to SharePoint 2013 running on Server 2012 R2, backed by SQL Server 2008 R2 or later. And so on.
In looking into the file servers, and how to deal with the amount of data on them (each one of our home office file servers has data in excess of 2 TB), I looked into, and settled on, the Data Deduplication feature in Server 2012. Since that works on a per-volume basis, it works best if all the data is one volume, instead of split across multiple volumes, like our current mess. This brought up the issue of GPT disks being best for our data volumes, and brought me to the question of EFI vs BIOS firmware. Our servers all have OS [virtual] disks of 50 GB that are separate from any data volumes, and at least at present, I'm planning on keeping it that way - being able to attach a data volume to a new VM is pretty useful.
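Because deduplication is enabled and reported per volume, the single-large-volume layout is what makes it pay off. As a sketch of what enabling it looks like on Server 2012 (real Deduplication-module cmdlets; the D: drive letter is a hypothetical data volume, not from my environment):

```
# Enable dedup on the data volume (the OS volume is not supported)
Enable-DedupVolume -Volume "D:"

# Kick off an optimization pass and check savings afterward
Start-DedupJob -Volume "D:" -Type Optimization
Get-DedupStatus -Volume "D:"
```

Note that the feature only works on NTFS data volumes, not on the boot/system volume, which is another reason the 50 GB OS disks stay separate.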
So, with that in mind, I can't envision a scenario where we'd ever need or want a VM to boot from a volume that must be GPT because it exceeds the 2 TB MBR disk limit. The fact that the environment is purely virtual does seem to negate the recoverability advantages of GPT disks, so I can't come up with any compelling reason to start building our new VMs with EFI boot firmware and/or GPT boot volumes. Of course, I also can't come up with any compelling reasons to stick with BIOS boot firmware and MBR disks, and hence, my question:
Are there any notable advantages (or disadvantages) to using EFI firmware and GPT boot disks in an ESXi environment? (By "notable," I mean anything other than the well-known 2 TB limit for MBR disks, and the restriction that BIOS boot firmware must use MBR disks to boot from.)
On the BIOS vs UEFI front, there's this: https://communities.vmware.com/thread/464854
I work on the team responsible for developing the virtual firmware, specifically the virtual EFI implementation.
We had not intended that EFI be the default. We realized that we'd made a mistake too late to correct it in time for vSphere 5.1 GA, and the consequences of the initial mistake had propagated to various other places which had now assumed that EFI was intended to be the default, such as documentation and release collateral.
The primary reason for wanting to return to BIOS by default is the lack of FT support – we did not wish to provide a default configuration that was going to be incompatible with FT. Secondary reasons exist, such as a small number of PCI Passthrough scenarios which would work on BIOS but fail on EFI, and generally broader support for BIOS in the ecosystem – such as guest OS deployment solutions, OS recovery solutions, PXE boot environments and PXE server support, and so forth.
That's all there is to it. It was a mistake which propagated in a way that we couldn't clean up in time for vSphere 5.1 GA, and it's most regrettable that it caused the confusion that it did.
My advice: If you don't need FT, won't be using PCI Passthrough (or if you can validate that your PCI Passthrough configuration works with virtual EFI), and have few or no dependencies on other BIOS-specific tools to deploy or manage your OS, you can feel free to deploy EFI Windows 2012 VMs.
One place where the EFI setting for VMs is very useful is in allowing manual P2V conversions of bare-metal systems which were installed using EFI, since EFI is not supported by VMware Converter (or wasn't, last I checked). See "How to perform a P2V conversion of a Windows Server 2008 R2 EFI system?" for background on this.