Boot Windows from SAN
I am not very familiar with storage. I have some questions about running physical and virtual machines with a SAN for storage.
Is it possible to boot the OS from SAN storage attached to a physical server? If so, are there any downsides? Say the server has multiple drives, one for the OS, SQL Server is installed, and the Data, Log and TempDB drives all come from a SAN. Would it perform better if the OS booted from a local hard drive rather than keeping it on the SAN with the SQL data files? Higher IOPS? Lower latency?
I am trying to compare the options below and quantify the differences.
Physical Machine - Boot OS from Local Drive and others from SAN
Physical Machine - Boot OS and other Drives from SAN
Virtual Machine - Virtual Disk on Local drive of Hyper-V host, SQL Data/Log/TempDB on vSAN
Virtual Machine - Virtual Disk, SQL Data/Log/TempDB all on vSAN
Would the last one perform poorly compared to the others?
Thanks
Booting from SAN is pretty rare these days. The hypervisors I tend to see are booted from SD cards. You can then have 100% of your local storage for VMs, or use the SAN exclusively for VMs. It's also possible to just PXE-boot your hypervisor if booting from an SD card is not suitable.
To address your specific questions:
Is it possible to boot the OS from SAN storage attached to a physical server?
Yes, it is. A lot of datacenter/enterprise network cards have the requisite iSCSI protocols built into them to facilitate this.
If so, are there any downsides?
Yeah, lots. Complexity and fragility are the biggest. And truth be told, there just aren't that many gains.
Say the server has multiple drives, one for the OS, SQL Server is installed, and the Data, Log and TempDB drives all come from a SAN.
Would it perform better if the OS booted from a local hard drive rather than keeping it on the SAN with the SQL data files?
It would be better to have all the data files on your local storage. For database servers I see the SAN as a dying medium: local storage is now so much faster than SAN storage (even over a 40Gbps connection). A single NVMe drive on 2019-class hardware can top out at just under 4GB/sec, which is over 30Gbps - on a single drive, for not that much money. Once PCIe 4.0 starts shipping in the next generation of servers, that potential speed effectively doubles. AMD Epyc servers have 128 PCIe 4.0 lanes, which gives you the potential for over 250GB/sec (2Tbps) per socket. Sorry, but I see SAN as a dead medium in the modern world.
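To sanity-check those numbers, here is a quick back-of-the-envelope calculation; the drive and lane figures are rough assumptions taken from the claims above, not benchmarks:

```python
# Back-of-the-envelope bandwidth comparison: SAN links vs. local NVMe.
# All figures are rough assumptions (vendor-style decimal GB), not measurements.

def gbps(bytes_per_sec):
    """Convert bytes/sec to gigabits/sec."""
    return bytes_per_sec * 8 / 1e9

nvme_gen3  = 4e9   # ~4 GB/s: a fast PCIe 3.0 x4 NVMe drive
pcie4_lane = 2e9   # ~2 GB/s usable per PCIe 4.0 lane, per direction
epyc_lanes = 128   # PCIe 4.0 lanes on an AMD Epyc socket

print(f"NVMe (PCIe 3.0 x4):   {gbps(nvme_gen3):.0f} Gbps")   # ~32 Gbps on one drive
print(f"40GbE / FCoE link:    40 Gbps, shared by everything on the host")
print(f"Epyc PCIe 4.0 budget: {gbps(pcie4_lane * epyc_lanes):.0f} Gbps "
      f"(~{pcie4_lane * epyc_lanes / 1e9:.0f} GB/s per socket)")  # ~2 Tbps
```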
Additionally, MSSQL no longer requires shared storage for a clustered system. With AlwaysOn Availability Groups you can get highly available SQL Servers with no shared storage (although the storage layout does need to look identical on each node).
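As a rough illustration of the no-shared-storage point, here is a minimal sketch that checks Availability Group replica health from Python. It assumes the pyodbc package, the "ODBC Driver 17 for SQL Server" driver, and a hypothetical server name sqlnode1; each replica keeps its own local copy of the databases, which these DMVs report on.

```python
# Minimal sketch: check AlwaysOn Availability Group replica health.
# Assumes pyodbc, "ODBC Driver 17 for SQL Server", and an instance that is part
# of an Availability Group. The server name below is hypothetical.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sqlnode1;DATABASE=master;Trusted_Connection=yes;"
)

# No shared storage: each replica holds its own copy of the databases on its own
# (local) disks, and these DMVs show per-replica role and synchronisation health.
rows = conn.execute("""
    SELECT ag.name, ar.replica_server_name, rs.role_desc, rs.synchronization_health_desc
    FROM sys.availability_groups ag
    JOIN sys.availability_replicas ar ON ar.group_id = ag.group_id
    JOIN sys.dm_hadr_availability_replica_states rs ON rs.replica_id = ar.replica_id
""").fetchall()

for ag_name, replica, role, health in rows:
    print(f"{ag_name}: {replica} ({role}) -> {health}")
```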
Would the last one perform poorly compared to the others?
They are all going to perform poorly versus local storage. However, I realise that's not the question you're actually asking, and I'm just hoping you haven't already bought a new SAN in 2019 and still have the opportunity to re-evaluate.
Let's go through these:
Physical Machine - Boot OS from Local Drive and others from SAN
There are many systems running like this right now; in particular, non-virtualised database servers often work this way. It's fine and very useful if you need the lowest latencies and every last drop of performance - CGI render farms usually work like this too.
Physical Machine - Boot OS and other Drives from SAN
I've done this once or twice over the years, and it's usually a little more 'fragile' than I'd like. Yes, it works, but it can take a lot more effort than the local-disk option, and really all you're doing is saving a little on each server by not buying a boot pair and a disk controller. Personally, I can't see myself using it in the future.
Virtual Machine - Virtual Disk on Local drive of Hyper-V host, SQL Data/Log/TempDB on vSAN
Certainly this works fine; there are a LOT of VMs running like this - most of AWS's instances work this way, so it's very popular indeed. If your hypervisor of choice doesn't let you migrate VMs from one host's local disk to another host and its disk, you might miss out on one of the benefits of virtualisation: the ability to evacuate all VMs from a host so it can be fixed/upgraded/patched without user impact. You mention vSAN here - which product are you talking about? If you mean VMware's vSAN then that changes things, as you can run disks locally, have protection from failure, and still migrate from host to host - maybe come back to clear that up?
Virtual Machine - Virtual Disk, SQL Data/Log/TempDB all on vSAN
In corporate environments this is probably the most frequently used and trusted method right now (though that might change as distributed file systems such as vSAN really take off).
As for the performance of the last option - it depends entirely on the centralised disk array(s) you use and the communication method used to access them. If you use 1Gbps Ethernet against a slow disk array then it'll definitely be slower than local disks; the array manufacturer I prefer, combined with 40Gbps FCoE, is quicker than anything I can connect directly to my server, bar NVMe drives. So it really does depend on what that centralised storage is.
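If you want to quantify the comparison from your question yourself, a crude random-read latency sampler run against a file on a local volume and one on a SAN/vSAN-backed volume will show the gap. This is only a sketch with placeholder paths; for real numbers use a purpose-built tool such as diskspd or fio, since a naive read loop like this will often hit OS caches:

```python
# Crude random-read latency sampler: point it at a file on a local volume and at one
# on a SAN/vSAN-backed volume and compare. Paths below are placeholders; for serious
# numbers use diskspd or fio (this sketch can be skewed by OS caching).
import os
import random
import statistics
import time

def sample_read_latency(path, block_size=8192, samples=1000):
    size = os.path.getsize(path)
    latencies_ms = []
    with open(path, "rb", buffering=0) as f:   # unbuffered at the Python level
        for _ in range(samples):
            offset = random.randrange(0, max(1, size - block_size))
            start = time.perf_counter()
            f.seek(offset)
            f.read(block_size)
            latencies_ms.append((time.perf_counter() - start) * 1000)
    return statistics.median(latencies_ms), max(latencies_ms)

for label, path in [("local", r"C:\temp\testfile.dat"),   # hypothetical test files
                    ("san",   r"E:\temp\testfile.dat")]:
    median_ms, worst_ms = sample_read_latency(path)
    print(f"{label}: median {median_ms:.2f} ms, worst {worst_ms:.2f} ms")
```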
I hope this is of some help clarifying your options.
I'm not sure how 'rare' this practice is; I've seen it a lot, but I'd guess it depends on the environment.
Where I typically see boot from SAN is when blade servers are in use. Since the typical blade server has only two slots for disks, this can be pretty limiting for something like a database server where multiple terabytes of space might be required. Additionally, it's fast and easy to replace a failing blade by simply swapping the blade, without having to worry about whoever does it moving the disks and getting them into the wrong slots. This still requires the HBA in the new blade to be configured as discussed below.
Booting either Windows or Linux from SAN is simple: work with the storage administrator and supply them the WWNs from the HBA (usually two HBA interfaces for redundant paths). Once the storage folks have provisioned a boot LUN, boot the server and enter the HBA BIOS during POST. Scan for the available LUN and tell the HBA to boot from it. Make sure the HBA is the first device in the boot order, and your Windows installer should see it and offer to install to that LUN. It may be easiest to configure only the boot LUN initially, then add any additional data LUNs after the OS is installed.
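If it helps, the WWNs you hand to the storage team can also be read from a running OS rather than the HBA BIOS. Here is a minimal sketch assuming a Windows Server host where the Storage PowerShell module's Get-InitiatorPort cmdlet is available:

```python
# Minimal sketch: list HBA/initiator port addresses (WWNs for FC, IQNs for iSCSI)
# to hand to the storage team. Assumes a Windows Server host where the Storage
# PowerShell module's Get-InitiatorPort cmdlet is available.
import json
import subprocess

result = subprocess.run(
    ["powershell", "-NoProfile", "-Command",
     "Get-InitiatorPort | Select-Object NodeAddress, PortAddress | ConvertTo-Json"],
    capture_output=True, text=True, check=True,
)

ports = json.loads(result.stdout)
if isinstance(ports, dict):   # ConvertTo-Json returns a bare object when there is one port
    ports = [ports]

for port in ports:
    print(f"node {port['NodeAddress']}  port {port['PortAddress']}")
```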