Is it possible to deploy a highly available file service with two file servers using a shared virtual hard disk and DFS on Windows Server 2019?
In my infrastructure, I have two servers with Windows Server 2019 and Hyper-V installed. A SAN is directly connected to both servers via FC. The SAN provides three volumes to both servers: a volume for the quorum, a volume for VMs and a volume for data.
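For context, a minimal sketch of how the two hosts could be clustered so the SAN volumes become usable by both (the cluster name HVCLUSTER, the host names Host1/Host2, and which cluster disk maps to which volume are all assumptions):

```powershell
# Cluster the two Hyper-V hosts (all names here are assumptions).
New-Cluster -Name 'HVCLUSTER' -Node 'Host1', 'Host2'

# Use the small quorum volume as the disk witness of the two-node cluster.
Set-ClusterQuorum -DiskWitness 'Cluster Disk 1'

# Turn the VM volume into a Cluster Shared Volume so both hosts can
# run VMs from it at the same time.
Add-ClusterSharedVolume -Name 'Cluster Disk 2'
```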
I plan to deploy a file service that is as highly available as my given infrastructure allows. As I have two nodes, I want to deploy two file servers. This way, I can tolerate the failure of one whole server (host) or the failure of one virtual file server. With just one virtual file server (with HA enabled), I could only tolerate the failure of one host, but not a failure of the VM itself.
I plan to use the data volume of my SAN to deploy a shared virtual hard disk that both virtual file servers will use to provide the file shares.
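A minimal sketch of what I mean, assuming the data volume has also been added as a Cluster Shared Volume (here C:\ClusterStorage\Volume2) and the VMs are named FileSrv1 and FileSrv2; on Server 2016/2019 a shared virtual hard disk is created as a VHD Set (.vhds):

```powershell
# Create a VHD Set on the data volume (path and size are assumptions).
New-VHD -Path 'C:\ClusterStorage\Volume2\Data.vhds' -SizeBytes 1TB -Dynamic

# Attach the same VHD Set to both file server VMs with persistent
# reservations enabled, so the guest cluster can arbitrate disk ownership.
Add-VMHardDiskDrive -ComputerName 'Host1' -VMName 'FileSrv1' `
    -Path 'C:\ClusterStorage\Volume2\Data.vhds' -SupportPersistentReservations
Add-VMHardDiskDrive -ComputerName 'Host2' -VMName 'FileSrv2' `
    -Path 'C:\ClusterStorage\Volume2\Data.vhds' -SupportPersistentReservations
```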
Furthermore, users should not have to care which file server they access in order to reach their files. \\FileSrv1\Data\README.md should be the same as \\FileSrv2\Data\README.md, but users should be able to access it as \\FS\Data\README.md. As far as I know, this is a typical use case for DFS. However, I don't want two file servers that replicate their data, since I have shared storage.
So my question is: can I use both a shared storage for the virtual file servers and DFS to abstract the file access in my scenario?
It turned out that I do not need DFS in my scenario at all. The guest failover cluster of the two file servers already provides an abstraction layer for accessing the files, so I simply named the clustered role fs. In conclusion, files can always be accessed via \\FS\..., no matter which of the file servers is active.
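For completeness, a minimal sketch of the relevant steps inside the guest cluster (run from one of the file server VMs; the cluster name, cluster disk, share path, and access group are assumptions):

```powershell
# Build the guest cluster from the two file server VMs.
New-Cluster -Name 'FSCLUSTER' -Node 'FileSrv1', 'FileSrv2'

# Create the clustered file server role. Its network name, FS, is what
# clients use, regardless of which node currently owns the role.
Add-ClusterFileServerRole -Name 'FS' -Storage 'Cluster Disk 1'

# Publish the share scoped to the FS network name (path and group assumed).
New-SmbShare -Name 'Data' -Path 'E:\Data' -ScopeName 'FS' `
    -FullAccess 'CONTOSO\FileUsers'
```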