Is dedicating one NIC to a virtualized DC overkill?
Host Server:
- Dual Xeon E5530's
- 24 GB Ram
- 4x 1Gb NICs (operating at 100Mb because of a 100Mb managed switch)
- Server 2008 R2 Enterprise with Hyper-V Role + BackupExec 2012 backing up to NAS connected via iSCSI
VMs:
- Domain Controller (also DNS server)
- File Server
- Database Server
- Two App Servers
Current NIC Setup:
- 1 Physical NIC dedicated to the host OS so that BackupExec can do its thing without choking out the VMs.
- 1 Physical NIC shared between the app servers. (NIC usage has peaks and valleys throughout the day.)
- 1 Physical NIC shared between the SQL Server and File Server (the two biggest bandwidth hogs).
- 1 Physical NIC dedicated to the DC. (A rough PowerShell sketch of this layout is just below.)
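For reference, here is a minimal sketch of that NIC-to-VM mapping as it could be scripted on a newer host. The Hyper-V PowerShell module only shipped with Server 2012, so on the 2008 R2 host above the same mapping is done in Hyper-V Manager's Virtual Network Manager; the adapter, switch, and VM names are placeholders.

```powershell
# Sketch only: requires the Hyper-V PowerShell module (Server 2012+).
# On Server 2008 R2 this mapping is done through Hyper-V Manager instead.
# Adapter, switch, and VM names are placeholders.

# One external switch per VM-facing physical NIC, matching the list above.
# -AllowManagementOS $false keeps the host off these NICs; the host keeps
# its own dedicated NIC for BackupExec traffic.
New-VMSwitch -Name "AppServers" -NetAdapterName "NIC2" -AllowManagementOS $false
New-VMSwitch -Name "SQL-File"   -NetAdapterName "NIC3" -AllowManagementOS $false
New-VMSwitch -Name "DC"         -NetAdapterName "NIC4" -AllowManagementOS $false

# Attach each guest's virtual NIC to the matching switch.
Connect-VMNetworkAdapter -VMName "App1","App2"       -SwitchName "AppServers"
Connect-VMNetworkAdapter -VMName "SQL","FileServer"  -SwitchName "SQL-File"
Connect-VMNetworkAdapter -VMName "DC01"              -SwitchName "DC"
```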
Questions:
- Is dedicating one physical NIC to the DC/DNS overkill? I have about 20 users.
- Any tips about setting this whole thing up better?
- Is there any way to prioritize the different VMs sharing a NIC?
- I'm going to stack a 1Gb switch on top of the 100Mb one. The three physical servers, the NASes, and that kind of thing will connect to the 1Gb switch; users will all stay plugged into the 100Mb switch. With the increased bandwidth, am I safe putting more VMs on one physical NIC, or are there other factors to consider?
Thanks!
Solution 1:
- Yes
- It's pretty common to bond all the VM NICs and share them between the machines. It's also common to have two management NIC interfaces that are bonded as well, but not used for any VM traffic. This leads to most VM servers having at least 6 NIC interfaces, and 8 to 10 is not uncommon. (A rough teaming sketch follows this answer.)
- Yes, but not with what you've got now, and you're probably not interested in the solutions that do exist.
- Without knowing the actual usage that's common on your servers, I can't say with any certainty. But that sounds reasonable, especially given the above. Do note that if your 100Mb switch can take 1Gb uplinks of any kind, I would highly recommend getting a couple. If not, bond at least a few 100Mb uplinks from that switch to the 1Gb switch.
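As a rough illustration of the bonding approach: the built-in Windows NIC teaming (LBFO) cmdlets only exist on Server 2012 and later, so on the asker's 2008 R2 host teaming would come from the NIC vendor's utility (Intel PROSet, Broadcom BACS, etc.). Team, adapter, and switch names below are placeholders.

```powershell
# Sketch only: New-NetLbfoTeam exists on Server 2012+; on 2008 R2 use the
# NIC vendor's teaming tool instead. Names are placeholders.

# Team the VM-facing NICs; HyperVPort spreads guests across team members.
New-NetLbfoTeam -Name "VMTeam" -TeamMembers "NIC2","NIC3","NIC4" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort

# One external virtual switch on top of the team; all guests share it.
New-VMSwitch -Name "GuestSwitch" -NetAdapterName "VMTeam" -AllowManagementOS $false
```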
Solution 2:
- Yes. Think about how much traffic is generated between 20 users and a domain controller. It's not much.
- I know more about VMware and less about Hyper-V, but the VMware way to do this is to bond/trunk all of those interfaces for guest traffic, and then VLAN the guest NICs as necessary.
- Are you actually facing any problems? From your description, it just sounds like you're prematurely optimizing. That said, if your switches support it, you could do QoS there to prioritize, say, SQL traffic (see the sketch after this answer).
- Without knowing what your traffic is like, I'm going to say "Probably almost definitely yes."
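To make the prioritization and VLAN ideas concrete: Hyper-V's own bandwidth weighting and per-vNIC VLAN tagging arrived with Server 2012, so neither is available on the 2008 R2 host in the question. This is roughly what it could look like after an upgrade; the VM names, weights, and VLAN IDs are illustrative only.

```powershell
# Sketch only: bandwidth weighting and these VLAN cmdlets are Server 2012+
# features, not available on 2008 R2. Names, weights, and VLAN IDs are
# illustrative.

# A switch created in "Weight" mode lets guests share one NIC with
# relative priorities instead of hard caps.
New-VMSwitch -Name "GuestSwitch" -NetAdapterName "NIC2" `
    -MinimumBandwidthMode Weight -AllowManagementOS $false

# Give SQL the biggest share of the pipe when there's contention.
Set-VMNetworkAdapter -VMName "SQL"        -MinimumBandwidthWeight 50
Set-VMNetworkAdapter -VMName "FileServer" -MinimumBandwidthWeight 30
Set-VMNetworkAdapter -VMName "App1"       -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -VMName "App2"       -MinimumBandwidthWeight 10

# Tag guest traffic per VLAN, the rough Hyper-V equivalent of the VMware
# trunk-and-VLAN approach described above.
Set-VMNetworkAdapterVlan -VMName "SQL"  -Access -VlanId 20
Set-VMNetworkAdapterVlan -VMName "DC01" -Access -VlanId 10
```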