VMware NAS/iSCSI recommendations - smallish organization

I have two VMware servers (one ESX, one ESXi) and two backup NAS boxes. The current NAS boxes are low-cost, support NFS only, are slow, and are unsuitable for running VMs from.

My plan is to have a dedicated iSCSI/NAS box for storing and running VMs, plus two additional low-cost boxes for backup.

I'm looking for advice regarding 2 things really:

  1. Recommendations on VMware architecture/design for a smaller organization: fewer than 20 virtual machines, 2 servers, and 2 x 1.5 TB backup NAS boxes.
  2. A good NAS/iSCSI box, with your recommendation on RAID config ...I'd go with RAID 6 or better.

I'm trying to design an installation that is both fast and reliable/redundant. If you have any experiences to share, or your current configuration including network design (switches, fiber, etc.), I will be enormously thankful. I'm not married to this idea, so if you have a design that doesn't use iSCSI NAS boxes ...let 'er rip. Cost? Can we stay around $5,000 (on top of the already stated components)?

Links to info are welcome also.

Thanks for reading!

Bubnoff

* UPDATE *

Thanks to all who responded. I began this post looking more at the NAS/SAN issue, but I'm beginning to think that my main issue at this point is properly setting up our network for virtualization ...with all the equipment that entails. The slow NFS performance we saw during testing is likely due to issues on our network rather than with the protocol or the devices. We have a network consultant coming in this year, and I now have more ammo to work with in getting it right.

Any other network examples, gotchas or advice is welcome. Thanks again.


Solution 1:

Whether you go with NFS or iSCSI, you should budget for dedicated networking equipment for storage. Don't run storage traffic on the same network your servers and PCs normally use.

Buy a couple of 1 Gb switches (10 Gb if you can afford it). Make sure you have two NICs per VMware host just for storage, and get storage hardware with dual NICs as well. Connect both your storage and your hosts to both switches. If anything else needs to talk to your storage, add NICs for that purpose. That way storage traffic is never contending with anything else for bandwidth.
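On the ESX classic host (the one with a service console), that layout can be set up from the command line; here's a minimal sketch, assuming ESX 4.x-era esxcfg-* tools, with example vSwitch, NIC, and IP names:

```
# Create a vSwitch dedicated to storage traffic
esxcfg-vswitch -a vSwitch1

# Attach two physical NICs for redundancy (vmnic2/vmnic3 are examples)
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -L vmnic3 vSwitch1

# Add a VMkernel port group and give it an address on the storage subnet
esxcfg-vswitch -A "Storage" vSwitch1
esxcfg-vmknic -a -i 192.168.50.11 -n 255.255.255.0 "Storage"

# Enable the software iSCSI initiator if you go the iSCSI route
esxcfg-swiscsi -e
```

You can build the same thing in the vSphere Client for the ESXi host; the point is simply that the VMkernel storage port group gets its own uplinks and its own subnet, separate from VM and management traffic.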

RAID 5 will be fine, and you should be able to find all of the above for less than $5k (not including your VMware licensing).

Solution 2:

Don't ignore NFS.

ESX can use NFS just as easily as it can FC or iSCSI, and NFS is a lot easier to live with, depending on the rest of your infrastructure.

And, if you go with NFS, you can just get a Dell/HP box with lots of storage, or a simple FC shelf and a pair of Dell/HP boxes.
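For what it's worth, attaching an NFS export to ESX is a one-liner from the service console; a quick sketch, with a hypothetical server name and export path:

```
# Mount an NFS export as a datastore (host, path, and label are examples)
esxcfg-nas -a -o nas01.example.local -s /export/vmstore VM_NFS

# Verify it mounted
esxcfg-nas -l
```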

Solution 3:

Your budget constraints make me think you should go the route of building a system yourself.

I've been a big proponent of Solaris ZFS-based solutions for NAS and/or iSCSI backends for VMware. With some of the fuzziness surrounding Oracle's acquisition of Sun, I've started using NexentaStor in client deployments. The platform is attractive because of inline compression, deduplication and the ability to present iSCSI storage as well as NFS. See the following article for ZFS platform information:

http://www.anandtech.com/show/3963/zfs-building-testing-and-benchmarking

For the most recent installations, I've been using HP ProLiant DL180 G6 storage nodes and outfitting them with 24-48 GB of RAM, LSI 9211 SAS controllers to replace the onboard Smart Array RAID controllers, and a mix of solid-state (cache), 15k RPM and low-speed 7.2k RPM SAS disks, depending on the application/environment. Add some additional NICs (2- or 4-port GigE) and it's a good setup that is probably a step up from using a low-end appliance or raw Linux NFS.

Nexenta works well with the hardware (drive LEDs, HP agents, etc.). Using this solution, I'm at $5,000-$8,000 per storage node, depending on drive type. You wouldn't need something this involved, but if you do go with a ZFS-based solution, ballpark system requirements for your arrangement would be six or more data drives in RAID 1+0 or RAID 5+0 (avoid RAID 6), 8+ GB of RAM, and multiple dedicated NICs for your storage network (on both the ESX and storage-node sides).
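For reference, that pool layout maps to only a handful of ZFS commands; a rough sketch, with illustrative pool and device names (on NexentaStor you would normally do this through the web GUI instead):

```
# Six data drives as three mirrored pairs (the RAID 1+0 equivalent)
zpool create vmpool mirror c0t1d0 c0t2d0 mirror c0t3d0 c0t4d0 mirror c0t5d0 c0t6d0

# Add a solid-state drive as L2ARC read cache
zpool add vmpool cache c0t7d0

# Enable inline compression pool-wide
zfs set compression=on vmpool

# A filesystem shared over NFS for ESX datastores
zfs create vmpool/nfs
zfs set sharenfs=on vmpool/nfs

# Or carve out a zvol to present over iSCSI (via COMSTAR or the appliance UI)
zfs create -V 750G vmpool/iscsi-lun0
```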

A commercial setup from PogoLinux may also work. I went the route of building my own because I prefer HP hardware, but there are canned ZFS solutions WITH SUPPORT available here:

http://www.pogolinux.com/products/storage_director

If this is too involved, your next option is something like an HP MSA P2000 SAN, perhaps one of the SAS-attached models like the 2312sa. It's a step up in price, though; figure ~$13k+ US for what you're looking for.

Solution 4:

If you don't want to build yourself, check out the SnapServer from Overland. Lots of value for the dollar in these boxes from a company that is quite reputable in the storage arena.

http://www.overlandstorage.com/products/network-attached-storage/index.aspx#top

The N2000 starts at $5k.