What is a storage area network (SAN), and what are some of the ways it can be used? [closed]
I need some good examples, and links to companies you are already using to buy SAN hardware, SAN data center equipment, etc.
A subtle additional point: the term SAN refers to the entire network used for storage access, usually block-based access.
A SAN includes the physical interconnect (usually Fibre Channel, though iSCSI over Ethernet is another option, along with some exotic SCSI-based implementations), the switches that allow more than just point-to-point communication (usually FC switches, though in principle iSCSI's Ethernet switches play much the same role), the adapters that let devices connect (FC cards, iSCSI drivers), and finally the actual storage devices.
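As a concrete illustration of the iSCSI-over-Ethernet option, here is a minimal Python sketch that simply checks whether an iSCSI target portal is reachable over plain TCP. The target address is invented, and the actual iSCSI login is handled by the initiator driver (e.g. open-iscsi), not by application code like this:

```python
import socket

# Hypothetical iSCSI target portal; 3260 is the standard iSCSI TCP port.
TARGET = ("192.168.1.50", 3260)

# A bare TCP connection is enough to confirm that the Ethernet/IP path
# to the storage device is up; everything past this point (login, SCSI
# commands over TCP) is the initiator driver's job.
with socket.create_connection(TARGET, timeout=5) as s:
    print(f"iSCSI portal {TARGET[0]}:{TARGET[1]} is reachable")
```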
HP makes a nice 'cheap' FC SAN that is really a bunch of SCSI disks in a RAID enclosure (15 disks in a 3U or 4U rack mount) that has FC connections out the back.
The Apple Xserve RAID uses Parallel ATA (and, I think, Serial ATA in the newest ones) with an FC port out the back.
One of the problems with SANs is that they were designed more for bank-style applications, where you need a fully redundant mesh of connections with no single point of failure. This means each connecting host (your server) has two separate FC controllers, each of which has two FC connections. Each server has two connections to each FC switch, so that no single card, switch port, or cable failure affects the data flow.
Then each storage device has two independent storage processors (i.e., RAID controllers with an FC port or two out the back), each with two or more connections to each FC switch. FC disk drives usually have two actual FC ports (dual-ported drives), though these usually run through a single hot-plug connector, again following the philosophy of no single point of failure.
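To see why this dual-everything layout removes single points of failure, here is a small Python sketch that enumerates the host-to-storage paths; the component names are made up for illustration:

```python
from itertools import product

# Hypothetical dual-redundant fabric: 2 HBAs in the host, 2 FC switches,
# 2 storage processors in the array, with every tier cross-connected.
hbas = ["hba0", "hba1"]
switches = ["switch_a", "switch_b"]
storage_processors = ["sp_a", "sp_b"]

paths = list(product(hbas, switches, storage_processors))
print(f"{len(paths)} distinct paths")  # 2 * 2 * 2 = 8

# Failing any single component still leaves live paths.
for failed in hbas + switches + storage_processors:
    surviving = [p for p in paths if failed not in p]
    print(f"{failed} down: {len(surviving)} paths remain")  # always 4
```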
FC switches used to be prohibitively expensive, at close to a thousand dollars a port. Prices have dropped considerably of late, but a few years back the sticker shock was lethal!
The storage processors usually have lots of CPU and lots of RAM and are designed to do nothing but run the disks. They typically outperform any single-server disk farm by quite some margin.
On top of the physical hardware being crazy expensive (again, in the past; things have gotten better lately), there are usually licenses for the software running on the storage processors that must be purchased. For example, if you want to mirror two SANs, you need to buy the mirroring software from the vendor, and it can be quite expensive, even today.
As answered above, a SAN is basically a backbone network for presenting storage to servers. Generally it is a fiber-optic network connecting storage arrays and tape drives to servers; on the server side, the storage appears local.
In my opinion the best SAN equipment is made by Brocade; even the SAN equipment sold by IBM is just re-branded Brocade gear.
As for companies to buy these from, you will have to find a local vendor you are comfortable working with.
Keep in mind that setting up a SAN involves much more than just buying SAN switches and cables. Your servers will need to be equipped with HBAs (host bus adapters), and of course you will need some sort of SAN-capable storage to attach it all to.
From the Wikipedia article:
A storage area network (SAN) is an architecture to attach remote computer storage devices (such as disk arrays, tape libraries, and optical jukeboxes) to servers in such a way that the devices appear as locally attached to the operating system.
SANs are useful whenever you need to separate your storage from the servers that use it. This might be required for redundancy (multiple servers accessing the same storage) or simply for management flexibility (freely assigning and reassigning parts of the storage to hosts).
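As a toy model of that management flexibility, the sketch below mimics the LUN-to-host assignment table (LUN masking) a SAN administrator edits to re-home storage without recabling; the names and data layout are invented, since real arrays keep this in their own management software:

```python
# Hypothetical LUN masking table: which hosts are allowed to see which LUNs.
lun_map = {
    "lun0": {"web01", "web02"},  # shared volume for a two-node cluster
    "lun1": {"db01"},            # dedicated database volume
}

def reassign(lun: str, old_host: str, new_host: str) -> None:
    """Re-home a LUN from one server to another; no cables move."""
    lun_map[lun].discard(old_host)
    lun_map[lun].add(new_host)

reassign("lun1", "db01", "db02")
print(lun_map["lun1"])  # {'db02'} -- the storage followed the workload
```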
There are SANs and there are NASes. Generally SANs use Fibre Channel, Fibre Channel over Ethernet (FCoE), Fibre Channel over IP (FCIP), or InfiniBand, whereas NASes usually use Ethernet and IP with file-sharing protocols such as CIFS/SMB, NFS, or AFP. (iSCSI also runs over Ethernet and IP, but it presents block devices, so it sits on the SAN side of the divide.) Most SAN protocols provide block-level access; most NAS protocols provide file-level access. In very general terms, SANs are quicker, more reliable, and considerably more expensive than NASes.
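From the host's point of view, the block-level vs. file-level split looks like this in Python; both paths are invented for illustration (a SAN LUN that shows up as a raw device node, and a NAS share mounted into the filesystem), and the block read needs root:

```python
import os

# Block-level (SAN): the host sees a raw disk and addresses byte offsets;
# it supplies its own filesystem (or hands the device to a database, VM, ...).
fd = os.open("/dev/sdc", os.O_RDONLY)  # hypothetical SAN LUN
first_sector = os.pread(fd, 512, 0)    # read 512 bytes at offset 0
os.close(fd)

# File-level (NAS): the filer owns the filesystem; the host just names files.
with open("/mnt/nas_share/report.txt") as f:  # hypothetical NFS/SMB mount
    contents = f.read()
```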
There are many SAN vendors, including EMC, IBM, and HP.
One of the major uses for our SANs is storing VHD files (virtual hard disks) for our virtual machines. Our Hyper-V hosts run all of their VMs from SAN storage, which gives us great performance.
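A small example of how that plays out operationally: a provisioning script might check free space on the SAN-backed volume before creating another fixed-size VHD. A minimal sketch, with the mount point and size as assumptions:

```python
import shutil

# Hypothetical mount point of the SAN-backed volume that holds the VHDs.
VHD_VOLUME = r"C:\ClusterStorage\Volume1"
NEW_VHD_GB = 40

free_gb = shutil.disk_usage(VHD_VOLUME).free / 1024**3
if free_gb < NEW_VHD_GB * 1.1:  # keep ~10% headroom on the volume
    raise SystemExit(f"only {free_gb:.1f} GB free; too little for a {NEW_VHD_GB} GB VHD")
print(f"{free_gb:.1f} GB free on {VHD_VOLUME}; OK to provision")
```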