What is a Storage Area Network, and what benefits does it have over other storage solutions?
First of all, for a (broad) comparison of DAS, NAS and SAN storage see here.
There are some common misconceptions about the term "SAN", which stands for "Storage Area Network" and as such, strictly speaking, refers only to the communication infrastructure connecting storage devices (disk arrays, tape libraries, etc.) and storage users (servers). However, in common practice the term "SAN" is used to refer to two things:
- A complete storage infrastructure, including all the hardware and software involved in providing shared access to central storage devices from multiple servers. This usage, although not strictly correct, is commonly accepted, and it is what most people refer to when talking about a "SAN". The rest of this answer will focus on it, thus describing every component of an enterprise-level storage infrastructure.
- A single storage array (see later); as in, "we have a Brand X SAN with 20 TB of storage". This usage is fundamentally incorrect, because it ignores the real meaning of "SAN" and just assumes it's some form of storage device.
A SAN can be composed of very different hardware, but can usually be broken down into the following components:
- Storage Arrays: this is where data is actually stored (and what is quite often erroneously called a "SAN"). They are composed of:
  - Physical Disks: these, of course, hold the data. Enterprise-level disks are used, which usually means lower per-disk capacity but much higher performance and reliability; they are also a lot more expensive than consumer-class disks. The disks can use a wide range of connections and protocols (SATA, SAS, FC, etc.) and different storage media (solid-state disks are becoming increasingly common), depending on the specific SAN implementation.
  - Disk Enclosures: this is where the disks are placed; the enclosures provide power and data connections to them.
  - Storage Controllers/Processors: these manage disk I/O, RAID and caching (the term "controller" or "processor" varies between SAN vendors). Again, enterprise-level controllers are used, so they have much better performance and reliability than consumer-class hardware. They can be, and usually are, configured in pairs for redundancy.
  - Storage Pools: a storage pool is a block of storage space, comprising some (often many) disks in a RAID configuration. It is called a "pool" because sections of it can be allocated, resized and de-allocated on demand, creating LUNs.
  - Logical Unit Numbers (LUNs): a LUN is a chunk of space drawn from a storage pool, which is then made available ("presented") to one or more servers. A LUN is seen by the servers as a storage volume, and can be formatted by them using any file system they prefer (a minimal sketch of the pool/LUN model follows this component list).
  - Tape Libraries: they can be connected to a SAN and use the same communications technology both for connecting to servers and for direct storage-to-tape backups.
- Communications Network (the "SAN" proper): this is what allows the storage users (servers) to access the storage devices (storage arrays, tape libraries, etc.); it is, strictly speaking, the real meaning of the term "Storage Area Network", and the only part of a storage infrastructure that should be defined as such. There are lots of solutions for connecting servers to shared storage devices, but the most common ones are:
  - Fibre Channel: a technology which uses fiber optics for high-speed connections to shared storage. It includes host bus adapters, fiber-optic cables and FC switches, and can achieve transfer speeds ranging from 1 Gbit/s to 20 Gbit/s. Also, multipath I/O can be used to group several physical links together, allowing for higher bandwidth and fault tolerance.
  - iSCSI: an implementation of the SCSI protocol over IP transport. It runs over standard Ethernet hardware, which means it can achieve transfer speeds from 100 Mbit/s (generally not used for SANs) to 100 Gbit/s. Multipath I/O can also be used (although the underlying networking layer introduces some additional complexities).
  - Fibre Channel over Ethernet (FCoE): a technology in between full FC and iSCSI, which uses Ethernet as the physical layer but FC as the transport protocol, thus avoiding the need for an IP layer in the middle.
  - InfiniBand: a very high-performance connectivity technology, less widely used and quite expensive, but capable of some impressive bandwidth.
  - Host Bus Adapters (HBAs): the adapter cards used by the servers to access the connectivity layer; they can be dedicated adapters (as in FC SANs) or standard Ethernet cards. There are also iSCSI HBAs, which have a standard Ethernet connection but can handle the iSCSI protocol in hardware, thus relieving the server of some additional load.
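To make the pool/LUN relationship above concrete, here is a minimal Python sketch of the allocation model; the class and method names are invented for illustration (real arrays expose this through vendor-specific management tools), but the bookkeeping is the same idea:

```python
# Minimal sketch of the storage pool / LUN model described above.
# Class and method names are hypothetical; real arrays expose this
# functionality through vendor-specific management interfaces.

class StoragePool:
    """A block of raw capacity (backed by disks in RAID) that LUNs are carved from."""

    def __init__(self, capacity_gb: int):
        self.capacity_gb = capacity_gb
        self.luns: dict[str, int] = {}  # LUN name -> size in GB

    @property
    def free_gb(self) -> int:
        return self.capacity_gb - sum(self.luns.values())

    def create_lun(self, name: str, size_gb: int) -> None:
        # Allocation is just bookkeeping: no physical disks are moved.
        if size_gb > self.free_gb:
            raise ValueError(f"only {self.free_gb} GB free in pool")
        self.luns[name] = size_gb

    def resize_lun(self, name: str, new_size_gb: int) -> None:
        delta = new_size_gb - self.luns[name]
        if delta > self.free_gb:
            raise ValueError("not enough free space to grow LUN")
        self.luns[name] = new_size_gb

    def destroy_lun(self, name: str) -> None:
        # Freed space returns to the pool and can be reallocated at once.
        del self.luns[name]


pool = StoragePool(capacity_gb=10_000)     # e.g. many disks in RAID 6
pool.create_lun("sql-server-data", 2_000)  # presented to a DB server
pool.resize_lun("sql-server-data", 3_000)  # grown on demand, no new disks
print(pool.free_gb)                        # 7000
```

The key point is that, from the array's point of view, a LUN is little more than an allocation record; the server it is presented to sees it as a plain disk.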
A SAN provides many additional capabilities over direct-attached (or physically shared) storage:
- Fault tolerance: high availability is built into any enterprise-level SAN and is handled at all levels, from power supplies in storage arrays to server connections. Disks are more reliable, RAID is used to withstand single-disk (or multiple-disk) failures, redundant controllers are employed, and multipath I/O allows for uninterrupted storage access even in the case of a link failure (a toy illustration of multipath failover follows this list).
- Greater storage capacity: SANs can contain many large storage devices, allowing for much more storage space than a single server could achieve.
- Dynamic storage management: storage volumes (LUNs) can be created, resized and destroyed on demand (as in the pool sketch above), and moved from one server to another; allocating additional storage to a server requires only a configuration change, as opposed to buying disks and installing them.
- Performance: a properly-configured SAN, using recent (although expensive) technologies, can achieve really impressive performance, and is designed from the ground up to handle heavy concurrent load from multiple servers.
- Storage-level replication: two (or more) storage arrays can be configured for synchronous replication, allowing for the complete redirection of server I/O from one to another in fault or disaster scenarios.
- Storage-level snapshots: most storage arrays allow for taking snapshots of single volumes and/or whole storage pools. Those snapshots can then be restored if needed.
- Storage-level backups: most SANs also allow for performing backups directly from storage arrays to SAN-connected tape libraries, completely bypassing the servers which actually use the data; various techniques are employed to ensure data integrity and consistency.
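As a toy illustration of the multipath I/O mentioned under fault tolerance, here is a Python sketch of the failover idea; the path names and per-path read functions are made up, and a real multipath driver works at the block-device layer, but the logic is the same:

```python
# Toy illustration of multipath I/O failover: several independent paths
# (HBA ports, switches, controller ports) lead to the same LUN, and I/O
# is retried on the next path when one fails. Path names and the
# per-path I/O callables are invented for the example.

class PathFailed(Exception):
    pass

def multipath_read(paths, block: int) -> bytes:
    """Try each path in turn until one succeeds (failover policy)."""
    errors = []
    for name, read_fn in paths:
        try:
            return read_fn(block)
        except PathFailed as exc:
            errors.append(f"{name}: {exc}")  # a real driver would also log this
    raise IOError("all paths down: " + "; ".join(errors))

# Two fabric paths to the same LUN; the first one is broken.
def path_a(block):
    raise PathFailed("link down on HBA 0")

def path_b(block):
    return b"\x00" * 512  # pretend we read a 512-byte block

data = multipath_read([("fabric-A", path_a), ("fabric-B", path_b)], block=42)
assert len(data) == 512  # I/O succeeded despite the failed path
```

A real driver can also spread I/O round-robin across the healthy paths, which is where the bandwidth-aggregation benefit mentioned earlier comes from.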
Based on everything above, the benefits of using SANs are obvious; but what about the costs of buying one, and the complexity of managing one?
SANs are enterprise-grade hardware (although there can be a business case for small SANs even in small/medium companies); they are of course highly customizable, so they can range from "a couple of TBs with 1 Gbit/s iSCSI and somewhat high reliability" to "several hundred TBs with amazing speed, performance and reliability and full synchronous replication to a DR data center". Costs vary accordingly, but are generally higher (both in total cost and in cost per gigabyte of space) than other solutions. There is no pricing standard, but it's not uncommon for even small SANs to have price tags in the tens-of-thousands (or even hundreds-of-thousands) of dollars range.
Designing and implementing a SAN (even more so a high-end one) requires specific skills, and this kind of job is usually done by highly specialized people. Day-to-day operations, such as managing LUNs, are considerably easier, but in many companies storage management is nonetheless handled by a dedicated person or team.
Regardless of the above considerations, SANs are the storage solution of choice where high capacity, reliability and performance are required.
Do you need one? It depends. The £ or $ cost per TB is considerably higher than DAS. Plus, the performance of DAS does, I'm afraid, out-perform FC-AL and iSCSI SANs (well, at least in my testing with Oracle and SQL Server DBs). But with DAS you don't get the benefits of being able to share storage (good for clustering and VMware).
A number of storage vendors are migrating away from Fibre Channel for the host-to-storage-controller connections, in favour of iSCSI, which runs on top of Ethernet. It's the old Token Ring vs Ethernet saga all over again; with so much industry-wide research and investment in Ethernet, FC just can't keep up. A 10 Gbit/s Ethernet switch is far cheaper than an 8 Gbit/s FC one, and it can be VLANed or otherwise segmented to carry both storage and non-storage traffic.
However, there are some big benefits of SANs:
- SAN snapshots (a point-in-time recovery point for a server or collection of servers; see the copy-on-write sketch after this list)
- On-site and off-site block-level replication (without involving the host server, so no need for software-based replication)
- Direct SAN backups - if your backup system can hook into and work with your SAN
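As a rough mental model of why SAN snapshots are cheap to take, here is a Python sketch of a copy-on-write snapshot, one common implementation technique; this is simplified and hypothetical (real arrays do it at block level inside the controllers), but it shows why taking a snapshot is nearly instantaneous:

```python
# Rough sketch of a copy-on-write snapshot, one common way arrays
# implement the point-in-time copies described above. Simplified and
# hypothetical: real arrays do this at block level inside the controllers.

class Volume:
    def __init__(self, blocks: dict[int, bytes]):
        self.blocks = blocks
        self.snapshots: list[dict[int, bytes]] = []

    def take_snapshot(self) -> int:
        self.snapshots.append({})  # empty at first: nothing is copied yet
        return len(self.snapshots) - 1

    def write(self, block: int, data: bytes) -> None:
        # Copy-on-write: preserve the old block in any snapshot that
        # hasn't saved its own copy of it yet, then overwrite in place.
        for snap in self.snapshots:
            snap.setdefault(block, self.blocks.get(block))
        self.blocks[block] = data

    def read_snapshot(self, snap_id: int, block: int) -> bytes:
        snap = self.snapshots[snap_id]
        # Blocks never rewritten since the snapshot still live in the volume.
        return snap[block] if block in snap else self.blocks[block]


vol = Volume({0: b"AAAA", 1: b"BBBB"})
sid = vol.take_snapshot()                      # instant: no data is copied
vol.write(0, b"XXXX")                          # only now is the old block saved
assert vol.read_snapshot(sid, 0) == b"AAAA"    # snapshot sees pre-write data
assert vol.read_snapshot(sid, 1) == b"BBBB"    # unchanged block, not duplicated
```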
If you're considering dipping your toe in the water of shared storage, look at products like HP's P4000 kit.