How to connect multiple HDDs to a computer?
What is the best and most efficient way to connect 10-20 SATA HDDs to a home computer? I mean an ordinary motherboard with only two SATA ports.
The purpose of this build is a handy storage system for large files like movies, backups, etc. The HDDs I currently have are mostly 3 TB. I now want to combine them into an integrated archive so the data spread across the different HDDs is easier to access.
Solution 1:
I can't really give a definitive answer without knowing how you plan to use those disks.
NAS/enclosures
For example, if you need large storage, you might use a JBOD backplane or enclosure and "see" the 20 HDDs as two drives (possibly even with RAID). In this case disk management is outsourced to the enclosure, and if you wanted to query a specific drive's status you'd need Linux-compatible software capable of "talking" to the enclosure.
The above solution has the advantage of being flexible and well tested. On the other hand, a 10- or 12-disk enclosure can be expensive, especially a RAID-capable one. There is a small risk of incompatibility with the OS (which you can reduce if you're comfortable tweaking the kernel and its modules); be sure to check with the vendor, and try downloading the drivers and reading the relevant READMEs before purchase.
This is one possibility.
Issue: data transfer rate
Another drawback is bandwidth: a 10-disk enclosure will run at most at eSATA speed (1.5 or 3 Gbps - you might reach 6 Gbps on paper, but I wouldn't bet on it), though it may be able to sustain that speed longer than a single drive would. Most desktop drives deliver a sustained rate of ~500 Mbps, but an external enclosure may be able to read/write in parallel (RAID 0) and share the load between disks.
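A back-of-the-envelope check makes the bottleneck concrete (the per-disk figure is the ~500 Mbps sustained rate quoted above; the 3 Gbps link speed is an assumption for a single eSATA uplink):

```shell
# Ten desktop drives streaming in parallel vs. one 3 Gbps eSATA uplink.
disks=10
per_disk_mbps=500      # ~sustained throughput per desktop drive (figure from above)
link_mbps=3000         # nominal 3 Gbps eSATA link, ignoring encoding overhead
aggregate=$((disks * per_disk_mbps))
echo "aggregate: ${aggregate} Mbps, uplink: ${link_mbps} Mbps"
# Once roughly six drives stream at full speed, the single cable saturates.
```

So even at nominal link speed, the enclosure's uplink can carry only a fraction of what the disks can collectively deliver.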
Issue: low-level control at disk level
If you want independent control of those disks - even just the ability to query their SMART status, or to know whether a disk has failed; cheap enclosures usually don't let you do this programmatically, forcing you to visually inspect external LEDs - you need one or more multi-SATA expansion cards. This will definitely be cheaper than the NAS/enclosure route above.
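With directly attached disks, the SMART query is a one-liner with smartmontools. A minimal sketch of extracting the health verdict; the sample variable mimics what `smartctl -H` typically prints, and on a real system you would pipe `smartctl -H /dev/sdX` (as root) instead:

```shell
# Typical health line printed by `smartctl -H` for a good drive.
sample='SMART overall-health self-assessment test result: PASSED'
# Pull out the verdict; replace the printf with the real smartctl call in practice.
health=$(printf '%s\n' "$sample" | awk -F': ' '/overall-health/ {print $2}')
echo "disk health: $health"
```

Looping that over `/dev/sd[a-t]` gives you a one-screen health sweep of the whole array, something most cheap enclosures simply cannot offer.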
Issue: what is really on an enclosure disk set
Unless you go for the JBOD solution and "see" the disks as independent drives, to be handled separately at increased computational cost on the central CPU, the CPU in the enclosure will allocate space across a set of, say, 10 disks and devise its own strategy for accessing it. This means that a single disk won't carry a full, independent file system but only part of one. And exactly which part may be difficult to tell from the outside, unless you are the enclosure's electronics that put it there in the first place.
What this means is that if one disk dies, with a proper RAID setup you're still safe and no data is lost. But if the electronics die, you may find yourself with ten good hard disks that nobody knows how to read: all the data is still there, but unreachable - lost to all intents and purposes, and needing a restore from a complete backup.
So additional things you may want to investigate before purchasing a NAS/enclosure are what kind of data organization it uses and how likely you are to find spare parts some years down the line. For example, many NAS units are actually optimized, custom-built Linux boxes. If such a NAS dies on you, chances are that connecting the disks to a suitable Linux computer will make the data on them accessible again. Other vendors use proprietary schemes - sometimes standard schemes intentionally tweaked to be incompatible and "encourage" customer fidelity - that can't be read on other vendors' hardware.
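If the dead NAS was indeed a Linux box using standard md software RAID (an assumption - many are, but check your model before relying on it), recovery on any Linux machine with the disks attached usually looks something like this (device names are examples):

```shell
# Inspect the RAID metadata ("superblock") each member disk carries.
sudo mdadm --examine /dev/sdb /dev/sdc /dev/sdd
# Ask mdadm to reassemble whatever arrays it can find from those superblocks.
sudo mdadm --assemble --scan
# If an array came up (e.g. /dev/md0), mount it read-only first to play it safe.
sudo mount -o ro /dev/md0 /mnt/recovered
```

This is exactly the escape hatch a proprietary on-disk scheme takes away from you.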
Issue: powering all those disks
If you don't use an externally powered enclosure, you'll have to power all those hard disks yourself. At spin-up, eight disks can draw enough current to overload a stock PSU, which resets the PC via the "power good" signal and loops the boot process, possibly forever (been there, done that). I'm told that sometimes the disks keep spinning after the reset, so on the next cycle they don't draw as much current and the system starts; even so, I shudder to think what that does to the system's life expectancy. So you'd need one or more PSUs capable of delivering a substantial spin-up current (around 30-60 A), or a card supporting staggered/delayed spin-up. Not all cards do, and those that do usually offer only two settings, "wake immediately" and "wake in 10 seconds"; you might need more than that if you wanted to spin the disks up in, say, four groups.
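A rough +12 V budget shows why spin-up is the critical moment. The per-disk figures below are typical 3.5" datasheet values and are assumptions - check your own drives' datasheets:

```shell
# Crude 12 V rail budget for 20 disks, in milliamps per disk.
disks=20
spinup_ma=2000      # ~2 A per 3.5" disk on +12 V during spin-up (assumed)
idle_ma=600         # ~0.6 A per disk once spinning (assumed)
echo "peak at spin-up: $((disks * spinup_ma / 1000)) A on +12 V"
echo "steady state:    $((disks * idle_ma / 1000)) A on +12 V"
# Staggered spin-up in 4 groups of 5 cuts the worst-case peak to
# 5 * 2 A + 15 * 0.6 A = 19 A: the last group spins up while the rest idle.
```

The peak is roughly three times the steady-state draw, which is why a PSU sized for normal operation can still trip at power-on.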
The DIY option
A third possibility, again depending on what you need those disks for, is to rethink the whole architecture. A port multiplier will set you back around USD 500, while an 8-SATA motherboard with gigabit Ethernet can cost as little as USD 79. With three such motherboards, three oversized power supplies and one gigabit switch, you have something that can handle 24 hard disks independently (and much more flexibly).
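With the three-box option, each box exports its disks over the gigabit network and the main computer mounts them under one tree; a minimal NFS sketch (hostnames and paths here are made up for illustration):

```shell
# On each storage box, export the pooled mount point to the LAN.
# /etc/exports would contain a line like:
#   /srv/archive  192.168.1.0/24(rw,no_subtree_check)
sudo exportfs -ra                                      # apply the exports
# On the main computer, mount each box under one archive tree.
sudo mount -t nfs storage1:/srv/archive /mnt/archive/box1
sudo mount -t nfs storage2:/srv/archive /mnt/archive/box2
sudo mount -t nfs storage3:/srv/archive /mnt/archive/box3
```

The gigabit link caps each box at roughly 1 Gbps, but the three boxes transfer in parallel and each one fails independently of the others.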
Considerations: a balanced architecture
Even if you prefer the USD 1,000 double-port-multiplier-enclosure solution, you might consider investing in a computer upgrade as well: a 2-SATA motherboard is probably long in the tooth, and might not get you all the bang for the buck a port-multiplier enclosure is built to deliver.
Considerations: disk failure rate and maintenance costs
Also consider that with 20 disks, disk failure is something you really want to plan for. Hot-swap capability and RAID offloading are much more common (and easier to use) in external NAS/multiplier units than in DIY solutions; you may want to factor in maintenance and downtime costs.
External enclosure failure rate
For external enclosures, you'll often hear horror stories about enclosures failing and even destroying the disks inside, or at least shortening their lifespan. It happens. The main reason is that enclosure builders are too often lowest-bidding cheapskates who don't pay sufficient attention to a simple fact: a hard disk is an electric inductive motor with a bunch of sensitive electronics on it. It therefore requires, or at the very least deserves, a proper working environment, which for a hard disk boils down to "constant temperature, not too warm, and clean input power".
I've met several enclosures that failed abysmally on both counts: delivering "dirty" power with spikes and over/undervoltages (the death of electronics), and relying on passive cooling or a single, often undersized, 12 V rear fan with neither redundancy nor a failure alarm. Which means that when, not if, the USD 2.00 fan fails, the power is not cut and no alert buzzer sounds; half a dozen USD 250 disks may silently get hotter and hotter until they lock up or crash. When that happens, the system can remain powered and in some cases (depending on the disks and their failure mode) get even hotter. I have seen a 5-disk enclosure with its plastic front melted. Needless to say, the RAID array was unsalvageable - all the disks were dead (thank God for adequate and updated backups!).
Unfortunately, while features like redundant fans or an overtemperature alert can be gleaned from pictures and manuals before purchase, they're usually available only on much more expensive enclosures. Solid thermal protection might add USD 10 to the bottom line, and several manufacturers seem to believe that risking USD 2,500 of magnetic storage to save those USD 10 is worthwhile.
Solution 2:
You don't, really. There are a few problems to deal with. You almost certainly don't have enough power connectors: the average power supply comes with 6-8 at most, so you might need additional PSUs. Your case won't have room for all those drives, so you'd need a bigger one. You'd also need to add port-multiplier cards and backplanes...
You'd end up with something like one of Backblaze's storage pods: a massive box of drives with lots of backplanes, multiple PSUs and plenty of cooling. This wouldn't be an average desktop in any sense. With this many drives, it might make sense to use a few regular cases and simply split up the storage; the motherboards and cases won't be a big cost factor relative to the whole system, and it would give you better redundancy.
Solution 3:
A typical desktop motherboard usually has 4, 6, or even 8 SATA ports, and you can add expansion cards to increase the number. But you likely won't have the physical space to house 10-20 drives; you should probably consider an external NAS instead.
Solution 4:
A single device holding that many hard drives typically isn't marketed to the general public. The first thing to find is a chassis big enough to hold all the drives you want; it will need to be a server model or a custom-fabricated one.
Depending on whether your hard drives are LFF or SFF, you can fit 12 or 24 SATA hard drives into this chassis, for example: http://h10010.www1.hp.com/wwpc/us/en/sm/WF05a/15351-15351-3896136-5080871-4290526-4324034.html?dnr=1
If you already have hardware, find a compatible 20-bay chassis that supports your drives. You will then need sufficiently powerful power supplies, and you may want a RAID card to connect all your hard drives (simple expansion cards will work as well).
Solution 5:
As Keven suggested, it's possible, but you're obviously going to be limited by case space if you don't want it all sprawled out.
A NAS is a good idea, but it needs more hardware; alternatively, you could look at RAID arrays or cards for your PC if your case is big enough.