Maximum number of hard drives in 64-bit Linux?
From this LinuxQuestions post:
Linux does not put arbitrary limits on the number of hard disks.
Also, from this post in the Debian mailing list:
That's easy. After /dev/sdz comes /dev/sdaa. And, I've just tested it by making and logging into 800 ISCSI targets on my laptop, after /dev/sdzz comes /dev/sdaaa. :)
and this blog post:
For SATA and SCSI drives under a modern Linux kernel, the same as above applies except that the code to derive names works properly beyond sdzzz up to (in theory) sd followed by 29 z‘s!
So, theoretically there are limits, but in practice they are unreachable.
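The naming progression described in those quotes (sda … sdz, sdaa … sdzz, sdaaa …) is a bijective base-26 encoding of the disk index. A small Python sketch of that scheme (my own illustration, not kernel code):

```python
def sd_name(index):
    """Map a 0-based disk index to its /dev/sdX suffix (bijective base-26)."""
    suffix = ""
    index += 1  # shift to 1-based so that 'a' .. 'z' map to 1 .. 26
    while index > 0:
        index -= 1
        suffix = chr(ord('a') + index % 26) + suffix
        index //= 26
    return "sd" + suffix

print(sd_name(0))    # sda
print(sd_name(25))   # sdz
print(sd_name(26))   # sdaa
print(sd_name(701))  # sdzz
print(sd_name(702))  # sdaaa
```

Note there is no "digit" for zero: after `sdz` comes `sdaa`, not `sdba`, which is why plain base-26 arithmetic doesn't work here.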
There is, in fact, a limit on the number of drives exposed by Linux's abstract SCSI subsystem, which includes SATA and USB drives. This is because device files are identified by major/minor device number pairs, and the numbering scheme allocated to the SCSI subsystem imposes this implicit limit.
https://www.kernel.org/doc/Documentation/devices.txt
The following major numbers are allocated to SCSI disks: 8, 65 through 71, and 128 through 135, for a total of 16 majors. Each major has 256 possible minor numbers (range 0..255). Each disk gets 16 consecutive minors, where the first represents the whole disk and the next 15 represent partitions.
let major = number of allocated major numbers = 16
let minor = number of minor numbers per major = 256
let parts = number of minor numbers per disk = 16
major * (minor / parts) = 16 * (256 / 16) = 256 possible drives
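The arithmetic above can be sketched in Python (the major numbers are taken from devices.txt; the index-to-number mapping is my own illustration of the static scheme):

```python
# The 16 majors allocated to SCSI disks in devices.txt.
sd_majors = [8] + list(range(65, 72)) + list(range(128, 136))
minors_per_major = 256   # classic 8-bit minor space
minors_per_disk = 16     # whole disk + 15 partitions

print(len(sd_majors) * (minors_per_major // minors_per_disk))  # 256

def drive_numbers(n):
    """(major, first minor) for the n-th SCSI disk under the static scheme."""
    return sd_majors[n // 16], (n % 16) * minors_per_disk

print(drive_numbers(0))   # (8, 0)   -> /dev/sda
print(drive_numbers(16))  # (65, 0)  -> /dev/sdq
```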
I've previously seen people cite 128 as the limit. I believe Linux allocated majors 128 through 135 more recently, which would explain the discrepancy.
The naming scheme (/dev/sdbz7) is chosen by userland, not by the Linux kernel. In most cases these names are managed by udev, eudev, or mdev (though in the past they were created manually). I don't know their naming schemes. Don't necessarily rely on all Linux-based systems naming devices the same way, as the system administrator can modify the device naming policies.
The RHEL technology capabilities and limits page suggests at least 10000 with a recent enough kernel (see the 'Maximum number of device paths ("sd" devices)' row). This amount is greater than that mentioned by @luiji-maryo because:
- If configured to be allowed, a device can be allocated a major/minor number dynamically (see https://www.kernel.org/doc/Documentation/devices.txt for details).
- Linux minor device numbers can be much larger than an 8-bit value (a dev_t now carries a 20-bit minor).
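You can see the wider minor space from Python's stdlib device-number helpers (this is my own quick sketch, not something from the RHEL page): `os.makedev` and `os.minor` round-trip minors well beyond 255.

```python
import os

# On modern Linux a dev_t packs a 12-bit major and a 20-bit minor,
# so minors are not limited to 0..255.
dev = os.makedev(8, 5000)            # a minor well beyond 8 bits
print(os.major(dev), os.minor(dev))  # 8 5000
```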
One way to show this to yourself is using the scsi_debug module:
modprobe scsi_debug max_luns=10 num_tgts=128
After a short wait, on a mainstream Linux distro you should now have 1280 (128 targets × 10 LUNs) more SCSI disks. You can use
ls -l <pathtodisk>
to see their major/minor numbers.
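Equivalently, the major/minor pair that ls -l prints in place of a file size can be read from st_rdev. A sketch in Python, using /dev/null (char 1:3 on Linux) as a portable stand-in — substitute any of the scsi_debug disk nodes:

```python
import os

# Device nodes carry their major/minor pair in st_rdev; this is what
# ls -l shows in place of a size for block and character devices.
st = os.stat("/dev/null")
print(os.major(st.st_rdev), os.minor(st.st_rdev))  # 1 3 on Linux
```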
NB (1): virtualisation software normally has much lower limits (in the hundreds or fewer; see e.g. the vSphere 6.0 limits) on the maximum number of controllers that can be attached to a VM and the maximum number of disks you can hang off those controllers, so you're unlikely to hit Linux's limits that way.
NB (2): Both BSG and SG limit themselves (via BSG_MAX_DEVS and SG_MAX_DEVS respectively) to a maximum of 32768 devices. Even if you somehow didn't need /dev/ entries for the disks themselves, you would have difficulty sending down more specialised SCSI commands without these extra devices.