Can I use an entire drive as a software RAID member?

Yes, you can do that, but it can cause an annoying side effect.

I have a system right next to me where I used whole devices as md RAID members. Every time it boots, it complains about broken partitions on those devices.

That's because data gets written to the very beginning of the drive while it is part of the RAID group. When the kernel inspects the devices at boot, it tries to interpret that data as a partition table.
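For MBR-style tables, the kernel's check is very shallow: if the first sector of the device ends in the magic bytes 0x55 0xAA (at byte offset 510), it will try to parse partition entries from that sector. Leftover RAID data that happens to contain those bytes is enough to trigger it. A minimal illustration of that check, run against a scratch image file rather than a real device (`disk.img` is just a stand-in name):

```shell
# Illustration only: check whether the first sector carries the MBR boot
# signature (0x55 0xAA at offset 510), which is what makes the kernel
# attempt to parse partition entries.
img=disk.img   # stand-in for a device such as /dev/sda (assumption)

# Build a fake first sector that happens to end in the MBR signature,
# the way stray RAID data might.
dd if=/dev/zero of="$img" bs=512 count=1 status=none
printf '\125\252' | dd of="$img" bs=1 seek=510 conv=notrunc status=none  # octal for 0x55 0xAA

# Read bytes 510-511 back and compare against the magic value.
sig=$(dd if="$img" bs=1 skip=510 count=2 status=none | od -An -tx1 | tr -d ' \n')
if [ "$sig" = "55aa" ]; then
    echo "looks like an MBR partition table"
else
    echo "no partition table signature"
fi
rm -f "$img"
```

Random data has a 1-in-65536 chance of matching those two bytes, which is why the complaints can come and go between boots.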

So far, that hasn't caused problems for me. It just delays the boot procedure and looks really frightening.


I just pulled up the logs to show what I was talking about. This is what's running through my console when I boot up the server.

Please note that the devices /dev/sda through /dev/sdd and /dev/sdf through /dev/sdj are all part of the RAID array. /dev/sde contains the system's root partition.

Nov 24 11:41:52 dump kernel: [   49.717165] sd 0:0:0:0: [sda] 2930277168 512-byte hardware sectors (1500302 MB)
Nov 24 11:41:52 dump kernel: [   49.717172] sd 0:0:0:0: [sda] Write Protect is off
Nov 24 11:41:52 dump kernel: [   49.717173] sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
Nov 24 11:41:52 dump kernel: [   49.717182] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Nov 24 11:41:52 dump kernel: [   49.717209] sd 0:0:0:0: [sda] 2930277168 512-byte hardware sectors (1500302 MB)
Nov 24 11:41:52 dump kernel: [   49.717213] sd 0:0:0:0: [sda] Write Protect is off
Nov 24 11:41:52 dump kernel: [   49.717214] sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
Nov 24 11:41:52 dump kernel: [   49.717221] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Nov 24 11:41:52 dump kernel: [   49.717222]  sda: unknown partition table
Nov 24 11:41:52 dump kernel: [   49.724463] sd 0:0:0:0: [sda] Attached SCSI disk
Nov 24 11:41:52 dump kernel: [   49.724504] sd 1:0:0:0: [sdb] 2930277168 512-byte hardware sectors (1500302 MB)
Nov 24 11:41:52 dump kernel: [   49.724510] sd 1:0:0:0: [sdb] Write Protect is off
Nov 24 11:41:52 dump kernel: [   49.724512] sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00
Nov 24 11:41:52 dump kernel: [   49.724519] sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Nov 24 11:41:52 dump kernel: [   49.724547] sd 1:0:0:0: [sdb] 2930277168 512-byte hardware sectors (1500302 MB)
Nov 24 11:41:52 dump kernel: [   49.724551] sd 1:0:0:0: [sdb] Write Protect is off
Nov 24 11:41:52 dump kernel: [   49.724552] sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00
Nov 24 11:41:52 dump kernel: [   49.724559] sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Nov 24 11:41:52 dump kernel: [   49.724561]  sdb:Driver 'sr' needs updating - please use bus_type methods
Nov 24 11:41:52 dump kernel: [   49.734320]  unknown partition table

These are only the first two drives in the RAID array, and you can already see the "unknown partition table" messages. The same message is printed for every RAID member.

Once that is done, this starts:

Nov 24 11:41:52 dump kernel: [   50.145507] attempt to access beyond end of device
Nov 24 11:41:52 dump kernel: [   50.145513] sdc: rw=0, want=7018997372, limit=2930277168
Nov 24 11:41:52 dump kernel: [   50.145515] Buffer I/O error on device sdc3, logical block 4250167552
Nov 24 11:41:52 dump kernel: [   50.145626] attempt to access beyond end of device
Nov 24 11:41:52 dump kernel: [   50.145627] sdc: rw=0, want=7018997373, limit=2930277168
Nov 24 11:41:52 dump kernel: [   50.145628] Buffer I/O error on device sdc3, logical block 4250167553

You can see it complains about sdc3, which doesn't actually exist (because the whole sdc device is used as a RAID member).
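To put the numbers from the log in perspective (both values are 512-byte sectors, copied from the messages above):

```shell
# Sector counts taken from the log above (512-byte sectors).
limit=2930277168   # real size of sdc as reported by the kernel
want=7018997372    # sector the kernel was asked to read for the phantom sdc3

# The bogus "partition" entry points roughly 2 TB past the end of the disk:
echo "$(( (want - limit) * 512 / 1024 / 1024 / 1024 )) GiB past the end"
# prints "1949 GiB past the end"
```

So the garbage that the kernel read as a partition table described a partition far larger than the physical disk, and the read attempts on it fail immediately.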

The next time I boot the machine, it might complain about a different drive, or about none at all, depending on what data happens to sit where a partition table would be.

And the worst part about it? I can't move to partitions now, because I would have to shrink each RAID member by a small amount (so the data fits inside a partition), and that would require re-creating the whole RAID array.
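For what it's worth, the usual way to avoid this from the start is to put a single RAID-type partition on each disk, slightly smaller than the whole device, and build the array from the partitions. A rough sketch of that setup, with placeholder device names (these commands destroy data, so this is illustration only, not something to paste):

```shell
# WARNING: destroys data -- /dev/sdX etc. are placeholder names.
# One GPT partition per disk, flagged as Linux RAID, leaving a little
# slack at the end so a replacement disk of slightly different size fits.
parted --script /dev/sdX mklabel gpt mkpart primary 1MiB -100MiB
parted --script /dev/sdX set 1 raid on

# Build the array from the partitions instead of the bare disks:
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
      /dev/sdX1 /dev/sdY1 /dev/sdZ1 /dev/sdW1
```

With a real partition table in sector 0, the kernel has something valid to parse at boot, and the slack also side-steps the "new disk is a few sectors smaller" problem during replacements.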


Yes, if you use md for RAID you can use the entire block device without partitioning it. See the mdadm man page for details.
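For completeness, the whole-device variant looks like this (placeholder device names, illustration only; these commands destroy data):

```shell
# Build an array directly from bare disks -- this works, but causes the
# "unknown partition table" noise described in the other answer.
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
      /dev/sdX /dev/sdY /dev/sdZ /dev/sdW

# mdadm identifies members by their superblocks, not by partitions:
mdadm --examine /dev/sdX
```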