A few questions about creating RAID0 with LVM

I'm a little confused, because I found two different sets of instructions for creating RAID0 with LVM. The first resource shows a way of creating RAID0 with this pattern:

lvcreate -i[num drives] -I[stripe size] -l100%FREE -n[lv name] [vg name]

but the official manual shows a slightly different approach to doing, at least I think, the same task:

lvcreate --type raid0  [--stripes Number --stripesize Size] VG [PVs]

In the second example we state explicitly that we are dealing with raid0. I'm not sure now which pattern is correct. I tried the first one and it created the volume without problems, but I haven't tested it any deeper yet.
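For reference, if I translate my case into the second syntax, I think the equivalent would be something like this (the VG name and stripe size are just the ones from my own commands further down):

sudo lvcreate --type raid0 --stripes 2 --stripesize 4 -l 100%FREE -n games lvm-system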

There is also a second question I would like to ask before I mess up my computer ;)

I have two identical SSDs and I want to speed up the performance of my system and games. I want to make a RAID 0 out of them, and I recently found an article saying that it could be done with LVM.

I'm quite surprised that this RAID is set up on the last "layer" of LVM, the logical volume. I would have expected it to happen at the level of creating volume groups, with normal partitions then created on top of the RAID 0.
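The plan is to prepare the two disks roughly like this first (a sketch; device names are the ones from the commands below):

# register both SSDs as LVM physical volumes
sudo pvcreate /dev/sda /dev/sdc
# group them into one volume group
sudo vgcreate lvm-system /dev/sda /dev/sdc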

Just to make sure, is this going to work well?

sudo lvcreate -i2 -I4 --size 100G -n root lvm-system /dev/sda /dev/sdc 
sudo lvcreate -i2 -I4 --size 4G -n swap lvm-system /dev/sda /dev/sdc 
sudo lvcreate -i2 -I4 -l 100%FREE -n games lvm-system /dev/sda /dev/sdc

I would appreciate any help!


The modern Linux kernel has two different drivers to manage multiple devices: device-mapper and the classical MD software RAID (the one used by mdadm).

The two LVM commands above both create a striped volume, but they use different drivers (a quick way to check which one you ended up with is sketched after this list):

  • the first command (the one without --type raid0) defines a striped segment type which, in turn, is a fancy name for device-mapper-level striping;

  • the second command (the one with --type raid0) uses the classical Linux MD driver to set up a "true" RAID0.
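If you want to confirm which segment type (and hence which driver) a given LV actually got, something like this should do it (assuming the VG is called lvm-system, as in the question):

# show segment type, stripe count and stripe size for every LV in the VG
sudo lvs -o lv_name,segtype,stripes,stripe_size lvm-system
# "striped" means device-mapper striping, "raid0" means the MD-backed target;
# the device-mapper table tells the same story (LVM doubles the dashes in names)
sudo dmsetup table | grep lvm--system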

Managing RAID/striping at the LVM level (rather than at the disk/partition level) can be useful when you want to apply different protection profiles (e.g. RAID0 vs RAID1) to different logical volumes (e.g. scratch space vs a data repository). For more information, have a look here.
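As a sketch of that flexibility (the VG and LV names below are made up for the example):

# scratch space: striped over both PVs for speed, no redundancy
sudo lvcreate --type raid0 --stripes 2 -L 50G -n scratch vgdata
# data repository: mirrored over the same two PVs for protection
sudo lvcreate --type raid1 -m 1 -L 200G -n data vgdata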

However, when using a homogeneous RAID level (or only a handful of different levels, maybe for the boot drives only), I generally prefer to rely on classical mdadm, especially when dealing with physical hosts/disks: its behavior is better documented and I find it easier to identify and replace a problematic drive. Moreover, as LVM is already quite a complex tool, I prefer to keep RAID management in mdadm.
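For completeness, the mdadm-first layout I'm describing would look roughly like this (device names and sizes are only examples):

# build a classic MD RAID0 array from the two disks
sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda /dev/sdc
# then layer LVM on top of the array
sudo pvcreate /dev/md0
sudo vgcreate lvm-system /dev/md0
sudo lvcreate -L 100G -n root lvm-system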