How to remove previous RAID configuration in CentOS for re-install

I have a server that was previously set up with software RAID1 under CentOS 5.5 (/dev/sda and /dev/sdb). I added two additional drives to the server and was attempting to re-install CentOS. The CentOS installer sees the two new drives fine (sdc and sdd), but it does not see the two original drives sda and sdb as individual drives. Instead it only shows Drive /dev/mapper/pdc_... (Model: Linux device-mapper). Basically, what I need to do is strip all RAID configuration off these drives so the installer sees them as individual physical disks.

I've tried pulling all the drives except one of the original ones, installing a minimal CentOS, and running dmraid -r -E, but it still sees the old RAID partition. None of the CentOS install options (remove previous partitions, etc.) seems to work.


This is an old thread, but it ranks high on Google, so many people still read it, and it deserves an update.

The "correct" way would be to use mdadm with --zero-superblock.

## If the device is being reused or re-purposed from an existing array, 
##  erase any old RAID configuration information:
mdadm --zero-superblock /dev/<drive>
## or if a particular partition on a drive is to be deleted:
mdadm --zero-superblock /dev/<partition>

From man mdadm:

--zero-superblock
   If  the  device contains a valid md superblock, the block is overwritten with zeros. 
   With --force the block where the superblock would be is overwritten even if it doesn't appear to be valid.
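
To confirm the wipe worked, mdadm --examine should no longer find a superblock (the partition name below is a placeholder):

mdadm --examine /dev/<partition>
## expected after a successful wipe:
## mdadm: No md superblock detected on /dev/<partition>.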

The dd method with bs=<block size> also works, but you need to be careful: not all superblock formats are written to the beginning of the disk. Depending on the metadata version, some are written at the end of the disk.
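
On a reasonably modern rescue system, wipefs from util-linux is a safer alternative here: it looks up the known offset of each signature (including end-of-disk md superblocks) instead of guessing byte ranges. A sketch, with the device name as a placeholder:

## list detected signatures first (read-only)
wipefs /dev/<drive>
## then erase all of them
wipefs --all /dev/<drive>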

Update: prefer gdisk for wiping over the methods above.

# wipe any GPT or MBR data
gdisk /dev/sdc
    x = extra functionality (expert menu)
    z = zap (destroy) GPT data structures; gdisk then offers to blank the MBR as well
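
If you prefer a non-interactive equivalent, sgdisk from the same gdisk package can do the zap in one command; a sketch, with the device name as a placeholder:

# destroy GPT and MBR data structures in one shot
sgdisk --zap-all /dev/sdc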

Source:

  • https://wiki.archlinux.org/index.php/RAID#Prepare_the_Devices
  • https://linux.die.net/man/8/mdadm

For me, the fastest (in other words: easiest to remember) way to fix this is to boot into rescue mode and overwrite the first 50 KiB of the disk with dd:

dd if=/dev/zero of=/dev/sda bs=512 count=100

should do the trick. This overwrites the MBR, the partition table, and any RAID metadata stored at the beginning of the disk.
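
As an optional sanity check, the first sector should now read back as all zeros:

# dump the first 512 bytes; hexdump collapses repeated zero lines into '*'
hexdump -C -n 512 /dev/sda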


The problem was with the CentOS Anaconda installer. The Ubuntu installer had no problem seeing the individual drives; even doing a full Ubuntu install on the drives did not clear out the RAID metadata. What ended up working was starting the CentOS installer with

linux text nodmraid

That let the installer run without checking for existing RAID configurations, and the partitioning went fine.
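
For completeness, dmraid itself can erase this fakeraid (e.g. Promise pdc_*) metadata from a rescue shell, although as noted in the question it did not work for me on CentOS 5.5; a sketch, with device names as placeholders:

# list the vendor RAID metadata dmraid sees, then erase it (it prompts first)
dmraid -r
dmraid -r -E /dev/sda
dmraid -r -E /dev/sdb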


Ran into this as well. Superblock metadata version 0.90 puts the software RAID information at the end of the disk, so you may want to use dd to zero out the last few MB instead.
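
A sketch of zeroing the last MiB, using blockdev to get the disk size in 512-byte sectors (device name is a placeholder):

# total size in 512-byte sectors
SECTORS=$(blockdev --getsz /dev/sdX)
# zero the last 1 MiB (2048 sectors), where the 0.90 superblock lives
dd if=/dev/zero of=/dev/sdX bs=512 seek=$((SECTORS - 2048)) count=2048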


Using parted in Knoppix as root worked for me.

knoppix@microknoppix $ su
root@microknoppix $ parted <device>

(parted) print

This lists the partitions on the device. Use the command rm # where # is the partition number from the list. The first attempt may report that it can't; run it again, then type print once more. The entry will be blank where the partition used to be, and GParted confirms the space is now unallocated.
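
The same steps can be run non-interactively with parted's --script mode; a sketch, where the device and partition number are placeholders:

parted --script /dev/sdX print   # list partition numbers
parted --script /dev/sdX rm 1    # remove partition 1; repeat for each entry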

Booted back into the CentOS 7 installer and everything went fine.