How to boot after RAID failure (software RAID)?
This is an old chestnut. The short answer is that "grub-install" is often the wrong tool for software RAID. Here is an example from a system with a 3-way RAID-1 array, where the /boot partition lives on /dev/md0. The grub session below installs GRUB to the MBR of each member disk, so that if one disk fails, you can boot off one of the others.
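Before touching GRUB, it's worth confirming which partitions actually back /dev/md0. The member names below (sda1/sdb1/sdc1) are just an assumed layout for this example:

# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdc1[2] sdb1[1] sda1[0]
      524224 blocks [3/3] [UUU]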
# grub
grub> find /grub/stage1
(hd0,0)
(hd1,0)
(hd2,0)
grub> device (hd0) /dev/sda
grub> root (hd0,0)
grub> setup (hd0)
grub> device (hd0) /dev/sdb
grub> root (hd0,0)
grub> setup (hd0)
grub> device (hd0) /dev/sdc
grub> root (hd0,0)
grub> setup (hd0)
grub> quit
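As a quick sanity check (not a full boot test), you can look for the GRUB stage1 signature in the first sector of each disk; repeating the same command for sdb and sdc should print the same result:

# dd if=/dev/sda bs=512 count=1 2>/dev/null | strings | grep GRUB
GRUB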
Newer versions of GRUB (GRUB 2) handle this much more gracefully, but CentOS 6 / RHEL 6 still ship with the older GRUB (0.97).
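For comparison, on a distribution that ships GRUB 2 this whole dance reduces to running grub-install once per disk (a sketch assuming the same three disks; GRUB 2 can read /boot off an mdraid mirror directly):

# grub-install /dev/sda
# grub-install /dev/sdb
# grub-install /dev/sdc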
To test: change the "timeout=5" value in your grub.conf file (under /boot) to something like timeout=30. Then swap the positions of two of the drives before powering the system back on, or change the boot order of the hard drives in the BIOS. You can also rehearse a failure in software, as sketched below.
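A softer rehearsal (assuming sda1 is a member of /dev/md0, per the layout above) is to mark one member failed with mdadm and reboot. Note the BIOS may still read the MBR from the first disk, so combine this with the boot-order change for a full test:

# mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1
# cat /proc/mdstat            # should now show a degraded array: [3/2] [_UU]
# reboot                      # system should come up on the surviving members
# mdadm /dev/md0 --add /dev/sda1    # afterwards, re-add the member and let it resync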
(Naturally... make sure you have good backups and know how to put everything back in the correct configuration. Testing this out on a throwaway system is always a good idea.)