Linux RAID 1: How to make a secondary HD boot?

1) How can I detect if grub is installed in /dev/sdb's MBR?

You can dump the first sector (the MBR) and grep it for the GRUB signature string; a match means GRUB's stage 1 is present there, and you can substitute /dev/sdb to check the second disk:

# dd if=/dev/sda bs=512 count=1 | xxd | grep -i grub
1+0 records in
1+0 records out
512 bytes (512 B) copied, 0.00103986 s, 492 kB/s
0000180: 4752 5542 2000 4765 6f6d 0048 6172 6420  GRUB .Geom.Hard

2) Is it safe to run grub-install on /dev/sdb? Is this the correct way of making it bootable?

Yes, you need to have grub installed on both disks in the array.
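
If you use grub-install directly, a minimal example (assuming GRUB Legacy's grub-install is available and /dev/sdb is the second RAID 1 member) would be:

# grub-install /dev/sdb

The grub-shell method described further below achieves the same result while giving you explicit control over which BIOS drive number the disk is mapped to.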


You tagged software-raid, so learning the grub shell can help; see: How to boot after RAID failure (software RAID)?

GRUB Legacy maps its own disk names to Linux devices in the /boot/grub/device.map file. GRUB Legacy (the boot manager) doesn't identify disks the same way Linux does: instead of /dev/sda, the first disk is identified as (hd0).
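
For reference, a typical device.map on a two-disk system looks something like this (the device names are an assumption; check your own file):

(hd0)   /dev/sda
(hd1)   /dev/sdb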

Tutorials on the grub command can be found elsewhere online.

Essentially, the author in the linked answer runs the grub shell against each Linux device while mapping it to the same GRUB Legacy drive, e.g. (hd0) for all three disks in that example, not (hd1), (hd2), etc. (overriding what device.map would normally say). This ensures that whichever disk the BIOS ends up booting from is the one GRUB treats as (hd0), which is the point of the redundancy.
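
A sketch of such a grub-shell session, assuming GRUB Legacy, /dev/sdb as the second mirror member, and /boot on its first partition (adjust device and partition numbers to your layout):

# grub
grub> device (hd0) /dev/sdb
grub> root (hd0,0)
grub> setup (hd0)
grub> quit

Mapping /dev/sdb to (hd0) here reflects what the BIOS will present if /dev/sda fails or is removed and /dev/sdb becomes the first boot disk.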

Note, however, that the solution the linked author describes doesn't modify the MBR. This alternate, software-RAID-specific setup needs to be done before a disk fails; otherwise you'll need a boot disk/device to recover. For a RAID 1 array the MBR should be the same on every disk, even with LVM on top. An MBR bootloader can't direct the system to another disk: depending on its code, it either hands off to GRUB Legacy in the boot-flagged partition or bypasses the boot sector and loads the kernel directly, but only within the same disk, to my understanding.
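
To check that the boot code really is identical across the mirror, you can compare the first 440 bytes of each disk, i.e. the bootstrap code area that precedes the disk signature and partition table (a sketch, assuming a bash shell for process substitution and MBR-partitioned disks):

# cmp <(dd if=/dev/sda bs=440 count=1 2>/dev/null) <(dd if=/dev/sdb bs=440 count=1 2>/dev/null)

No output from cmp means the boot code matches on both disks.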