RAID1 Recovery after Degradation
It seems both of your disks are dying:
/dev/sda:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  4 Start_Stop_Count        0x0032   096   096   020    Old_age   Always       -       5039
  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       240
187 Reported_Uncorrect      0x0032   079   079   000    Old_age   Always       -       21
195 Hardware_ECC_Recovered  0x001a   044   015   000    Old_age   Always       -       26908616
/dev/sdb:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  4 Start_Stop_Count        0x0012   099   099   000    Old_age   Always       -       4911
  5 Reallocated_Sector_Ct   0x0033   088   088   005    Pre-fail  Always       -       90
196 Reallocated_Event_Count 0x0032   100   100   000    Old_age   Always       -       114
197 Current_Pending_Sector  0x0022   001   001   000    Old_age   Always       -       9640
So, again: never trust what the drive says about its own overall health, it lies!
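The one-line health verdict and the attribute table quoted above both come from smartctl; assuming smartmontools is installed, compare them yourself:

smartctl -H /dev/sda          # one-line verdict, often still "PASSED" on a dying drive
smartctl -A /dev/sda          # the vendor attribute table quoted above
smartctl -l error /dev/sda    # the drive's logged read/write errors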
You need to connect a third disk, partition it, and add it to your RAID arrays. Wait until the rebuild finishes, then install the bootloader on the new disk. Only then remove the two failing disks, connect a fourth one, and replicate again to restore redundancy.
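A minimal sketch of that procedure, assuming Linux MD RAID1 with /dev/md0 (boot) and /dev/md1 (LVM PV), a surviving member /dev/sda, and the new disk appearing as /dev/sdc; every device name here is an example, so adjust to your actual layout:

# Replicate the partition table of an existing member onto the new disk
# (sfdisk works for MBR; for GPT use: sgdisk -R /dev/sdc /dev/sda && sgdisk -G /dev/sdc)
sfdisk -d /dev/sda | sfdisk /dev/sdc

# Add the new partitions to the arrays
mdadm /dev/md0 --add /dev/sdc1
mdadm /dev/md1 --add /dev/sdc2

# If an array still has two active members, the new disk only becomes a
# spare; grow the array to three members to force a full copy:
mdadm --grow /dev/md0 --raid-devices=3
mdadm --grow /dev/md1 --raid-devices=3

# Watch the rebuild progress
watch cat /proc/mdstat

# When the resync is done, put the bootloader on the new disk
grub-install /dev/sdc

# Only then fail and remove the old members, for example:
mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1

Keep both old disks attached until the copy completes: a disk with thousands of pending sectors (like your sdb) may throw read errors during the rebuild, and md RAID1 can then fetch the unreadable block from the other mirror.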
And set up periodic checks and monitoring, to avoid getting into such a dangerous situation again.
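For example (config paths are Debian-style and the mail address is a placeholder; assume mdadm and smartmontools are installed and that root's mail actually reaches you):

# In /etc/mdadm/mdadm.conf: where the monitor sends event mail
MAILADDR root

# The monitor itself (most distros already run it as a service)
mdadm --monitor --scan --daemonise

# Kick off a consistency check of an array (distros often cron this monthly)
echo check > /sys/block/md0/md/sync_action

# In /etc/smartd.conf: track attributes, mail on problems, and schedule
# a short self-test daily at 02:00 and a long one on Saturdays at 03:00
/dev/sda -a -m root -s (S/../.././02|L/../../6/03)
/dev/sdb -a -m root -s (S/../.././02|L/../../6/03)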
It is surprising to see a separate boot RAID array with LVM on it; that is very unusual. The original purpose of a separate boot partition was precisely to keep it out of LVM so it could be accessed more easily (early bootloaders knew nothing about LVM, so that was a requirement).