How to delete removed devices from an mdadm RAID1?

I had to replace two hard drives in my RAID1. After adding the two new partitions, the old ones still show up as removed while the new ones are only added as spares. I've had no luck removing the partitions marked as removed.
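
For reference, the new partitions were added the standard way with mdadm --add, something along these lines (a sketch; the exact invocations aren't shown here, and the device names are taken from the output further down):

mdadm /dev/md1 --add /dev/sda2    # add first replacement partition
mdadm /dev/md1 --add /dev/sdb2    # add second replacement partition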

Here's the RAID in question. Note the two devices (0 and 1) with state removed.

$ mdadm --detail /dev/md1

mdadm: metadata format 00.90 unknown, ignored.
mdadm: metadata format 00.90 unknown, ignored.
/dev/md1:
        Version : 00.90
  Creation Time : Thu May 20 12:32:25 2010
     Raid Level : raid1
     Array Size : 1454645504 (1387.26 GiB 1489.56 GB)
  Used Dev Size : 1454645504 (1387.26 GiB 1489.56 GB)
   Raid Devices : 3
  Total Devices : 3
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Tue Nov 12 21:30:39 2013
          State : clean, degraded
 Active Devices : 1
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 2

           UUID : 10d7d9be:a8a50b8e:788182fa:2238f1e4
         Events : 0.8717546

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       0        0        1      removed
       2       8       34        2      active sync   /dev/sdc2

       3       8       18        -      spare   /dev/sdb2
       4       8        2        -      spare   /dev/sda2

How do I get rid of these devices and add the new partitions as active RAID devices?
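
For reference, the standard way to clear dead entries is the failed and detached keywords that mdadm --remove accepts (per mdadm(8)); something like this made no difference here:

mdadm /dev/md1 --remove failed      # drop all members marked as failed
mdadm /dev/md1 --remove detached    # drop members whose device node no longer exists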

Update 1

I seem to have gotten rid of them. My RAID is resyncing, but the two drives are still marked as spares and are numbered 3 and 4, which looks wrong. I'll have to wait for the resync to finish.

All I did was fix the metadata error by editing my mdadm.conf and rebooting. I had tried rebooting before, but this time it worked, for whatever reason.
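
The warning itself comes from an old-style metadata=00.90 clause in /etc/mdadm/mdadm.conf that newer mdadm versions no longer accept. The edit amounted to something like this (the ARRAY line below is a sketch of a typical entry; only the UUID is taken from the --detail output above):

# before (rejected by newer mdadm):
ARRAY /dev/md1 level=raid1 num-devices=3 metadata=00.90 UUID=10d7d9be:a8a50b8e:788182fa:2238f1e4
# after (either drop the metadata= clause or use the accepted spelling):
ARRAY /dev/md1 level=raid1 num-devices=3 metadata=0.90 UUID=10d7d9be:a8a50b8e:788182fa:2238f1e4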

Number   Major   Minor   RaidDevice State
   3       8        2        0      spare rebuilding   /dev/sda2
   4       8       18        1      spare rebuilding   /dev/sdb2
   2       8       34        2      active sync   /dev/sdc2

Update 2

After resyncing, the problem is exactly the same as before: the two new partitions are listed as spares while the old ones marked as removed are still there.

Is stopping and re-creating the array the only option for me?
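
(For what it's worth, the stop-and-re-create route would look roughly like the sketch below: build a degraded array on the one good member, then add the new partitions. But --create rewrites the superblocks, so the metadata version and the position of the good member have to be exactly right or the data is gone. A last resort, not a recommendation.)

mdadm --stop /dev/md1
# re-create degraded on the member that still holds the data,
# keeping the original 0.90 metadata; "missing" leaves the other slots empty
mdadm --create /dev/md1 --level=1 --raid-devices=3 --metadata=0.90 /dev/sdc2 missing missing
# then add the replacements so they resync as proper members
mdadm /dev/md1 --add /dev/sda2
mdadm /dev/md1 --add /dev/sdb2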

Update 3

Here's the current state according to /proc/mdstat:

# cat /proc/mdstat 
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] [linear] [multipath] 
md1 : active raid1 sdb2[3](S) sdc2[0] sda2[4](S)
      1454645504 blocks [3/1] [U__]

md0 : active raid1 sdc1[0] sdb1[2] sda1[1]
      10488384 blocks [3/3] [UUU]

unused devices: <none>

Answer

In your specific case:

mdadm --grow /dev/md1 --raid-devices=3

For everyone else, set --raid-devices to the number of working devices currently in the array.
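
Once the spares start rebuilding, progress and the final state can be checked with the usual commands; when it's done, all three members should show up as active sync:

cat /proc/mdstat          # shows the rebuild with a progress bar and percentage
mdadm --detail /dev/md1   # per-device state; no more removed or spare entries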