What happens if I force ZFS to detach a hot spare with no valid replicas?
I have a ZFS pool made up of 6 RAIDZ vdevs. One of the RAIDZs is degraded because it lost two disks close enough together that ZFS wasn't able to recover from the first failure before the second disk failed. Here is the output of "zpool status" shortly after a reboot:
  pool: pod2
 state: DEGRADED
status: One or more devices has experienced an error resulting in data
        corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
        entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: resilver in progress for 0h6m, 0.05% done, 237h17m to go
config:

        NAME                                                 STATE     READ WRITE CKSUM
        pod2                                                 DEGRADED     0     0 29.3K
          raidz1-0                                           ONLINE       0     0     0
            disk/by-id/scsi-SATA_ST3000DM001-9YN_W1F165XG    ONLINE       0     0     0
            disk/by-id/scsi-SATA_ST3000DM001-9YN_W1F1660X    ONLINE       0     0     0
            disk/by-id/scsi-SATA_ST3000DM001-9YN_W1F1678R    ONLINE       0     0     0
            disk/by-id/scsi-SATA_ST3000DM001-9YN_W1F1689F    ONLINE       0     0     0
            disk/by-id/scsi-SATA_ST3000DM001-9YN_W1F16AW9    ONLINE       0     0     0
          raidz1-1                                           ONLINE       0     0     0
            disk/by-id/scsi-SATA_ST3000DM001-9YN_W1F16C6E    ONLINE       0     0     0
            disk/by-id/scsi-SATA_ST3000DM001-9YN_W1F16C9F    ONLINE       0     0     0
            disk/by-id/scsi-SATA_ST3000DM001-9YN_W1F16FCD    ONLINE       0     0     0
            disk/by-id/scsi-SATA_ST3000DM001-9YN_W1F16JDQ    ONLINE       0     0     0
            disk/by-id/scsi-SATA_ST3000DM001-9YN_W1F17M6V    ONLINE       0     0     0
          raidz1-2                                           ONLINE       0     0     0
            disk/by-id/scsi-SATA_ST3000DM001-9YN_W1F17MSZ    ONLINE       0     0     0
            disk/by-id/scsi-SATA_ST3000DM001-9YN_W1F17MXE    ONLINE       0     0     0
            disk/by-id/scsi-SATA_ST3000DM001-9YN_W1F17XKB    ONLINE       0     0     0
            disk/by-id/scsi-SATA_ST3000DM001-9YN_W1F17XMW    ONLINE       0     0     0
            disk/by-id/scsi-SATA_ST3000DM001-9YN_W1F17ZHY    ONLINE       0     0     0
          raidz1-3                                           ONLINE       0     0     0
            disk/by-id/scsi-SATA_ST3000DM001-9YN_W1F18BM4    ONLINE       0     0     0
            disk/by-id/scsi-SATA_ST3000DM001-9YN_W1F18BRF    ONLINE       0     0     0
            disk/by-id/scsi-SATA_ST3000DM001-9YN_W1F18XLP    ONLINE       0     0     0
            disk/by-id/scsi-SATA_ST3000DM001-9YN_Z1F09880    ONLINE       0     0     0
            disk/by-id/scsi-SATA_ST3000DM001-9YN_Z1F098BE    ONLINE       0     0     0
          raidz1-4                                           DEGRADED     0     0 58.7K
            disk/by-id/scsi-SATA_ST3000DM001-9YN_Z1F09B0M    ONLINE       0     0     0
            spare-1                                          DEGRADED     0     0     0
              disk/by-id/scsi-SATA_ST3000DM001-9YN_Z1F09BEN  UNAVAIL      0     0     0  cannot open
              disk/by-id/scsi-SATA_ST3000DM001-1CH_W1F49M01  ONLINE       0     0     0  837K resilvered
            disk/by-id/scsi-SATA_ST3000DM001-9YN_Z1F0D6LC    ONLINE       0     0     0
            disk/by-id/scsi-SATA_ST3000DM001-9YN_Z1F0CWD1    ONLINE       0     0     0
            spare-4                                          DEGRADED     0     0     0
              disk/by-id/scsi-SATA_ST3000DM001-9YN_Z1F09C8G  UNAVAIL      0     0     0  cannot open
              disk/by-id/scsi-SATA_ST3000DM001-1CH_W1F4A7ZE  ONLINE       0     0     0  830K resilvered
          raidz1-5                                           ONLINE       0     0     0
            disk/by-id/scsi-SATA_ST3000DM001-1CH_Z1F2KNQP    ONLINE       0     0     0
            disk/by-id/scsi-SATA_ST3000DM001-9YN_Z1F0BML0    ONLINE       0     0     0
            disk/by-id/scsi-SATA_ST3000DM001-9YN_Z1F0BPV4    ONLINE       0     0     0
            disk/by-id/scsi-SATA_ST3000DM001-9YN_Z1F0BPZP    ONLINE       0     0     0
            disk/by-id/scsi-SATA_ST3000DM001-9YN_Z1F0BQ78    ONLINE       0     0     0
          raidz1-6                                           ONLINE       0     0     0
            disk/by-id/scsi-SATA_ST3000DM001-9YN_Z1F0BQ9G    ONLINE       0     0     0
            disk/by-id/scsi-SATA_ST3000DM001-9YN_Z1F0BQDF    ONLINE       0     0     0
            disk/by-id/scsi-SATA_ST3000DM001-9YN_Z1F0BQFQ    ONLINE       0     0     0
            disk/by-id/scsi-SATA_ST3000DM001-9YN_Z1F0CW1A    ONLINE       0     0     0
            disk/by-id/scsi-SATA_ST3000DM001-9YN_Z1F0BV7M    ONLINE       0     0     0
        spares
          disk/by-id/scsi-SATA_ST3000DM001-1CH_W1F49M01      INUSE     currently in use
          disk/by-id/scsi-SATA_ST3000DM001-1CH_W1F4A7ZE      INUSE     currently in use
          disk/by-id/scsi-SATA_ST3000DM001-1CH_W1F49MB1      AVAIL
          disk/by-id/scsi-SATA_ST3000DM001-1ER_Z5001SS2      AVAIL
          disk/by-id/scsi-SATA_ST3000DM001-1ER_Z5001R0F      AVAIL

errors: 37062187 data errors, use '-v' for a list
When the first disk failed, I replaced it with a hot spare and it began to resilver. Before the resilver completed, a second disk failed, so I replaced that one with another hot spare. Since then, every resilver attempt gets to about 50% done and then starts consuming memory until it exhausts all of it and crashes the OS.
Upgrading the RAM on the server isn't a straightforward option at this point, and it's unclear to me that doing so would guarantee a solution. I understand that there will be data loss at this stage, but if I can sacrifice the contents of this one RAIDZ to preserve the rest of the pool that is a perfectly acceptable outcome. I am in the process of backing up the contents of this server to another server, but the memory consumption issue forces a reboot (or crash) every 48 hours or so, which interrupts my rsync backup, and restarting the rsync takes time (it can resume once it figures out where it left off, but that takes a very long time).
I think ZFS attempting to handle two spare-replacement operations at once is at the root of the memory consumption issue, so I want to remove one of the hot spares and let ZFS work on one at a time. However, when I try to detach one of the spares, I get "cannot detach /dev/disk/by-id/scsi-SATA_ST3000DM001-1CH_W1F49M01: no valid replicas". Perhaps I could use the -f option to force the operation, but it's not clear to me exactly what the result of that would be, so I wanted to see if anyone had any input before going forward.
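For reference, the failing attempt looks like this (reconstructed from the error message above, using the standard zpool detach syntax with the pool name from the status output):

    # Try to detach the first in-use spare from pool pod2
    zpool detach pod2 /dev/disk/by-id/scsi-SATA_ST3000DM001-1CH_W1F49M01
    cannot detach /dev/disk/by-id/scsi-SATA_ST3000DM001-1CH_W1F49M01: no valid replicas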
If I can get the system into a stable state where it can remain operational long enough for the backup to complete, I plan to take it down for an overhaul, but under the current conditions it's stuck in a bit of a recovery loop.
Solution 1:
Right now you can detach the UNAVAIL disks; ZFS is not using them anymore anyway (see the sketch below).
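A minimal sketch of what that would look like, assuming the pool is still imported and using the two UNAVAIL device names from your zpool status output (zpool detach takes the pool name followed by the device):

    # Detach the failed originals from their spare-N vdevs; each hot spare
    # then becomes a permanent member of raidz1-4.
    zpool detach pod2 disk/by-id/scsi-SATA_ST3000DM001-9YN_Z1F09BEN
    zpool detach pod2 disk/by-id/scsi-SATA_ST3000DM001-9YN_Z1F09C8G

Afterwards, zpool status should list the two replacement disks as ordinary members of raidz1-4 instead of as spare-1/spare-4 children.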
You've got two failed disks in the same RAIDZ-1 vdev, and RAIDZ-1 only tolerates a single disk failure per vdev. It's very likely you are looking at some data loss, and you should be ready to restore from backup.
As a side note, RAIDZ has proven to be very flaky in my experience with OpenSolaris/Solaris 11. I would advise against using it in ANY kind of production workload.
Also, to reinforce what ewwhite said, FUSE is not your best option. I'd take this opportunity to migrate to something more stable (perhaps FreeBSD 10).