ZFS shows pool state FAULTED, but all devices are ONLINE; how can I recover my data?
Our 100 TB NAS based on FreeNAS 8 was unexpectedly shut down by a power failure. After powering it back on, the 100 TB zpool "projects" would not mount and showed state "FAULTED".
I tried zpool import -fFX; it ran for about 20 hours, but nothing happened. I had to reboot the server with the reset button, because kill -9 and the reboot command did not work.
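For reference, the commands in question look roughly like this; the read-only and dry-run variants are standard, less destructive zpool options, shown here only as a sketch of what can be attempted before resorting to -fFX (they require a reasonably recent FreeBSD/FreeNAS 8.x zpool):

    # Read-only import under an alternate root; does not rewrite any pool metadata.
    zpool import -o readonly=on -f -R /mnt projects

    # Dry run of the transaction rewind: reports whether -F recovery could work
    # without actually modifying the pool.
    zpool import -F -n projects

    # The aggressive extreme-rewind import that was left running for ~20 hours.
    zpool import -fFX projects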
Some outputs:
[root@Projects_new] ~# zpool import
   pool: projects
     id: 8560768094429092391
  state: FAULTED
 status: The pool metadata is corrupted.
 action: The pool cannot be imported due to damaged devices or data.
         The pool may be active on another system, but can be imported using
         the '-f' flag.
    see: http://www.sun.com/msg/ZFS-8000-72
 config:

        projects                                       FAULTED  corrupted data
          gptid/49d49544-5a47-11e2-b516-00259095142c   ONLINE   ok
          gptid/49f3c886-5a47-11e2-b516-00259095142c   ONLINE   ok
          gptid/4a1052aa-5a47-11e2-b516-00259095142c   ONLINE   ok
          gptid/4a32bf15-5a47-11e2-b516-00259095142c   ONLINE   ok
          gptid/4a9b51d3-5a47-11e2-b516-00259095142c   ONLINE   ok
          gptid/4b2ee18b-5a47-11e2-b516-00259095142c   ONLINE   ok
I also found an undocumented option, zpool import -V projects. After that, the pool was imported but still inaccessible:
[root@Projects_new] ~/zpool_restore# zpool status
  pool: projects
 state: FAULTED
status: The pool metadata is corrupted and the pool cannot be opened.
action: Destroy and re-create the pool from
        a backup source.
   see: http://www.sun.com/msg/ZFS-8000-72
  scan: none requested
config:

        NAME                                           STATE    READ WRITE CKSUM
        projects                                       FAULTED     0     0     1
          gptid/49d49544-5a47-11e2-b516-00259095142c   ONLINE      0     0     0
          gptid/49f3c886-5a47-11e2-b516-00259095142c   ONLINE      0     0     2
          gptid/4a1052aa-5a47-11e2-b516-00259095142c   ONLINE      0     0     2
          gptid/4a32bf15-5a47-11e2-b516-00259095142c   ONLINE      0     0     2
          gptid/4a9b51d3-5a47-11e2-b516-00259095142c   ONLINE      0     0     0
          gptid/4b2ee18b-5a47-11e2-b516-00259095142c   ONLINE      0     0     0
In this state, zpool clear -f projects just returns "I/O error".
The /dev/gptid/4* devices are hardware RAID0 volumes: four on four Adaptec controllers and two on one LSI controller.
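Read-only diagnostics that can still be gathered in this state (a sketch; these zdb commands only read from the devices, but may well hit the same metadata errors):

    # Dump the four ZFS label copies from one member device (path taken from
    # the zpool output above); comparing them shows how consistent the labels are.
    zdb -l /dev/gptid/49d49544-5a47-11e2-b516-00259095142c

    # Try to walk the pool's datasets without importing it.
    zdb -e -d projects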
Is there any way to import and repair the pool and save the data?
Solution 1:
        NAME                                           STATE    READ WRITE CKSUM
        projects                                       FAULTED     0     0     1
          gptid/49d49544-5a47-11e2-b516-00259095142c   ONLINE      0     0     0
          gptid/49f3c886-5a47-11e2-b516-00259095142c   ONLINE      0     0     2
          gptid/4a1052aa-5a47-11e2-b516-00259095142c   ONLINE      0     0     2
          gptid/4a32bf15-5a47-11e2-b516-00259095142c   ONLINE      0     0     2
          gptid/4a9b51d3-5a47-11e2-b516-00259095142c   ONLINE      0     0     0
          gptid/4b2ee18b-5a47-11e2-b516-00259095142c   ONLINE      0     0     0
The /dev/gptid/4* devices are hardware RAID0 volumes: four on four Adaptec controllers and two on one LSI controller.
So let me just start out by getting something straight. You have a ZFS pool which consists of six devices (as seen by ZFS), striped with no redundancy. Each of those six devices is itself a stripe of some unknown number of physical disks, again with no redundancy. A conservative estimate says you have somewhere on the order of 20-25 spinners, quite possibly more, all of which have to work perfectly for your setup to be stable. Remember that physical disk failures are at best uncorrelated, and in practice tend to happen in batches in shared environments (if one disk fails, it's likely that one or more others are marginal and may fail simply under the stress of a resilver). Even in the best case of fully independent failures, with 25 disks the probability that at least one of them fails, and therefore that the entire stripe is lost, is roughly 25 times that of a single disk (exactly 1 - (1 - p)^25 for a per-disk failure probability p).
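To put a rough number on that, here is a back-of-the-envelope check; the 1% annual per-disk failure rate and the 25-disk count are illustrative assumptions, nothing more:

    # With no redundancy anywhere, any single disk failure loses the whole pool.
    awk 'BEGIN { p = 0.01; n = 25;
                 printf "P(at least one of %d disks fails per year) = %.1f%%\n", n, (1 - (1 - p)^n) * 100 }'
    # Prints about 22.2%, close to the naive n*p = 25% estimate.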
Now some of those drives (or possibly controllers) apparently have developed some sort of problem, which has trickled through and is being reported by ZFS.
At that point, my question is more or less "what do you expect ZFS to do?". And unfortunately, I think the answer both to that as well as your question is that no, there really isn't a whole lot to be done at this point.
ZFS isn't magic. It is highly resilient to many different types of failures, but once it breaks, it tends to do so in spectacular ways. You can reduce the risk of breakage by using its redundancy features, which for whatever reason you have opted not to do. Its complex on-disk format also makes recovery a lot more complicated than it is for e.g. UFS, NTFS or ext3/4.
If zpool import -fFX doesn't get your pool back to a usable state, then your best bet may well be to recreate the pool in a sane manner and restore the most recent backup. That means adding some redundancy, so that even if a whole controller or power supply fails, the pool as a whole does not. It also means configuring your controllers to expose the raw disks to ZFS (JBOD / pass-through) and using ZFS's own redundancy support; that lets ZFS decide where to place data and how to arrange redundancy to reduce the risk of failure. (For example, metadata can be stored redundantly by copying it to multiple independent vdevs.)
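As a sketch of what "a sane manner" could look like (device names are placeholders, not your actual gptids, and the exact vdev layout is a design choice rather than a prescription):

    # Controllers in JBOD / pass-through mode, one raw disk per device node.
    # Two raidz2 vdevs, each spread across controllers, so that losing one disk,
    # or even one whole controller, does not take the pool down.
    zpool create projects \
        raidz2 da0 da1 da2 da3 da4  da5 \
        raidz2 da6 da7 da8 da9 da10 da11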