zpool: pool I/O is currently suspended

Solution 1:

If executing sudo zpool clear WD_1TB doesn't work, try:

$ sudo zpool clear -nFX WD_1TB

where these undocumented parameters mean:

  • -F: (undocumented for clear, the same as for import) Rewind. Recovery mode for a non-importable pool. Attempt to return the pool to an importable state by discarding the last few transactions. Not all damaged pools can be recovered by using this option. If successful, the data from the discarded transactions is irretrievably lost. This option is ignored if the pool is importable or already imported.
  • -n: (undocumented for clear, the same as for import) Used with the -F recovery option. Determines whether a non-importable pool can be made importable again, but does not actually perform the pool recovery. For more details about pool recovery mode, see the -F option above.
  • -X (undocumented): Extreme rewind. With -X, an extremely lengthy operation may be attempted that never finishes; in some cases, a reboot was necessary to terminate the process.
  • -V (undocumented): Found by UTSLing (reading the source); when used with import, it makes the pool get imported again, but still without an attempt at resilvering.
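A cautious sequence (a sketch only; WD_1TB is the pool name from above, and the behavior of these undocumented flags varies between ZFS versions) is to dry-run the rewind before committing to it:

```shell
# Dry run: -n combined with -F only reports whether a rewind could
# recover the pool; no on-disk state is changed.
sudo zpool clear -nF WD_1TB

# If the dry run succeeds, perform the actual rewind. Note that this
# discards the last few transactions irretrievably.
sudo zpool clear -F WD_1TB

# Only escalate to -X (extreme rewind) as a last resort; it can run
# for a very long time and may require a reboot to interrupt.
sudo zpool clear -FX WD_1TB
```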

Source: ZFS faulted pool problem and man zpool.

Then try to re-import the pool again:

$ sudo zpool import WD_1TB

If that doesn't help, try the following commands to remove the invalid zpool:

$ zpool list -v
$ sudo zfs unmount WD_1TB
$ sudo zpool destroy -f WD_1TB
$ sudo zpool detach WD_1TB disk1s2
$ sudo zpool remove WD_1TB disk1s2
$ sudo zpool remove WD_1TB /dev/disk1s2
$ sudo zpool set cachefile=/etc/zfs/zpool.cache WD_1TB
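After each of the commands above, it may help to check what ZFS still believes about the pool; a minimal check, assuming the pool name WD_1TB:

```shell
# Show the pool's health and remaining vdevs; this exits non-zero
# if the pool no longer exists.
zpool status -v WD_1TB

# List all pools still known to ZFS; WD_1TB should be gone after
# a successful destroy.
zpool list
```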

Finally, if nothing helps, optionally remove the file /etc/zfs/zpool.cache and restart your computer.
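This last-resort step can be sketched as follows (assuming the default cache file location; renaming instead of deleting is an extra precaution, not part of the original instructions):

```shell
# Back up the cache file rather than deleting it outright, so the
# previous state can be restored if needed.
sudo mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.bak

# Reboot; ZFS rebuilds the cache from the pools it imports.
sudo reboot
```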


Related:

  • zfs-osx/zfs on GitHub: zpool: pool I/O is currently suspended
  • zfsonlinux/zfs on GitHub: Removing cache device fails
  • How to get rid of phantom pool?
  • zfs export and import between different controllers
  • How do I generate the /etc/zfs/zpool.cache file
  • Princeton University: ZFS Troubleshoot