ZFS RAID0 pool without redundancy
You are likely seeing a situation where at least one of the disks you used has become unavailable. This might be intermittent and resolvable; both Linux implementations (ZFS on Linux as well as zfs-fuse) seem to exhibit occasional hiccups which are easily cured by a zpool clear or a zpool export / zpool import cycle.
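For reference, such a recovery attempt is just a couple of commands (a minimal sketch, assuming your pool is named mypool as in the example below):

zpool clear mypool     # reset error counters and retry devices marked as faulted
zpool export mypool    # if clearing is not enough: cleanly detach the pool...
zpool import mypool    # ...and re-import it, re-scanning for its devices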
As for your question: yes, ZFS is perfectly capable of creating and maintaining a pool without any redundancy, simply by issuing something like zpool create mypool sdb sdc sdd.
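If you want deduplication on such a pool, it is just a property you set afterwards; a minimal sketch, reusing the mypool name from above:

zfs set dedup=on mypool    # enable inline dedup on the pool's root dataset (inherited by child datasets)
zpool list mypool          # the DEDUP column shows the achieved dedup ratio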
But personally, I would not use ZFS just for its deduplication capabilities. Due to its architecture, ZFS deduplication requires a large amount of RAM and plenty of disk I/O for write operations. You will probably find it unsuitable for a pool as large as yours, as writes will become painfully slow. If you need deduplication, you might want to look at offline dedup implementations with a smaller memory and I/O footprint, like btrfs file-level batch deduplication using bedup or block-level deduplication using duperemove: https://btrfs.wiki.kernel.org/index.php/Deduplication
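If you want numbers before deciding, zdb can simulate dedup against an already-filled pool, and a btrfs duperemove run looks roughly like this (sketch only; the pool and mount point names are placeholders):

zdb -S mypool              # simulate dedup and print the projected dedup table (DDT) histogram and ratio
duperemove -dr /mnt/data   # btrfs: offline, block-level dedup of files already on disk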
This is a duplicate of: Why did rebooting cause one side of my ZFS mirror to become UNAVAIL?
In your case, the device names or symbolic links in the /dev/disk/by-* directories on your system were either not present or were renamed.
It's best to use /dev/disk/by-id devices for your zpool instead of by-path, as the path names can change. (grrrr... Ubuntu udev)
In /dev/disk ...

by-id/  by-path/  by-uuid/
So my zpools look like the following (note how the devices aren't sda, sdb, etc.):
[root@BigHomie ~]# zpool status -v
  pool: vol0
 state: ONLINE
  scan: scrub repaired 0 in 0h1m with 0 errors on Sat May 24 17:14:09 2014
config:

        NAME                                            STATE     READ WRITE CKSUM
        vol0                                            ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            scsi-SATA_OWC_Mercury_AccOW140403AS1321905  ONLINE       0     0     0
            scsi-SATA_OWC_Mercury_AccOW140403AS1321932  ONLINE       0     0     0
          mirror-1                                      ONLINE       0     0     0
            scsi-SATA_OWC_Mercury_AccOW140403AS1321926  ONLINE       0     0     0
            scsi-SATA_OWC_Mercury_AccOW140403AS1321922  ONLINE       0     0     0
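If an existing pool was created with sdX or by-path names, you can usually switch it to by-id names without rebuilding it, by exporting and re-importing with an explicit device directory (shown here for the vol0 pool above; substitute your own pool name):

zpool export vol0
zpool import -d /dev/disk/by-id vol0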