ZFS on a single partition mixed with other partitions

I work for a medical device supplier. On a single machine we have a few partitions: two read-only SquashFS partitions for OS files, three FAT32 boot partitions, and a 120 GB read-write ext4 DATA partition.

The thing is, because the device is used in hospitals, people sometimes pull the plug on it, and on some occasions we get random issues that appear to be linked to data corruption on the DATA partition. The medical software logs a lot, so my guess is that a write is sometimes in flight when the machine is hard-shut-down, and this somehow corrupts adjacent files.

Anyway, I thought that switching the data partition from ext4 to a copy-on-write filesystem like ZFS might help. One question, though: does ZFS require the whole disk to be ZFS (the zpool thing), or can a ZFS partition coexist on the same disk with partitions using other filesystems?

Thanks!


Solution 1:

You don't have to format a whole disk as ZFS. A zpool can be built from any combination of whole disks and partitions.

If a ZFS member shares a disk with other partitions, keep in mind that I/O performance is shared as well.

In the simplest configuration, a zpool consists of a single vdev that is just one partition or device. I have a computer formatted like this:

root@craptop [~]# zpool status -P
  pool: rpool
 state: ONLINE
  scan: scrub repaired 0B in 00:00:30 with 0 errors on Sun Nov 14 00:24:31 2021
config:

        NAME                                                                 STATE     READ WRITE CKSUM
        rpool                                                                ONLINE       0     0     0
          /dev/disk/by-id/ata-LITEONIT_LSS-24L6G_S45N8470Z1ZNDW089292-part4  ONLINE       0     0     0

errors: No known data errors

The ZFS member is a partition under /dev/sda:

root@craptop [~]# blkid /dev/sda4
/dev/sda4: LABEL="rpool" UUID="3735190874680832032" UUID_SUB="15024274719792138025" TYPE="zfs_member" PARTUUID="a9a5ae01-90cd-4945-a9dd-fbccbfbfc075"
root@craptop [~]# lsblk -p
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
/dev/sda      8:0    0  22.4G  0 disk 
├─/dev/sda1   8:1    0  1000K  0 part 
├─/dev/sda2   8:2    0   512M  0 part /boot/efi
├─/dev/sda3   8:3    0     1G  0 part /boot
└─/dev/sda4   8:4    0  20.9G  0 part 
/dev/sdb      8:16   0 931.5G  0 disk 
├─/dev/sdb1   8:17   0  1000M  0 part 
├─/dev/sdb2   8:18   0   260M  0 part 
├─/dev/sdb3   8:19   0  1000M  0 part 
├─/dev/sdb4   8:20   0   128M  0 part 
├─/dev/sdb5   8:21   0   884G  0 part 
├─/dev/sdb6   8:22   0  25.2G  0 part 
└─/dev/sdb7   8:23   0    20G  0 part 
/dev/sdc      8:32   1  58.6G  0 disk 
└─/dev/sdc1   8:33   1    58G  0 part
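To illustrate, a single-partition pool like the one above can be created from an existing partition. This is a sketch, not my exact setup: the device path and pool name are placeholders, and `ashift=12` is a common choice for 4K-sector drives, not a requirement.

```shell
# Create a pool named "data" backed by one existing partition.
# The /dev/disk/by-id/ path here is hypothetical; substitute your own.
# Using by-id paths keeps device naming stable across reboots.
zpool create -o ashift=12 data /dev/disk/by-id/ata-EXAMPLE-part5

# The pool's root dataset mounts at /data by default; verify:
zpool status data
zfs list data
```

Nothing else on the disk is touched; the other partitions keep their filesystems.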

Without a redundant or parity vdev (mirror, raidz, raidz2, draid, etc.), ZFS can detect silent data corruption but cannot correct it, because the only copy of the data is bad.

You should consider creating a zpool with one or more redundant vdevs.
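For example, a mirror across partitions on two different disks can be created in one command. Again a sketch with hypothetical device paths:

```shell
# RAID 1 equivalent: both partitions hold a full copy of the data,
# so ZFS can repair corruption detected on either member.
zpool create data mirror \
    /dev/disk/by-id/ata-DISK-A-part5 \
    /dev/disk/by-id/ata-DISK-B-part5
```

If the device only has one disk and a mirror is not an option, `zfs set copies=2 data` stores two copies of each block, which lets ZFS self-heal some corruption at the cost of doubled space for that dataset's data; it does not protect against whole-disk failure.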

Here is another one of my computers with RAID 1 equivalent vdevs known as mirrors:

root@box1 [~]# zpool status -P
  pool: fastpool
 state: ONLINE
  scan: scrub repaired 0B in 00:04:39 with 0 errors on Sun Nov 14 00:28:40 2021
config:

        NAME                STATE     READ WRITE CKSUM
        fastpool            ONLINE       0     0     0
          mirror-0          ONLINE       0     0     0
            /dev/nvme0n1p3  ONLINE       0     0     0
            /dev/nvme1n1p3  ONLINE       0     0     0

errors: No known data errors

  pool: slowpool
 state: ONLINE
  scan: scrub repaired 0B in 05:45:50 with 0 errors on Sun Nov 14 06:09:52 2021
config:

        NAME              STATE     READ WRITE CKSUM
        slowpool          ONLINE       0     0     0
          mirror-0        ONLINE       0     0     0
            /dev/sda1     ONLINE       0     0     0
            /dev/sdb1     ONLINE       0     0     0
        logs
          /dev/nvme0n1p5  ONLINE       0     0     0
          /dev/nvme1n1p5  ONLINE       0     0     0
        cache
          /dev/nvme0n1p4  ONLINE       0     0     0
          /dev/nvme1n1p4  ONLINE       0     0     0

errors: No known data errors
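The "scrub repaired" lines in the outputs above come from periodic scrubs, which read every block and verify checksums (repairing from redundancy where available). You can trigger one manually:

```shell
# Start a scrub of the pool and then check its progress/results.
zpool scrub fastpool
zpool status fastpool
```

On most distributions the ZFS packages install a timer or cron job that scrubs pools on a regular schedule.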

Getting Started

  • Setup a ZFS storage pool by Aden Padilla
  • man 7 zpoolconcepts

Additional Reading

  • ZFS 101—Understanding ZFS storage and performance by Jim Salter
  • ZFS: You should use mirror vdevs, not RAIDZ. by Jim Salter