ZFS Pool Data Backup and Restore

I currently have a ZFS raidz2 pool stuck in a resilvering loop while replacing its 3TB disks with 8TB disks. After letting the first replacement disk resilver online for over a week, it finally finished only to immediately start again. After marking the disk OFFLINE, the second resilver completed in about 2 days. I marked the disk online and everything looked good (for a couple of minutes), so I replaced the second disk. Once the resilver started for the second disk, it showed that the first disk was also resilvering again. I'm now on my 3rd or 4th cycle of resilvering for these two drives, and with two disks resilvering I have no fault tolerance.

At this point I would like to back up the zpool to an NFS share and recreate it with the new drives, but I don't want to lose my dataset configuration, which includes all of my jails. Is there a way to export the whole zpool as a backup image that can somehow be restored? The backup machine has sufficient disk space, but its storage is already formatted with a different filesystem (it runs Ubuntu 21.04 with LVM/Ext4), so ZFS replication to a remote pool is probably not an option. This is a TrueNAS-12.0-U4 installation. Below is the current pool status.


  pool: pool0
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
    continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Thu Jul 29 00:39:12 2021
    13.8T scanned at 273M/s, 13.0T issued at 256M/s, 13.8T total
    2.17G resilvered, 93.77% done, 00:58:48 to go
config:

    NAME                                            STATE     READ WRITE CKSUM
    pool0                                           DEGRADED     0     0     0
      raidz2-0                                      DEGRADED     0     0     0
        gptid/55bf3ad6-3747-11eb-a0da-3cecef030ab8  ONLINE       0     0     0
        gptid/55c837e3-3747-11eb-a0da-3cecef030ab8  ONLINE       0     0     0
        gptid/55f4786c-3747-11eb-a0da-3cecef030ab8  ONLINE       0     0     0
        gptid/60dcf0b8-eef3-11eb-92f9-3cecef030ab8  OFFLINE      0     0     0  (resilvering)
        gptid/56702d96-3747-11eb-a0da-3cecef030ab8  ONLINE       0     0     0
        gptid/5685b5f7-3747-11eb-a0da-3cecef030ab8  ONLINE       0     0     0
        gptid/8f041954-eef3-11eb-92f9-3cecef030ab8  OFFLINE      0     0     0  (resilvering)
        gptid/56920c3a-3747-11eb-a0da-3cecef030ab8  ONLINE       0     0     0
    cache
      gptid/56256b6a-3747-11eb-a0da-3cecef030ab8    ONLINE       0     0     0

errors: No known data errors

You can use zfs snapshot -r pool0@backup followed by zfs send -R pool0@backup > zfs.img to create a replicated send stream, which you can later restore with zfs recv. The -R flag makes the stream include all descendant datasets, snapshots, and dataset properties, so your jail configuration survives the round trip.
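Put together, the file-based backup and restore looks roughly like this; a sketch only, assuming the NFS share is mounted at /mnt/backup on the TrueNAS box (that mount point and the pool0.zfs filename are hypothetical):

```shell
# 1. Take a recursive snapshot of every dataset in the pool.
zfs snapshot -r pool0@backup

# 2. Stream the whole pool into a single file on the NFS share.
#    -R produces a replication stream: descendant datasets, snapshots,
#    and properties are all included.
zfs send -R pool0@backup > /mnt/backup/pool0.zfs

# 3. After destroying and recreating the pool on the 8TB disks,
#    restore the stream. -F rolls the (empty) target back so the
#    receive can proceed.
zfs recv -F pool0 < /mnt/backup/pool0.zfs
```

Note that a send stream written to a file is not verified until it is received, so keep the source pool intact until the restore has completed successfully.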

That said, it seems similar to the issue described here. You can also try disabling deferred resilvering via the zfs_resilver_disable_defer tunable.
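On a FreeBSD-based TrueNAS 12 system, OpenZFS module tunables are exposed under the vfs.zfs sysctl tree; a minimal sketch, assuming the sysctl name follows the usual mapping of the zfs_resilver_disable_defer tunable:

```shell
# Disable deferred resilvering so newly detected work does not keep
# restarting the resilver (sysctl name assumed from the OpenZFS
# tunable zfs_resilver_disable_defer).
sysctl vfs.zfs.resilver_disable_defer=1

# Check the current value:
sysctl vfs.zfs.resilver_disable_defer
```

To make this persist across reboots on TrueNAS, add it as a System -> Tunables entry of type "sysctl" in the web UI rather than editing /etc/sysctl.conf, which TrueNAS manages itself.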