Recreating an XFS file system with `ftype=1`

I have a CentOS 7 system whose root file system is XFS, created with ftype=0 (the CentOS default at the time the system was installed). Unfortunately, Docker's overlay2 storage driver requires the backing XFS file system to have been created with ftype=1:

https://docs.docker.com/storage/storagedriver/overlayfs-driver/#prerequisites
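You can check which setting an existing file system has: xfs_info reports it on the "naming" line. A minimal helper (an illustrative sketch, assuming xfsprogs is installed and the argument is an XFS mount point):

```shell
# Print just the ftype flag from xfs_info's output; the "naming" line
# looks like:  naming   =version 2    bsize=4096   ascii-ci=0 ftype=0
check_ftype() {
    xfs_info "$1" | grep -o 'ftype=[01]'
}

# e.g. check_ftype /   prints "ftype=0" on the system described above
```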

So now I'd like to recreate the root FS with ftype=1. I was thinking of doing that as follows:

  1. Boot into a rescue image of some sort.
  2. xfsdump the root FS to a remote location.
  3. Recreate the root FS with ftype=1.
  4. xfsrestore the root FS from the remote dump.

One thing I'm not sure about, though, is whether the xfsdump output carries anything related to the ftype setting. That is, would there be any issues doing the xfsrestore onto an XFS file system with a different ftype setting?

Or is there a better approach to solving this specific problem (that doesn't involve reinstalling the whole system, repartitioning, etc.)?


My proposed method worked fine. Here's the procedure I used:

  1. Boot into CentOS-7-x86_64-LiveGNOME-1804.iso.
  2. Open a terminal and sudo -s.
  3. Scan for LVM volume groups: vgscan
  4. Activate the appropriate volume group (centos in my case): vgchange -ay centos
  5. Scan for the logical volumes in that group: lvscan
  6. Create a mount point for the root FS: mkdir /mnt/root
  7. Mount the logical volume corresponding to the root FS: mount /dev/centos/root /mnt/root
  8. Dump to remote host: xfsdump -J - /mnt/root | ssh <host> 'cat >/data/rootfs.dump'
  9. Unmount the root FS: umount /mnt/root
  10. Recreate the root FS: mkfs.xfs -f -n ftype=1 /dev/centos/root
  11. Mount the recreated root FS: mount /dev/centos/root /mnt/root
  12. Restore from remote host: ssh <host> 'cat /data/rootfs.dump' | xfsrestore -J - /mnt/root
  13. Reboot. Everything should be as it was before, except xfs_info / should now show ftype=1.
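For reference, steps 6–12 can be sketched as a single shell function. The commands mirror the list above; the device, mount point, host, and dump path in the usage comment reflect my layout and are placeholders for yours. Double-check the device name before running this, since mkfs.xfs destroys the existing file system:

```shell
# Sketch of steps 6-12: dump the root LV to a remote host, recreate it
# with ftype=1, and restore. Run as root from a live/rescue environment.
recreate_root() {
    dev=$1 mnt=$2 host=$3 dump=$4
    mkdir -p "$mnt"
    mount "$dev" "$mnt"
    xfsdump -J - "$mnt" | ssh "$host" "cat >$dump"     # dump to remote host
    umount "$mnt"
    mkfs.xfs -f -n ftype=1 "$dev"                      # DESTROYS the old FS
    mount "$dev" "$mnt"
    ssh "$host" "cat $dump" | xfsrestore -J - "$mnt"   # restore from dump
}

# My layout: recreate_root /dev/centos/root /mnt/root <host> /data/rootfs.dump
```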

Note: My xfsdump call resulted in a number of warnings of the form

xfsdump: WARNING: failed to get bulkstat information for inode 10485897

According to someone who appears to be an XFS developer (http://xfs.9218.n7.nabble.com/xfs-and-lvm-snapshots-tp1241p1246.html):

They can be ignored - they are inodes that were previously unlinked, but are still partially there on the snapshot volume, and visible to the by-handle interfaces that xfsdump is using to extract all of the inodes in the snapshot.