mount: wrong fs type, bad option, bad superblock on /dev/xvdf1, missing codepage or helper program, or other error
If the instances were launched from the same AMI, their root volumes will have been created from the same EBS snapshot, so the problem is likely duplicate XFS UUIDs. The error message from mount
isn't very helpful, but you may see errors like this in /var/log/messages
or equivalent:
Jan 13 23:30:29 ip-172-31-15-234 kernel: XFS (nvme1n1): Filesystem has duplicate UUID 56282b3b-c1f3-425e-90db-e9e26def629d - can't mount
(This example is from a t3 instance using NVMe storage, but it's not NVMe-specific.)
Every XFS filesystem has a (supposedly) unique ID stored on-disk, which protects you from accidentally mounting the same filesystem multiple times. Because the EBS snapshot/restore process is a block-level copy, any volume you create from a snapshot will have the same UUID as the source volume, so you can only mount one of them at a time.
You can view the UUID for a volume by attaching it but not mounting it, then running xfs_db to examine the attached disk:
# xfs_db -c uuid /dev/nvme1n1
UUID = 56282b3b-c1f3-425e-90db-e9e26def629d
(EDIT: The blkid command will also show you the UUID, even if the device is mounted.)
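Since blkid reports UUIDs even for mounted devices, you can scan for duplicates across all attached volumes before anything fails to mount. A minimal sketch: the heredoc below stands in for hypothetical `blkid -s UUID` output (the device names and the first UUID are made up; the duplicate UUID is the one from the log message above) — on a real system you would feed blkid's actual output to the awk one-liner instead.

```shell
# Hypothetical sample of `blkid -s UUID` output; replace the heredoc
# with real blkid output on an actual system.
cat <<'EOF' > /tmp/blkid-sample.txt
/dev/nvme0n1p1: UUID="0f3e2b1a-1111-2222-3333-444455556666"
/dev/nvme1n1: UUID="56282b3b-c1f3-425e-90db-e9e26def629d"
/dev/nvme2n1: UUID="56282b3b-c1f3-425e-90db-e9e26def629d"
EOF

# Split on double quotes: $1 holds the device name (after trimming
# ": UUID="), $2 holds the UUID. Report any UUID seen more than once.
awk -F'"' '{sub(/:.*/, "", $1); seen[$2] = seen[$2] ? seen[$2] " " $1 : $1; n[$2]++}
           END {for (u in n) if (n[u] > 1) print "Duplicate UUID " u ": " seen[u]}' /tmp/blkid-sample.txt
# → Duplicate UUID 56282b3b-c1f3-425e-90db-e9e26def629d: /dev/nvme1n1 /dev/nvme2n1
```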
To work around the issue, you can either use the XFS-specific nouuid
mount option to temporarily ignore the duplicate check, e.g.
# mount -t xfs -o nouuid /dev/nvme1n1 /mnt
or you can use xfs_admin to permanently change the UUID on the volume:
# xfs_admin -U generate /dev/nvme1n1
Clearing log and setting UUID
writing all SBs
new UUID = 1eb81512-3f22-4b79-9a35-f22f29745c60
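Note that `-U generate` picks a random UUID for you. If you want to know the value up front (for example, to write an fstab entry before mounting), you can generate one yourself and pass it to xfs_admin explicitly in place of the `generate` keyword. A sketch, using the Linux kernel's random-UUID interface (uuidgen from util-linux works too); the device name is the example one from above, and the xfs_admin step is commented out because it needs a real, unmounted device:

```shell
# Generate a random UUID (Linux-specific; `uuidgen` is an alternative).
NEW_UUID=$(cat /proc/sys/kernel/random/uuid)
echo "$NEW_UUID"

# Apply it explicitly instead of letting xfs_admin pick one
# (device must be unmounted):
# xfs_admin -U "$NEW_UUID" /dev/nvme1n1
```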
This worked for me on an Amazon Linux 2 image:
mount -t xfs -o nouuid /dev/xvdb /data
xfs_admin -U generate /dev/xvdb
Regenerating the UUID and then mounting through /etc/fstab did the trick.
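The fstab step might look like this — a sketch only: the UUID is the regenerated one from the xfs_admin example above, and the /data mount point is an assumption. The nofail option stops boot from hanging if the EBS volume happens not to be attached:

```
# /etc/fstab — mount the volume by its regenerated UUID
UUID=1eb81512-3f22-4b79-9a35-f22f29745c60  /data  xfs  defaults,nofail  0  2
```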
I used the -f (force) option with mkfs to reformat the partition (note: this destroys any data on the volume):
mkfs -t xfs -f /dev/xvdg1
Then re-ran the mount command:
mount /dev/xvdg1 /home/ec2-user/xvdg1/
Output of df -h afterwards:
Filesystem Size Used Avail Use% Mounted on
devtmpfs 483M 84K 483M 1% /dev
tmpfs 493M 0 493M 0% /dev/shm
/dev/xvda1 7.8G 1.1G 6.6G 14% /
/dev/xvde1 8.0G 1.3G 6.8G 16% /home/ec2-user/xvde1
/dev/xvdf1 8.0G 33M 8.0G 1% /home/ec2-user/xvdf1
/dev/xvdg1 8.0G 33M 8.0G 1% /home/ec2-user/xvdg1