How to change the file system of a partition in a RAID 1?
First, sorry if this question has already been asked and answered; I did not find anything that satisfied me.
I rent a dedicated machine in a datacenter. The machine runs Debian 10 and has two drives in RAID 1, with three partitions: one for boot, one for swap, and one for everything else.
The third (/dev/md2) uses the ext4 file system and I would like to use XFS instead.
I am not used to changing filesystems, and this is the first time I have had a machine with RAID, so I do not know how to do it.
This is a new installation so there is no risk of losing data.
I tried mkfs.xfs /dev/md2 but it didn't work:
root@Debian-105-buster-64-minimal ~ # mkfs.xfs /dev/md2
mkfs.xfs: /dev/md2 contains a mounted filesystem
And I don't know how it should be unmounted and remounted, given the RAID.
Thank you in advance for the help.
The df -Th command:
root@Debian-105-buster-64-minimal ~ # df -Th
Filesystem Type Size Used Avail Use% Mounted on
udev devtmpfs 32G 0 32G 0% /dev
tmpfs tmpfs 6.3G 516K 6.3G 1% /run
/dev/md2 ext4 437G 1.2G 413G 1% /
tmpfs tmpfs 32G 0 32G 0% /dev/shm
tmpfs tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs tmpfs 32G 0 32G 0% /sys/fs/cgroup
/dev/md1 ext3 487M 53M 409M 12% /boot
tmpfs tmpfs 6.3G 0 6.3G 0% /run/user/1000
The fdisk -l command:
root@Debian-105-buster-64-minimal ~ # fdisk -l
Disk /dev/nvme0n1: 477 GiB, 512110190592 bytes, 1000215216 sectors
Disk model: SAMSUNG MZVLB512HAJQ-00000
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x0289e0d1
Device Boot Start End Sectors Size Id Type
/dev/nvme0n1p1 2048 67110911 67108864 32G fd Linux raid autodetect
/dev/nvme0n1p2 67110912 68159487 1048576 512M fd Linux raid autodetect
/dev/nvme0n1p3 68159488 1000213167 932053680 444.4G fd Linux raid autodetect
Disk /dev/nvme1n1: 477 GiB, 512110190592 bytes, 1000215216 sectors
Disk model: SAMSUNG MZVLB512HAJQ-00000
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xbcb5c0d2
Device Boot Start End Sectors Size Id Type
/dev/nvme1n1p1 2048 67110911 67108864 32G fd Linux raid autodetect
/dev/nvme1n1p2 67110912 68159487 1048576 512M fd Linux raid autodetect
/dev/nvme1n1p3 68159488 1000213167 932053680 444.4G fd Linux raid autodetect
Disk /dev/md1: 511 MiB, 535822336 bytes, 1046528 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/md0: 32 GiB, 34325135360 bytes, 67041280 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/md2: 444.3 GiB, 477076193280 bytes, 931789440 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
The mdstat output:
root@Debian-105-buster-64-minimal ~ # cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md2 : active raid1 nvme0n1p3[0] nvme1n1p3[1]
465894720 blocks super 1.2 [2/2] [UU]
bitmap: 0/4 pages [0KB], 65536KB chunk
md0 : active (auto-read-only) raid1 nvme0n1p1[0] nvme1n1p1[1]
33520640 blocks super 1.2 [2/2] [UU]
resync=PENDING
md1 : active raid1 nvme0n1p2[0] nvme1n1p2[1]
523264 blocks super 1.2 [2/2] [UU]
unused devices: <none>
Solution 1:
/dev/md2 is your root file system, so if you simply formatted it, your server would be gone for good. That is exactly why mkfs refuses to format a running, mounted file system.
Judging from your question, backing up and restoring the server is beyond your abilities right now.
Since you don't have any data on this machine yet, just reinstall it using your file system of choice; that's the easiest and safest way for you to achieve your goal.
Solution 2:
To be clear: "mkfs" deletes everything on the partition (what we usually call "formatting"). You can only "format" an unmounted (unused) partition, and you can't unmount your root (system) partition. Your only option is to re-run the install from scratch and change the default options when setting up your disks.
However, I don't know of any reason why you would want your root filesystem formatted as XFS. XFS is best suited for very large filesystems (roughly 50 terabytes to 2 petabytes), usually on very fast devices such as large RAID arrays. ext4 is perfectly fine as a root filesystem, and XFS would most probably provide nothing of value in your configuration.
Solution 3:
You're thinking of RAID as being more magical than it is. Once the RAID is set up and working, then from a practical standpoint, there is absolutely no difference between a partition on a RAID device and a partition on any other kind of device.
So, to reformat a RAID partition, you would first need to unmount it just like any other mounted partition with umount /dev/md2. Then you could run mkfs.xfs /dev/md2 to create the new filesystem, and then mount it again.
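As a rough sketch, assuming /dev/md2 were mounted somewhere other than / (the mount point /data below is purely hypothetical), the whole operation would be:
umount /data                 # or: umount /dev/md2
mkfs.xfs /dev/md2            # destroys everything on the array
mount /dev/md2 /data         # remount; also update /etc/fstab, since the filesystem type and UUID have changed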
Having said all that, you're not going to be able to follow those instructions on your specific setup. The reason is that /dev/md2 is your root filesystem. The root filesystem must remain mounted while the machine is running, so the umount command will fail. Reformatting the root partition takes a few extra steps:
1. Back up any data you want to keep.
2. Boot from a Live CD, preferably of the same distribution as what you want the new OS to be.
3. Typically, the Live CD will detect your RAID arrays automatically, so they will be available to you immediately. If not, you'll have to assemble the arrays yourself to be able to access the partitions. (Note: this is the only step that differs between partitions on RAID arrays and partitions on any other type of disk.)
4. Run mkfs.xfs <device>. (Note: there's no guarantee that the Live CD will call the partitions by the same names as the original OS did, so check first; a command sketch follows this list.)
5. Install the operating system of your choice on your new, blank XFS filesystem.
6. Reboot into the new OS, install any software you need, and restore the data you backed up in step #1.
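As a sketch of steps 3 and 4 from the live environment (the device names are the ones from the original OS and may differ under the Live CD, so verify them first):
mdadm --assemble --scan      # only needed if the live environment did not assemble the arrays itself
cat /proc/mdstat             # check which md device corresponds to the old root partition
lsblk -f                     # shows the arrays, their sizes, and the current filesystems
mkfs.xfs /dev/md2            # wipe it and create the XFS filesystem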
Solution 4:
This is your current disk layout.
+-----+ +-------+ +------------------------+
| | | | | |
+-----------+--------------+-----------------------------+
| nvme0n1p1 | nvme0n1p2 | nvme0n1p3 | <- Disk0 nvme0n1
+-----------+--------------+-----------------------------+
| | | | | |
+-----------+--------------+-----------------------------+
| nvme1n1p1 | nvme1n1p2 | nvme1n1p3 | <- Disk1 nvme1n1
+-----------+--------------+-----------------------------+
| | | | | |
| md0 | | md1 | | md2 |
+-----+ +-------+ +------------------------+
Swap /boot / (the root disk)
So you have three separate Linux software RAID1 partitions.
md1 and md2 have filesystems on them (ext3 on md1 and ext4 on md2, going by your df output), and your files are inside those filesystems. md0 is used for swap and holds no files.
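You can verify this layout on the running system with a few read-only commands, for example:
lsblk -f           # shows each NVMe partition, the md array built on it, and the filesystem on the array
cat /proc/mdstat   # shows which partitions belong to which array and the sync state
swapon --show      # should show md0 as the device backing your swap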
To change filesystem, you will have to back up the data, redo the filesystem, and restore the data.
Doing this requires you to boot off another disk, like a LiveCD or rescue disk, because you're messing with the root file system.
You say this box is at a data center, so you must either visit the DC and work there, or, if the hardware is a server-grade device, use its out-of-band management console: an iLO (HP), a DRAC (Dell), a CIMC (Cisco), an RSA (IBM), or an IPMI interface (the more generic term, used by Supermicro and other makers).
Regardless, the host will be out of service while you work on it.
Here's suggested plan #1 if you have data:
Note this is long and convoluted and doubtless has some errors; check plan #2 below as well.
- Check your backups are working. If this goes badly, you will need them.
- Organise a spare disk - you have ~500 GB but not all of it is used; a 4 GB pen drive is probably enough based on your df output.
- Organise an outage window, or a maintenance window. If you think this will take 10 minutes, make it 4 hours long.
- Organise a visit with your DC. Some have more stringent requirements than others, and some don't allow unsupervised access to the data floor, especially if you share a rack with other customers.
- Ask the DC for a crash cart, and if they don't have one, organise a working monitor/keyboard/mouse plus power cabling and a multibox (power strip).
- Download the latest ISO for your distro. Create a CD or a USB disk and test that it works on some spare box (see the sketch after this list).
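A sketch of that USB preparation step (the ISO filename and /dev/sdX are placeholders - check the real device with lsblk first, because dd will overwrite it without asking):
sha256sum debian-live.iso                               # compare against the checksum published next to the image
dd if=debian-live.iso of=/dev/sdX bs=4M status=progress conv=fsync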
On the day:
- Remind everyone affected of the planned outage and the consequences (i.e. "the Foo server will be down tonight, which means no access to the Foo systems from 10 PM till 6 AM").
- Pack a warm coat/hat and earmuffs - DCs can be cold and loud.
- Arrive at the DC before the window, connect up, and make sure you have a working console.
- Connect your temporary USB disk, format it, and do a dirty rsync of the data; 1.2 GB won't take very long (see the sketch after this list).
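The preliminary copy might look like this, assuming the USB stick appears as /dev/sdb (the device name, label and mount point are only examples - adjust them to what lsblk shows):
mkfs.ext4 -L backup /dev/sdb1             # wipe and format the stick
mkdir -p /x && mount /dev/sdb1 /x
rsync -avHx --exclude=/x / /x/            # root filesystem (md2); -x keeps rsync on one filesystem
rsync -avH /boot /x/                      # /boot lives on md1, a separate filesystem, so copy it explicitly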
At the time (a condensed command sketch of these steps follows the list):
- Start on time: shut down the server, boot into a live environment off your distro's live disk, and get root.
- Assemble the RAID partitions (not strictly necessary here because they are RAID1 and either disk is readable)
- Mount the arrays, with md2 as /oldbox and then md1 as /oldbox/boot.
- Mount your other USB disk as /x or something clearly different
- Do the final dump with something like time rsync -avH /oldbox/ /x/ --progress --delete and wait.
- Repeat the above command - no files should change, and it should complete quickly the second time.
- df -h should show about the same amount of data on /x as on /oldbox.
- umount /x and remove it from the host. This is to protect your data.
- umount /oldbox/boot and /oldbox.
This is the point of no-return and no easy rollback.
- Make the new filesystems with something like mkfs.xfs -L rootdisk /dev/md2 and mkfs.xfs -L bootdisk /dev/md1; they may require -f to overwrite the existing filesystems.
- Mount the new filesystems in the Live OS on /oldbox and /oldbox/boot.
- Re-add your USB disk and mount it read-only as /x with mount -o ro /dev/sda1 /x or similar.
- rsync the data back with time rsync -avH /x/ /oldbox/ --progress --delete
- Go have a coffee/fresh air/etc, let this finish.
- At a command prompt, chroot /oldbox - this will give you a root prompt "in the new disks".
- Bind-mount /dev into /oldbox (mount --bind /dev /oldbox/dev, run before the chroot) so you have device nodes inside the chroot.
- Check the contents of /oldbox/etc/fstab - the new filesystems have new UUIDs and a new type (xfs), so those entries will need updating.
- Reinstall GRUB with grub-install; it should go into the MBR of both /dev/nvme0n1 and /dev/nvme1n1.
- Umount all the disks, remove the USB disk, and store it.
- Reboot and hope it works.
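To tie those bullets together, here is a condensed, hedged sketch of the whole sequence as it might look from the live environment. Device names, labels and mount points are the ones used above; verify every one on your own system before running anything destructive.
# assemble and mount the existing arrays, plus the USB stick holding the backup
mdadm --assemble --scan
mkdir -p /oldbox /x
mount /dev/md2 /oldbox
mount /dev/md1 /oldbox/boot
mount /dev/sdb1 /x

# final copy out, then unmount everything
rsync -avH /oldbox/ /x/ --progress --delete
umount /x /oldbox/boot /oldbox

# point of no return: create the new filesystems
mkfs.xfs -f -L bootdisk /dev/md1
mkfs.xfs -f -L rootdisk /dev/md2

# remount, restore the data, then fix up the installed system from a chroot
mount /dev/md2 /oldbox
mkdir -p /oldbox/boot
mount /dev/md1 /oldbox/boot
mount -o ro /dev/sdb1 /x
rsync -avH /x/ /oldbox/ --progress --delete
mount --bind /dev  /oldbox/dev
mount --bind /proc /oldbox/proc
mount --bind /sys  /oldbox/sys
chroot /oldbox /bin/bash
# inside the chroot:
blkid /dev/md1 /dev/md2        # new UUIDs to put into /etc/fstab (type xfs)
update-initramfs -u            # make sure the initramfs can mount an XFS root
grub-install /dev/nvme0n1
grub-install /dev/nvme1n1
update-grub
exit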
Note this process has been written from memory, and doubtless there's some step I missed. You might want to set up a spare computer at work with two disks, install Debian with ext4 and RAID 1 just as in prod, and try the process offline first. You could even do that pre-test in a virtual machine.
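For that pre-test, a throwaway two-disk VM is enough; something along these lines with QEMU (the ISO filename is a placeholder, and the disk sizes are arbitrary):
# two empty virtual disks to stand in for the two NVMe drives
qemu-img create -f qcow2 disk0.qcow2 20G
qemu-img create -f qcow2 disk1.qcow2 20G

# boot the Debian installer against them and set up RAID1 + ext4 exactly as in prod
qemu-system-x86_64 -enable-kvm -m 2048 \
    -drive file=disk0.qcow2,format=qcow2 \
    -drive file=disk1.qcow2,format=qcow2 \
    -cdrom debian-10-netinst.iso -boot d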
Plan #2: the easier way
You only have 1.2 GB of files on your disk. Is there any service actually running? If not, do a backup as described above, and then format the whole machine. Install from scratch and then restore just the parts of the data you require. You might choose to go with LVM but that's more complexity.
Upshot: creating a new filesystem deletes the files in the old one, and changing the root filesystem can't be done while the host is running.