resize2fs seems stuck at pass 3 (scanning inode table) - what to do?

I have a machine running Arch Linux (from 2010, I believe) with a 6TB RAID-5 array hooked up to a Highpoint RocketRaid 2320. I've been having issues with the RAID controller's drivers on recent Linux kernels because the driver is not open source, so I am migrating the system to Windows Server.

The problem is that the 6TB disk originally contained only an ext4 partition. I shrunk that partition down as much as I could and added an NTFS partition in the freed space so I could start moving files. That went fine. Now I need to shrink the ext4 partition again, move more files, shrink again, and so on. The second run of resize2fs is taking far longer than the first, and it seems to be stuck at pass 3:

[root@nar-shaddaa rc.d]# resize2fs -p /dev/sdb3 863000000
resize2fs 1.41.14 (22-Dec-2010)
Resizing the filesystem on /dev/sdb3 to 863000000 (4k) blocks.
Begin pass 2 (max = 29815167)
Relocating blocks             XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Begin pass 3 (max = 36670)
Scanning inode table          XXXXXXXXXXX-----------------------------

It has been sitting this way, sucking up 100% of one core for more than 19 hours now:

[root@nar-shaddaa rc.d]# ps aux | grep resize2fs
root     16277 94.1 19.8 627096 613940 pts/1   R+   Jun15 1184:37 resize2fs -p /dev/sdb3 863000000

I had originally run resize2fs -P /dev/sdb3 to get the minimum filesystem size, but to be safe I rounded up to the nearest million blocks (hence the even 863 million). Prior to starting resize2fs, e2fsck reported the filesystem as clean:

[root@nar-shaddaa rc.d]# e2fsck -yv /dev/sdb3
e2fsck 1.41.14 (22-Dec-2010)
x-files: clean, 286672/300400640 files, 867525660/1201576187 blocks
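The rounding step above is simple integer arithmetic; here is a minimal sketch of it in shell. The minimum block count is a made-up example value (the post doesn't give the actual figure reported by resize2fs -P, only that it rounded up to 863 million):

```shell
# Hypothetical minimum size reported by `resize2fs -P /dev/sdb3`
# (example value only -- the real figure is not in the post)
min_blocks=862413007

# Round up to the next whole million 4k blocks for headroom
target_blocks=$(( (min_blocks + 999999) / 1000000 * 1000000 ))

echo "$target_blocks"
```

With this example input the computed target comes out to 863000000, the value passed to resize2fs above.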

I'm concerned because this has been going on far longer than the first resize took (which was just under an hour), and resize2fs is giving no progress updates at all even though it is clearly consuming CPU cycles. Do I wait longer (and if so, how long)? Or do I cancel it and use a different tool to shrink the partition?


Solution 1:

I finally figured out what it was. After cancelling the original resize (a simple Ctrl+C), I ran e2fsck -f -y /dev/sdb3 to correct any issues the interrupted resize might have introduced. I could still mount the partition at its original size, so no data was lost. I then ran resize2fs with the debug flag (resize2fs -d 14 <xxx>) and noticed that it was stuck in an endless loop trying to relocate the same chunk of inodes.

I finally got it to work by using an older version of e2fsprogs. I put Ubuntu 9.10 (Karmic Koala) on a USB stick, booted into it, installed the open-source rr232x drivers so I could access the array, and ran the older resize2fs (version 1.41.9, dated 22-Aug-2009, to be exact).

I had originally tried resize2fs -p /dev/sdb3 863000000 again, and this time it told me it needed roughly 26 million more blocks. So I added that to the target size, rounded up generously, and ran resize2fs -p /dev/sdb3 1000000000. Ten minutes later I was greeted with the message:

/dev/sdb3 is now at 1000000000 blocks
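For clarity, the padding arithmetic behind the new target looks like this (the ~26 million figure is the approximate shortfall the older resize2fs reported; the post then rounds well past the padded value to an even billion):

```shell
# Original target and the approximate extra blocks resize2fs said it needed
requested=863000000
shortfall=26000000   # "~26 million blocks", per the older resize2fs

padded=$(( requested + shortfall ))
echo "$padded"
```

That gives 889000000 as the bare minimum; using 1000000000 instead leaves a comfortable margin for the next shrink-and-move round.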

Now I guess the ultimate question is why the newer version of e2fsprogs couldn't, or wouldn't, tell me that I was asking for too small a size (and why its -P estimate suggested a size that small in the first place)?