Doing an rm -rf on a massive directory tree takes hours
Solution 1:
No.
rm -rf does a recursive depth-first traversal of your filesystem, calling unlink() on every file. The two operations that cause the process to go slowly are opendir()/readdir() and unlink(). opendir() and readdir() are dependent on the number of files in the directory. unlink() is dependent on the size of the file being deleted. The only way to make this go quicker is to either reduce the size and number of files (which I suspect is not likely) or change to a filesystem with better characteristics for those operations. I believe that XFS is good for unlink() on large files, but isn't so good for large directory structures. You might find that ext3+dirindex or reiserfs is quicker. I'm not sure how well JFS fares, but I'm sure there are plenty of benchmarks of different filesystem performance.
Edit: It seems that XFS is terrible at deleting trees, so definitely change your filesystem.
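The depth-first order described above can be seen with find -depth, which visits children before their parent: the same order rm -rf needs so that each directory is already empty by the time it is removed (the /tmp path below is just a throwaway demo, not from the answer):

```shell
mkdir -p /tmp/dfs-demo/a/b
touch /tmp/dfs-demo/a/b/file
# -depth lists a directory's contents before the directory itself,
# mirroring rm -rf's unlink-the-children-then-rmdir order
find /tmp/dfs-demo -depth
rm -rf /tmp/dfs-demo
```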
Solution 2:
As an alternative, move the directory aside, recreate it with the same name, permissions and ownership and restart any apps/services that care about that directory.
You can then "nice rm" the original directory in the background without having to worry about an extended outage.
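A minimal sketch of that move-aside trick, using a throwaway /tmp tree as a stand-in for the real directory (the paths and the GNU chmod/chown --reference flags are my assumptions, not from the answer):

```shell
set -e
# stand-in for the real, massive directory
mkdir -p /tmp/demo/data && touch /tmp/demo/data/file1
# moving within the same filesystem is a rename: instant, regardless of tree size
mv /tmp/demo/data /tmp/demo/data.to-delete
# recreate with the same name, then copy mode and ownership from the original
mkdir /tmp/demo/data
chmod --reference=/tmp/demo/data.to-delete /tmp/demo/data
chown --reference=/tmp/demo/data.to-delete /tmp/demo/data
# delete the old tree in the background at reduced CPU priority
nice rm -rf /tmp/demo/data.to-delete &
wait
```

Apps see only the brief window of the rename and mkdir, not the hours-long delete.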
Solution 3:
Make sure you have the right mount options set for XFS.
Using -o logbufs=8,logbsize=256k when mounting XFS will probably triple your delete performance.
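For instance, a hypothetical /etc/fstab entry applying those options to an XFS volume (the device name and mount point are placeholders):

```
/dev/sdb1  /data  xfs  logbufs=8,logbsize=256k  0 0
```

logbufs and logbsize control the number and size of in-memory log buffers, so metadata-heavy workloads like mass deletes batch more log writes before hitting disk.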
Solution 4:
It's good to use ionice for IO-intensive operations like that regardless of filesystem used.
I suggest this command:
ionice -n7 nice rm -fr dir_name
It will play nicely for background operations on a server with a heavy IO load.
Solution 5:
If you are doing the rm effectively at the file level then it will take a long time. This is why block-based snapshots are so good :).
You could try splitting the rm into separate areas and doing them in parallel, though I wouldn't expect much improvement. XFS is known to have issues deleting files, and if that is a large part of what you do then maybe a different filesystem for that would be an idea.
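One way to sketch that splitting, using GNU find and xargs to run several rm processes over the top-level subdirectories (the /tmp path and the parallelism of 4 are made up for illustration):

```shell
set -e
# build a small stand-in tree with four top-level subdirectories
mkdir -p /tmp/par-demo/a /tmp/par-demo/b /tmp/par-demo/c /tmp/par-demo/d
touch /tmp/par-demo/a/f /tmp/par-demo/b/f /tmp/par-demo/c/f /tmp/par-demo/d/f
# delete each top-level subdirectory in its own rm process, up to 4 at a time
find /tmp/par-demo -mindepth 1 -maxdepth 1 -type d -print0 \
  | xargs -0 -P4 -I{} rm -rf {}
```

As the answer says, this mostly helps if the bottleneck is per-process, not the disk: on a single spindle the parallel rm processes just contend for the same IO.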