How to delete an insanely large folder in Linux

We have a cache folder that accidentally grew large enough to break the server. The server has 8 GB of RAM, and when I ran a simple rm command to delete all the files inside, it consumed all of the RAM and still had not deleted anything after 5 hours.

So we tried find, but that failed too after 12 hours. For the last 24 hours a find-with-perl command has been running, and the folder is still not deleted; in fact, not a single file has been removed.

When we ls the parent folder, it shows the folder's size as around 1 GB. I can only wonder how many millions of files are in there.

So, my question: is there any way to delete the files without listing them first, i.e. delete the folder or the files within it without building up a full list of entries in memory (rather than slurping everything returned by getdents()-style system calls before starting)?

I am seriously considering reformatting the server just to get rid of it.

EDIT:

I have tried find with -delete and with -exec rm -f {} \;

EDIT2:

Based on this article we are running the perl command right now (it has been running for 24+ hours at this point), but no visible change to the folder size has happened.


One option would be to use find's -delete switch. I know you said you've already used find, but you haven't said exactly how. Normally find will try to enumerate every file, but I believe the following command deletes entries as it finds them:

find . -type f -print -delete

(You can drop -print if you don't want any output, and of course cd into your folder first, or give the correct folder name instead of .)

Worth a try at least.
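To make the behaviour concrete, here is a small throwaway demo (all paths are temporary and hypothetical): it creates a directory full of files and removes them with -delete, which unlinks entries as the directory walk proceeds instead of first materialising a complete file list the way rm dir/* would via shell globbing.

```shell
# Create a disposable directory with a pile of empty files.
demo=$(mktemp -d)
for i in $(seq 1 1000); do : > "$demo/file$i"; done

# -delete unlinks each match as find reaches it; no -print keeps it quiet.
find "$demo" -type f -delete

# The directory is now empty, so rmdir succeeds.
rmdir "$demo"
```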

Edit -- here's a suggestion from another question over at Unix & Linux:

mkdir empty_dir
rsync -a --delete empty_dir/    yourdirectory/
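A quick self-contained demo of the rsync trick (temporary, made-up paths): syncing an empty source directory with --delete makes rsync remove everything in the target that is absent from the source, and it processes entries incrementally rather than holding the whole file list at once. Note the trailing slashes, which tell rsync to sync directory contents rather than the directories themselves.

```shell
# A target directory stuffed with files, and an empty source directory.
target=$(mktemp -d)
for i in $(seq 1 1000); do : > "$target/file$i"; done
empty=$(mktemp -d)

# Sync the empty dir into the full one; --delete removes extraneous files.
rsync -a --delete "$empty"/ "$target"/

# Both directories are now empty, so rmdir succeeds on each.
rmdir "$target" "$empty"
```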

After trying the various suggestions, the one that worked for me was to back up the files I needed, reformat the mount, and restore them. Of course, this depends on how large the files you want to back up are. Then again, one can only assume you are taking backups of your filesystems anyway, in which case you can simply format and restore.
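The backup/restore half of that approach can be sketched like this (hypothetical paths throughout; the actual "reformat the mount" step would be something like mkfs on the cache filesystem, which is destructive and is stood in for here by a plain rm -rf):

```shell
# Stand-in for the data worth keeping on the mount.
src=$(mktemp -d)
echo "keep me" > "$src/important.conf"

# Back up the directory contents to a tar archive.
backup=$(mktemp -u).tar
tar -cf "$backup" -C "$src" .

# Stand-in for reformatting the filesystem: wipe and recreate the tree.
rm -rf "$src"
mkdir "$src"

# Restore from the archive.
tar -xf "$backup" -C "$src"
```

On a real server you would archive to a different filesystem than the one you are about to reformat.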