Fast way to recursively count files in Linux

I'm using the following to count the number of files in a directory, and its subdirectories:

find . -type f | wc -l

But I have half a million files in there, and the count takes a long time.

Is there a faster way to get a count of the number of files that doesn't involve piping a huge amount of text to something that counts lines? It seems like an inefficient way to do things.


Solution 1:

If you have this on a dedicated file-system, or if the number of other files on it stays steady, you may be able to get a rough enough count of the number of files by looking at the number of inodes in use on the file-system via "df -i":

root@dhcp18:~# df -i
Filesystem            Inodes   IUsed   IFree IUse% Mounted on
/dev/sda1            60489728   75885 60413843    1% /

On my test box above I have 75,885 inodes allocated. However, these inodes are not just files; they also include directories. For example:

root@dhcp18:~# mkdir /tmp/foo
root@dhcp18:~# df -i /tmp 
Filesystem            Inodes   IUsed   IFree IUse% Mounted on
/dev/sda1            60489728   75886 60413842    1% /
root@dhcp18:~# touch /tmp/bar
root@dhcp18:~# df -i /tmp
Filesystem            Inodes   IUsed   IFree IUse% Mounted on
/dev/sda1            60489728   75887 60413841    1% /

NOTE: Not all file-systems maintain inode counts the same way. ext2/3/4 will all work; btrfs, however, always reports 0.

If you have to differentiate files from directories, you're going to have to walk the file-system and "stat" each one to see if it's a file, directory, sym-link, etc... The biggest issue here is not the piping of all the text to "wc", but seeking around among all the inodes and directory entries to put that data together.
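A quick way to check that claim is to have find print one byte per file instead of the full path, so almost nothing goes through the pipe (a hedged sketch; GNU find's -printf is assumed):

# one '.' per file instead of the full path; wc -c counts the bytes
find . -type f -printf '.' | wc -c

On a large tree this rarely runs much faster than the original command, because the directory walk, not the pipe, is what dominates the runtime.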

Other than the inode table as shown by "df -i", there really is no database of how many files there are under a given directory. However, if this information is important to you, you could create and maintain such a database by having your programs increment a number when they create a file in this directory and decrement it when deleted. If you don't control the programs that create them, this isn't an option.
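If you can't modify those programs, one rough workaround is to watch the directory tree and keep a running counter yourself. A minimal sketch, assuming inotify-tools is installed and /path/to/dir is a placeholder (moves/renames are not handled, and inotify watch limits may need raising for a tree this large):

# seed the counter once with a full walk
count=$(find /path/to/dir -type f | wc -l)

# keep it updated as files are created and deleted
inotifywait -m -r -e create -e delete --format '%e' /path/to/dir |
while read -r event; do
    case "$event" in
        *ISDIR*) ;;                        # ignore directory events
        CREATE*) count=$((count + 1)) ;;
        DELETE*) count=$((count - 1)) ;;
    esac
    echo "$count" > /tmp/filecount         # current count, readable any time
done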

Solution 2:

I wrote a custom file-counting program for this StackOverflow question: https://stackoverflow.com/questions/1427032/fast-linux-file-count-for-a-large-number-of-files

You can find the GitHub repo here if you'd like to browse, download, or contribute: https://github.com/ChristopherSchultz/fast-file-count

Solution 3:

If you want to recursively count the number of files in a directory, the locate command is the fastest one I know, assuming you have an up-to-date database (sudo updatedb, run by default via a daily cron job). You can speed the command up further by avoiding a pipe to grep or wc entirely.

See man locate:

-c, --count
       Instead  of  writing  file  names on standard output, write the number of 
       matching entries only.

So the fastest command is:

locate -c -r '/path/to/dir'
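Note that -r takes a basic regular expression matched against the whole path, so an entry such as /path/to/dir-old would also be counted. Anchoring the pattern narrows it to the directory you mean (it still counts directories and other non-regular files in the database, and only reflects the state at the last updatedb run):

# count only entries whose path starts with /path/to/dir/
locate -c -r '^/path/to/dir/'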

Solution 4:

I would also try:

find topDir -maxdepth 3 -printf '%h %f\n'

Then process the output, reducing it into a per-directory count, as sketched below.

This is especially useful if you already know roughly how the directory structure is laid out.
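As a rough sketch of that processing step (assuming GNU find, and restricting to regular files), print just each file's directory and let sort/uniq do the reduction:

# files per directory, largest first, down to three levels deep
find topDir -maxdepth 3 -type f -printf '%h\n' | sort | uniq -c | sort -rn

Summing the first column (for example with awk) then gives the overall count within that depth.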

Solution 5:

Parallelize it. Run a separate find command for each subdirectory, and run them all at the same time. This can be automated with xargs, as sketched below.
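A hedged sketch of that idea with GNU xargs (the directory and the job count are placeholders; files sitting directly in the top-level directory would still need one extra, non-recursive count):

# count each top-level subdirectory in parallel, 8 jobs at a time,
# then add the per-directory totals together
find . -mindepth 1 -maxdepth 1 -type d -print0 |
    xargs -0 -n 1 -P 8 sh -c 'find "$1" -type f | wc -l' _ |
    awk '{ total += $1 } END { print total }'

Whether this actually helps depends on whether your storage can serve several metadata-heavy scans at once; on a single slow disk it may not.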