A better unix find with parallel processing?

The unix find(1) utility is very useful, allowing me to perform an action on many files that match certain criteria, e.g.

find /dump -type f -name '*.xml' -exec java -jar ProcessFile.jar {} \;

The above might run a script or tool over every XML file in a particular directory.

Let's say my script/program takes a lot of CPU time and I have 8 processors. It would be nice to process up to 8 files at a time.

GNU make allows for parallel job processing with the -j flag but find does not appear to have such functionality. Is there an alternative generic job-scheduling method of approaching this?


Solution 1:

xargs with the -P option (number of processes). Say I wanted to compress all the logfiles in a directory on a 4-cpu machine:

find . -name '*.log' -mtime +3 -print0 | xargs -0 -P 4 bzip2

You can also say -n <number> for the maximum number of work-units per process. So say I had 2500 files and I said:

find . -name '*.log' -mtime +3 -print0 | xargs -0 -n 500 -P 4 bzip2

This would start 4 bzip2 processes, each given 500 files; when the first one finished, another would be started for the last 500 files.
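To avoid hard-coding the process count, you can size -P from the machine itself. A minimal sketch (assuming GNU xargs for the -r flag and GNU coreutils for nproc; on macOS substitute sysctl -n hw.ncpu):

```shell
# Compress matching logs with one worker per CPU core.
# -r (GNU xargs) skips running bzip2 entirely if find matches nothing;
# -n 1 hands each worker one file at a time for even load balancing.
find . -name '*.log' -mtime +3 -print0 \
  | xargs -0 -r -P "$(nproc)" -n 1 bzip2
```

With -n 1, a slow file ties up only one worker while the others keep draining the queue, at the cost of one fork/exec per file.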

Not sure why another answer combines xargs and make; that's two parallel engines in one pipeline!

Solution 2:

GNU parallel can help too.

find /dump -type f -name '*.xml' | parallel -j8 java -jar ProcessFile.jar {}

Note that without the -j8 argument, parallel defaults to the number of cores on your machine :-)

Solution 3:

No need to "fix" find - make use of make itself to handle the parallelism.

Have your process create a log file or some other output file, and then use a Makefile like this:

.SUFFIXES:  .xml .out

.xml.out:
        java -jar ProcessFile.jar $< 1> $@

and invoked thus:

find /dump -type f -name '*.xml' | sed -e 's/\.xml$/.out/' | xargs make -j8

Better yet, if you ensure that the output file only gets created on successful completion of the Java process you can take advantage of make's dependency handling to ensure that next time around only unprocessed files get done.
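One way to guarantee that (a sketch, not part of the original answer; the .tmp suffix is arbitrary) is to have the rule write to a temporary file and rename it only if the Java process exits successfully, since mv within one filesystem is atomic:

```makefile
.SUFFIXES:  .xml .out

# Write to $@.tmp first; only a successful run produces the real $@,
# so a crashed or interrupted job leaves no .out for make to trust.
.xml.out:
	java -jar ProcessFile.jar $< 1> $@.tmp && mv $@.tmp $@
```

On the next run, make skips every .xml that already has an up-to-date .out and retries only the failures.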

Solution 4:

find can batch arguments directly using the "+" terminator to -exec; no xargs required. Note that this is argument batching, not parallelism: like xargs, it packs many filenames into each invocation, so it rips through your tree quickly by avoiding a fork/exec per file, but the invocations still run one after another. For example, if I'm looking for all files in my sources directory containing the string 'foo', I can invoke

find sources -type f -exec grep -H foo {} +
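The batching is easy to see with echo. A small demonstration (the demo directory and file names here are made up for illustration):

```shell
# With `+`, find packs many paths into one echo invocation, so all the
# names come out on a single line; the `\;` form would instead run echo
# once per file, printing one name per line.
mkdir -p demo && touch demo/a.txt demo/b.txt demo/c.txt
find demo -name '*.txt' -exec echo {} +
```

If you need both batching and multiple CPUs, fall back to the find ... -print0 | xargs -0 -P pipeline from Solution 1.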