Parallelize Bash script with maximum number of processes

Depending on what you want to do, xargs can also help (here: converting documents with pdf2ps):

cpus=$( ls -d /sys/devices/system/cpu/cpu[[:digit:]]* | wc -w )

find . -name \*.pdf | xargs --max-args=1 --max-procs=$cpus  pdf2ps

From the docs:

--max-procs=max-procs
-P max-procs
       Run up to max-procs processes at a time; the default is 1.
       If max-procs is 0, xargs will run as many processes as possible at
       a time.  Use the -n option with -P; otherwise chances are that only
       one exec will be done.
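If the filenames may contain spaces, a null-delimited pipeline is safer. A minimal sketch, assuming GNU findutils and coreutils (nproc gives the same CPU count as the ls pipeline above):

find . -name \*.pdf -print0 | xargs -0 --max-args=1 --max-procs="$(nproc)" pdf2ps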

With GNU Parallel http://www.gnu.org/software/parallel/ you can write:

some-command | parallel do-something
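For example, the pdf2ps conversion from above could be written as follows (a sketch; by default GNU Parallel runs one job per CPU core and quotes its arguments, so filenames with spaces are handled):

find . -name \*.pdf | parallel pdf2ps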

GNU Parallel also supports running jobs on remote computers. This will run one job per CPU core on the remote computers - even if they have a different number of cores:

some-command | parallel -S server1,server2 do-something

A more advanced example: here we have a list of files that we want my_script to run on. The files have an extension (maybe .jpeg). We want the output of my_script to be put next to the files as basename.out (e.g. foo.jpeg -> foo.out). We want to run my_script once for each core the computer has, and we want to run it on the local computer, too. For the remote computers, we want the file to be processed transferred to the given computer. When my_script finishes, we want foo.out transferred back, and we then want foo.jpeg and foo.out removed from the remote computer:

cat list_of_files | \
parallel --trc {.}.out -S server1,server2,: \
"my_script {} > {.}.out"

GNU Parallel makes sure the output from each job does not mix, so you can use the output as input for another program:

some-command | parallel do-something | postprocess

See the videos for more examples: https://www.youtube.com/playlist?list=PL284C9FF2488BC6D1


maxjobs=4
parallelize () {
        while [ $# -gt 0 ] ; do
                jobcnt=( $(jobs -p) )              # PIDs of the currently running background jobs
                if [ ${#jobcnt[@]} -lt "$maxjobs" ] ; then
                        do-something "$1" &        # start the next job in the background
                        shift
                else
                        sleep 1                    # pool is full; poll again in a second
                fi
        done
        wait                                       # wait for the remaining jobs to finish
}

parallelize arg1 arg2 "5 args to third job" arg4 ...

Here is an alternative solution that can be inserted into .bashrc and used for everyday one-liners:

function pwait() {
    # block while the number of running background jobs is at least $1
    while [ $(jobs -p | wc -l) -ge "$1" ]; do
        sleep 1
    done
}

To use it, all one has to do is put & after the jobs and a pwait call after them; the parameter gives the maximum number of parallel processes:

for i in *; do
    do_something "$i" &
    pwait 10
done

It would be nicer to use wait instead of busy-waiting on the output of jobs -p, but there doesn't seem to be an obvious solution to wait until any one of the given jobs is finished instead of all of them.
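Newer versions of bash (4.3 and later) do provide wait -n, which returns as soon as any one background job exits. A sketch of the same loop using it instead of sleep (assuming such a bash; do_something and the limit of 10 are the placeholders from above):

for i in *; do
    do_something "$i" &
    while [ $(jobs -p | wc -l) -ge 10 ]; do
        wait -n    # block until any one background job exits
    done
done
wait    # wait for the remaining jobs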


Instead of plain bash, use a Makefile, then specify the number of simultaneous jobs with make -jX where X is the number of jobs to run at once.
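A minimal sketch of that approach, reusing the pdf2ps conversion from above (the Makefile contents are an illustration, not part of the original; note that the recipe line must be indented with a tab):

# Makefile: convert every .pdf in the current directory to .ps
SRC := $(wildcard *.pdf)
OUT := $(SRC:.pdf=.ps)

all: $(OUT)

%.ps: %.pdf
	pdf2ps $< $@

Run it with make -j4 to convert at most four files at a time.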

Or you can use wait ("man wait"): launch several child processes, then call wait - it will return when all the child processes have finished.

maxjobs=10

job() {
    : # ... the actual work for one line goes here
}

jobsrunning=0
while read -r line; do
    job "$line" &
    jobsrunning=$((jobsrunning + 1))
    if [ "$jobsrunning" -ge "$maxjobs" ]; then
        wait            # wait for the whole batch to finish
        jobsrunning=0
    fi
done < file.txt
wait                    # wait for the last, possibly partial batch

If you need to store the jobs' results, keep in mind that a backgrounded job runs in a subshell and cannot assign to a variable in the parent shell, so capture each result some other way (for example in a file per job); after wait you can check what each job produced.
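A minimal sketch of collecting those results, assuming the job function from above and one temporary output file per job (the directory and file naming are only an illustration):

results_dir=$(mktemp -d)

i=0
while read -r line; do
    job "$line" > "$results_dir/$i.out" &
    i=$((i + 1))
done < file.txt
wait

# inspect the collected results, then clean up
cat "$results_dir"/*.out
rm -r "$results_dir"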