Run Bash scripts in folder all at the same time
Solution 1:
GNU parallel is perfect for this sort of thing. Install it with sudo apt install parallel and then:
parallel -j0 ::: my_folder/*.sh
The -j option controls how many processes to run in parallel, and -j0 means "all of them". Set it to another value (e.g. -j20) to run the scripts in batches instead.
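If you also want a record of how each script fared, GNU parallel can write a job log. A minimal sketch, assuming a batch size of 20 and parallel.log as an arbitrary log file name:
# Run at most 20 scripts at once; parallel.log will record the start
# time, runtime and exit status of each one
parallel -j20 --joblog parallel.log bash ::: my_folder/*.sh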
Solution 2:
To run all scripts at the same time (in parallel) use:
script_1.sh &
script_2.sh &
script_3.sh &
script_4.sh &
script_5.sh &
To run them one after the other (sequentially), so that each script starts only if the previous one exited successfully, use:
script_1.sh &&
script_2.sh &&
script_3.sh &&
script_4.sh &&
script_5.sh
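If the calling shell should block until all of the backgrounded scripts have finished, follow them with wait. A minimal sketch using the same five script names:
#!/bin/bash
script_1.sh &
script_2.sh &
script_3.sh &
script_4.sh &
script_5.sh &
wait    # returns once all five background jobs have exited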
Enhancement from the comments
If you have 200 scripts you want to run at the same time (which might bog down the machine, BTW), use this script:
#!/bin/bash
# Dry run: print the command that would be used for each script
for Script in my_folder/*.sh ; do
    echo bash "$Script" &
done
Set the script's attributes to executable with the command:
chmod a+x /path/to/script.sh
The first time you run the script it will only echo the names of the 200 scripts it would be executing. When you are happy that the right names are being selected, edit the script and change this line:
echo bash "$Script" &
to:
bash "$Script" &
There are three ways you can call a bash script from another, as answered here:
- How to call shell script from another shell script?
1. Make the other script executable, add the #!/bin/bash line at the top, and add the directory where the file lives to the $PATH environment variable. Then you can call it as a normal command.
2. Or call it with the source command (whose alias is .) like this:
source /path/to/script
3. Or use the bash command to execute it:
/bin/bash /path/to/script
In the OP's case, one or more of the 200 scripts did not contain the shebang #!/bin/bash as the first line of the file, so option 3 had to be used.
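For reference, the three call styles look like this side by side (using a hypothetical /path/to/script.sh):
# 1. As a normal command: the script must be executable, start with
#    #!/bin/bash, and live in a directory listed in $PATH
script.sh

# 2. Sourced into the current shell (so it can modify this shell's
#    environment); "." is an alias for source
source /path/to/script.sh

# 3. Passed to bash explicitly: no shebang or execute bit required,
#    which is why this option worked for the OP's 200 scripts
/bin/bash /path/to/script.sh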
200 Scripts running at the same time
A comment has been raised about whether they are "running at the same time". On a typical 8 CPU system, 25 scripts will be sharing each CPU, but only one script executes at any instant until its time slice (measured in milliseconds) runs out. Then the next job receives its fair share of milliseconds, then the next job, and so on.
In loose terms we can say the 200 jobs are running "concurrently" but not "simultaneously" across the 8 CPUs, which equates to 25 jobs per CPU.
[Image omitted: CPU time-slicing illustration from Linux kernel scheduler]
Solution 3:
There's a tool for this; read man run-parts.
For example, I do:
run-parts ${visorhome}/pbackup.d/
in my Palm Pilot backup script. ${visorhome}/pbackup.d/ contains:
01PopulateJpilot 02Extract_Pedometer 03URLs 04google 05Books 06Weight 07Sec 08Bkgm 50hardlinks
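If you want to preview what run-parts would execute before letting it loose, it has a test mode. A minimal sketch, reusing the same directory:
# List the scripts run-parts would execute, without running them
run-parts --test ${visorhome}/pbackup.d/
Note that by default run-parts skips file names containing a dot, so scripts named *.sh would need to be renamed or matched with its --regex option.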
Solution 4:
To run all *.sh scripts in the my_folder directory at the same time using xargs:
$ ls my_folder/*.sh | xargs -P0 -n1 bash
To limit the number of concurrently executed scripts to 10:
$ ls my_folder/*.sh | xargs -P10 -n1 bash
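Piping ls output is fragile when file names contain spaces or newlines; a more robust variant, assuming GNU find and xargs, passes null-terminated names instead:
# Null-terminated file names survive spaces and other odd characters;
# -P10 again caps the number of scripts running at once
$ find my_folder -maxdepth 1 -name '*.sh' -print0 | xargs -0 -P10 -n1 bash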