Pass command line arguments via sbatch

Solution 1:

I thought I'd offer some insight because I was also looking for the replacement for the -v option in qsub, which for sbatch can be accomplished with the --export option. I found a nice site here that shows a list of conversions from Torque to Slurm, and it made the transition much smoother.

You can export the environment variable ahead of time in your shell:

$ export var_name='1'
$ sbatch --export=var_name -D `pwd` exampleJob.sh

Or define it directly within the sbatch command just like qsub allowed:

$ sbatch --export=var_name='1' -D `pwd` exampleJob.sh

Note that sbatch options must come before the script name; anything placed after exampleJob.sh is treated as an argument to the script rather than parsed by sbatch. Whether the exported variable is visible in the #SBATCH directives of exampleJob.sh is another question (it is not expanded there, as Solution 2 explains), but at runtime this should give the same functionality found in Torque.
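
For reference, a minimal sketch of what exampleJob.sh could look like under this approach (the script name comes from the question; the body is illustrative):

#!/bin/bash
#SBATCH -J example_job
# A variable passed via --export is present in the job's runtime
# environment, so the commands below can read it:
echo "var_name is: $var_name"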

Solution 2:

Using a wrapper script is more convenient. I found this solution in this thread.

The basic problem is that the #SBATCH directives are seen as comments by the shell, so you can't use the passed arguments in them. Instead, you can use a here document to feed the batch script to sbatch after the arguments have been substituted.

For your question, you can replace the shell script with this wrapper:

#!/bin/bash
# The here document below is expanded by this (submitting) shell,
# so $1 is substituted before sbatch ever parses the script.
sbatch <<EOT
#!/bin/bash

#SBATCH -o outFile$1.txt
#SBATCH -e errFile$1.txt

hostname

exit 0
EOT

You then run the wrapper like this:

bash [script_name].sh [suffix]

The standard output and error streams will be saved to outFile[suffix].txt and errFile[suffix].txt, respectively.
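
The same substitution works in the body of the job, so the wrapper can forward further arguments into the job's commands as well. A minimal sketch, assuming a hypothetical wrapper named submit.sh:

#!/bin/bash
# submit.sh: $1 names the output files, $2 is forwarded to the job.
# Both are expanded by this shell before sbatch sees the script
# (values containing quote characters would need extra escaping).
sbatch <<EOT
#!/bin/bash

#SBATCH -o outFile$1.txt
#SBATCH -e errFile$1.txt

echo "running with extra argument: $2"

exit 0
EOT

Invoked as bash submit.sh run1 hello, the submitted job writes to outFilerun1.txt and echoes "running with extra argument: hello".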

Solution 3:

If you pass the relevant options on the command line, you can bypass the issue of command line arguments not being expandable inside the batch script's #SBATCH directives. For instance, at the command line:

var1="my_error_file.txt"
var2="my_output_file.txt"
sbatch --error="$var1" --output="$var2" batch_script.sh
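
Note that sbatch also passes anything placed after the script name to the script itself as positional parameters at runtime (just not inside the #SBATCH directives). A small sketch, where batch_script.sh and some_input are illustrative:

sbatch --error="$var1" --output="$var2" batch_script.sh some_input

and inside batch_script.sh:

#!/bin/bash
# $1 holds "some_input" when the job runs; it cannot be used
# in #SBATCH lines, but it can be used in the job's commands.
echo "got argument: $1"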