Run Bash Script As Slurm Pipeline through rcbio/1.3.3
This page shows you how to run a regular bash script as a pipeline. The runAsPipeline script, accessible through the rcbio/1.3.3 module, converts an input bash script to a pipeline that easily submits jobs to the Slurm scheduler for you.
Features of the new pipeline:
- Each step is submitted as a cluster job using sbatch.
- Dependencies among jobs are arranged automatically.
- Email notifications are sent when each job fails or succeeds.
- If a job fails, all of its downstream jobs are automatically killed.
- When re-running the pipeline on the same data folder, if there are any unfinished jobs, the user is asked whether or not to kill them.
- When re-running the pipeline on the same data folder, the user is asked to confirm whether or not to re-run any job or step that finished successfully earlier.
- On re-run, if the script has not changed, runAsPipeline does not re-process the bash script and directly uses the old one.
- If the user has more than one Slurm account, adding -A or --account= to the command line makes all jobs use that Slurm account.
- When new input data is added and the workflow is re-run, the affected successfully finished jobs are automatically re-run.
Please read below for an example.
Log on to O2
If you need help connecting to O2, please review the Using Slurm Basic and the How to Login to O2 wiki pages.
From Windows, download and install MobaXterm for Windows (https://mobaxterm.mobatek.net/) to connect to o2.hms.harvard.edu, and make sure the port is set to the default value of 22.
From a Mac Terminal, use the ssh command, inserting your HMS Account instead of user123:
ssh user123@o2.hms.harvard.edu
Start an interactive job and create a working directory
# if you have multiple slurm accounts, you'll have to add in -A or --account=
srun --pty -p interactive -t 0-12:0:0 --mem 2000MB -c 1 /bin/bash
mkdir ~/testRunBashScriptAsSlurmPipeline
cd ~/testRunBashScriptAsSlurmPipeline
Load the pipeline-related modules
# This will setup the path and environment variables for the pipeline
module load rcbio/1.3.3
Build some testing data in the current folder
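For example, test data along these lines would work with the search example that follows (the file names and contents here are assumptions, not necessarily the exact test data used on the original page):
# create two small university text files, each containing a few names
printf "John\nMike\nNick\nJulia\n" > universityA.txt
printf "John\nMike\nNick\nJulia\n" > universityB.txt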
Take a look at the example files
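For instance, assuming the files created above:
# list and view the example data files
ls -l
cat universityA.txt universityB.txt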
The original bash script
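A minimal sketch of a script with the structure described below (the file names, search commands, and output names are assumptions; the line numbers quoted in the explanation refer to the original script rather than to this sketch):
#!/bin/bash
# loop through the university text files
for u in university*.txt; do
    # search for John and Mike in the background
    { grep -H John $u >> John.txt; grep -H Mike $u >> Mike.txt; } &
    # search for Nick and Julia in the background
    { grep -H Nick $u >> Nick.txt; grep -H Julia $u >> Julia.txt; } &
done
# wait for all searches to finish
wait
# merge the results into a single text file
cat John.txt Mike.txt Nick.txt Julia.txt > all.txt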
How does this bash script work?
There is a loop that goes through the two university text files (the for loop on line 8 above) to search for John and Mike (line 12 above), and then for Nick and Julia (line 14 above). After all searching is finished (line 16 above), the results are merged into a single text file (line 18 above). This means that the merge step (line 18 above) has to wait until the earlier two steps (lines 12 and 14 above) are finished. However, the runAsPipeline workflow builder can't read this script directly. We will need to create a modified bash script that adds parts explicitly telling the workflow builder the order in which the jobs need to run, among other things.
The modified bash script
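Again only a sketch is shown here, to illustrate the step annotations discussed below (the commands and file names are the same assumptions as above; the actual modified script also marks the start and end of the loop with rcbio's loop notation, which is omitted here, and the quoted line numbers refer to the original modified script):
#!/bin/bash
for u in university*.txt; do
    #@1,0,find1,u,sbatch -p short -c 1 -t 50:0
    grep -H John $u >> John.txt; grep -H Mike $u >> Mike.txt
    #@2,0,find2,u
    grep -H Nick $u >> Nick.txt; grep -H Julia $u >> Julia.txt
done
# the background '&' and 'wait' from the original script are no longer needed here:
# the #@ annotations tell the pipeline runner how the steps depend on each other
#@3,1.2,merge
cat John.txt Mike.txt Nick.txt Julia.txt > all.txt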
Notice that there are a few things added to the script here:
- Step 1 is denoted by #@1,0,find1,u,sbatch -p short -c 1 -t 50:0 (line 10 above), which means this is step 1, it depends on no other step, it is named find1, and file $u needs to be copied to the /tmp directory. The sbatch command at the end tells the pipeline runner which sbatch command to use to run this step.
- Step 2 is denoted by #@2,0,find2,u (line 13 above), which means this is step 2, it depends on no other step, it is named find2, and file $u needs to be copied to the /tmp directory. There is no sbatch command here, so the pipeline runner will use the default sbatch command from the command line (see below).
- Step 3 is denoted by #@3,1.2,merge (line 18 above), which means this is step 3, it depends on step 1 and step 2, and the step is named merge. There is no sbatch command here either, so the pipeline runner will use the default sbatch command from the command line (see below).
Notice that the format of a step annotation is #@stepID,dependIDs,stepName,reference,sbatchOptions. The reference field is optional; it allows the pipeline runner to copy data (a file or folder) to the local /tmp folder on the compute node to speed up the software. sbatchOptions is also optional; when it is missing, the pipeline runner will use the default sbatch command given on the command line (see below).
Here are two more examples:
#@4,1.3,map,,sbatch -p short -c 1 -t 50:0
means step 4 depends on step 1 and step 3, the step is named map, there is no reference data to copy, and this step is submitted with sbatch -p short -c 1 -t 50:0.
#@3,1.2,align,db1.db2
means step 3 depends on step 1 and step 2, the step is named align, $db1 and $db2 are reference data to be copied to /tmp, and the step is submitted with the default sbatch command (see below).
Test run the modified bash script as a pipeline
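A test run looks something like the following (a sketch: the modified script name and the default sbatch string below are assumptions based on this example, and useTmp can be replaced with noTmp):
runAsPipeline bash_script_v2.sh "sbatch -p short -t 10:0 -c 1" useTmp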
This command will generate a new bash script of the form slurmPipeLine.checksum.sh in the flag folder. The checksum portion of the filename is an MD5 hash that represents the file contents. We include the checksum in the filename to detect when the script contents have been updated.
This runAsPipeline command runs a test of the script, meaning it does not really submit jobs. It only shows fake job IDs like 1234 for each step. If you were to append run at the end of the command, the pipeline would actually be submitted to the Slurm scheduler.
Ideally, with useTmp, the software should run faster using local /tmp disk space for databases and references than using network storage. For this small query, the difference is small, and it may even be slower with local /tmp. If you don't need /tmp, you can use noTmp.
With useTmp, the pipeline runner copies the related data to /tmp, and all file paths are automatically updated to reflect each file's location in /tmp.
Sample output from the test run
Note that only step 1 used -t 50:0, and all other steps used the default -t 10:0. The default walltime limit was set in the runAsPipeline command, and the walltime parameter for step 1 was set in the bash_script_v2.sh script.
Run the modified bash script as a pipeline
Thus far in the example, we have not actually submitted any jobs to the scheduler. To submit the pipeline, you will need to append the run parameter to the command. If run is not specified, test mode is used, which does not submit jobs and gives the placeholder 1234 for job IDs in the command's output.
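For example, the same sketch as the test run above, with run appended:
runAsPipeline bash_script_v2.sh "sbatch -p short -t 10:0 -c 1" useTmp run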
Monitoring the jobs
You can use the command:
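For example, a generic Slurm status check for your own jobs (an assumption; the original page may show a different command, and O2 may offer its own wrappers):
squeue -u $USER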
This shows the job status (running, pending, etc.). You also get two emails for each step: one at the start of the step and one at the end of the step.
Successful job email
The key elements are time and memory used.
Check job logs
You can use the command:
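For example, assuming the flag folder that the pipeline runner creates in the working directory:
ls -l flag/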
This command lists all the logs created by the pipeline runner. *.sh files are the Slurm scripts for each step, *.out files are the output files for each step, *.success files mean the step's job finished successfully, and *.failed files mean the step's job failed.
Cancel all jobs
You can use the command to cancel running and pending jobs:
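For example, one generic approach (an assumption: this cancels every running and pending Slurm job you own, not only the pipeline's jobs):
scancel -u $USER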
What happens if there is some error?
You can re-run this command in the same folder. We will delete an input file to see what happens.
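For example, assuming the test files and the command sketched above:
rm universityB.txt    # remove one input file so the step that uses it fails
runAsPipeline bash_script_v2.sh "sbatch -p short -t 10:0 -c 1" useTmp run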
This command will check whether the earlier run has finished. If not, it asks the user whether to kill the running jobs, and then asks whether to re-run the steps that finished successfully earlier. Type 'y' to re-run a step, or just press the 'Enter' key to skip it.
Failed job email
The key element here is the error message.
Notice here that the step 2 job failed because we deleted universityB.txt, and its downstream step 3 job is automatically cancelled, so we do not get an email from step 3.
Fix the error and re-run the pipeline
You can re-run this command in the same folder.
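For example, restoring the deleted file the same way the assumed test data was created above (still a sketch):
printf "John\nMike\nNick\nJulia\n" > universityB.txt    # restore the deleted input file
runAsPipeline bash_script_v2.sh "sbatch -p short -t 10:0 -c 1" useTmp run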
This command will automatically check whether the earlier run has finished. If it has not, the script asks the user whether to kill the running jobs, and then whether to re-run the steps that finished successfully earlier. Type 'y' to re-run a step, or just press the 'Enter' key to skip it.
Notice here that step 3 runs by default, without prompting the user for permission.
What happens if we add more input data and re-run the pipeline?
You can re-run this command in the same folder.
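For example, adding one more university file following the same assumed pattern as the earlier test data:
printf "John\nMike\nNick\nJulia\n" > universityC.txt    # new input data
runAsPipeline bash_script_v2.sh "sbatch -p short -t 10:0 -c 1" useTmp run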
This command will check whether the earlier run has finished and, if not, will ask the user whether to kill any running jobs. Next, it will ask the user whether to re-run any steps that finished successfully earlier. Type 'y' to re-run a step, or just press the 'Enter' key to skip it.
For the new data, RCBio will submit two jobs. Step 3 will also still automatically re-run.
Re-run a single job manually
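One way to do this is to re-submit the step's generated Slurm script yourself (a sketch; stepName.sh is a placeholder for one of the *.sh files in the flag folder described above):
ls flag/                  # find the *.sh script for the step you want to re-run
sbatch flag/stepName.sh   # replace stepName.sh with the actual script name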
For details about the second option, see: Get more informative Slurm email notification and logs through rcbio/1.3
To run your own script as a Slurm pipeline
If you have a bash script with multiple steps and you wish to run it as a Slurm pipeline, here is how you can do that:
1. Modify your script to add the notation marking the start and end of any loops, and the start of any step that you want to submit as an sbatch job.
2. Use runAsPipeline with your modified bash script, as detailed above.
How does the runAsPipeline RCBio pipeline runner work?
In case you wonder how it works, here is a simple example to explain.
For each step in each loop iteration, the pipeline runner creates a file that looks like the one below (here it is named flag.sh):
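A rough sketch of such a generated file, based on the description that follows (every path, flag file name, and sacct/email argument here is an assumption):
#!/bin/bash
# run the step's command under srun so its exit status can be tracked
srun -n 1 bash -c "grep -H John universityA.txt >> John.txt"

if [ $? -eq 0 ]; then
    touch flag/find1.success        # assumed name of the success flag file
else
    touch flag/find1.failed         # assumed name of the failure flag file
fi

# collect statistics for the job, then send the completion email
sacct -j "$SLURM_JOB_ID" --format=JobID,State,Elapsed,MaxRSS
sh /n/app/rcbio/1.3.3/bin/sendJobFinishEmail.sh "$SLURM_JOB_ID"   # arguments are assumed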
Your analysis commands will be wrapped in an srun so we can monitor whether they completed successfully. If your commands worked (meaning they exited with status 0), then we will create the success file. Next, we will run sacct to get statistics for the job step and send a job-completion email with sendJobFinishEmail.sh. The sendJobFinishEmail.sh script is available in /n/app/rcbio/1.3.3/bin/, if you are interested in the contents of that script.
Then the job script will be submitted with:
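Something along these lines (a sketch; the dependency job ID, partition, walltime, and output paths are assumptions, and steps with no upstream dependencies are submitted without the --dependency option):
sbatch -p short -t 10:0 -c 1 --dependency=afterok:12345 -o flag/flag.out -e flag/flag.out flag/flag.sh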
Let us know if you have any questions by emailing rchelp@hms.harvard.edu. Please include your working folder and the commands used in your email. Any comments and suggestions are welcome!
We have additional example ready-to-run workflows available, which may be of interest to you.