Run Bash Script As Slurm Pipeline through rcbio/1.0

 

This page shows you how to run a regular bash script as a pipeline. The runAsPipeline script, accessible through the rcbio/1.0 module, converts an input bash script into a pipeline that submits jobs to the Slurm scheduler for you.

Features of the new pipeline:

  • Submit each step as a cluster job using sbatch.

  • Automatically arrange dependencies among jobs.

  • Email notifications are sent when each job fails or succeeds.

  • If a job fails, all of its downstream jobs are automatically killed.

  • When re-running the pipeline on the same data folder, if there are any unfinished jobs, the user is asked whether to kill them.

  • When re-running the pipeline on the same data folder, the user is asked to confirm whether to re-run any steps that finished successfully earlier.

Please read below for an example.

Log on to O2

If you need help connecting to O2, please review the Using Slurm Basic wiki page.

From Windows, use the graphical PuTTY program to connect to o2.hms.harvard.edu and make sure the port is set to the default value of 22.

From a Mac Terminal, use the ssh command, inserting your eCommons ID instead of user123:

ssh user123@o2.hms.harvard.edu


Start an interactive job and create a working folder

For example, for user abc123, start an interactive session, then create and enter the working folder:

srun --pty -p interactive -t 0-12:0:0 --mem 2000MB -n 1 /bin/bash
mkdir /n//users/a/abc123/testRunBashScriptAsSlurmPipeline
cd /n//users/a/abc123/testRunBashScriptAsSlurmPipeline



Load the pipeline-related modules

# This will set up the path and environment variables for the pipeline
module load rcbio/1.0


Build some testing data in the current folder
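A minimal sketch of creating two small files to search; the file names and the lists of names in them are illustrative assumptions (the searches below look for John, Mike, Nick, and Julia):

# create two small text files containing a few names, one name per line
printf "John\nMike\nNick\nJulia\nBob\n" > universityA.txt
printf "John\nMike\nNick\nJulia\nAlice\n" > universityB.txt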


Take a look at the example files
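For example, assuming the two files created above:

ls -l
cat universityA.txt universityB.txt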


The original bash script


How does this bash script work?

There is a loop that goes through the two university text files (for loop in line 7 above) to search for John and Mike (line 11 above), and then for Nick and Julia (line 13 above). After all of the searching is finished (line 14 above), the results are merged into a single text file (line 16 above). This means that the merge step (line 16 above) has to wait until the earlier two steps (lines 11 and 13 above) are finished. However, the runAsPipeline workflow builder can't read this script directly. We will need to create a modified bash script that adds parts that explicitly tell the workflow builder the order in which the jobs need to run, among other things.
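Based on the description above, a rough sketch of such a script (the file names and search commands are illustrative assumptions, and the sketch's line positions may not match the line numbers cited above):

#!/bin/bash

for i in A B; do
    u=university$i.txt
    # search for John and Mike
    grep -H John $u >> John.txt; grep -H Mike $u >> Mike.txt
    # search for Nick and Julia
    grep -H Nick $u >> Nick.txt; grep -H Julia $u >> Julia.txt
done

# merge all of the results into a single text file
cat John.txt Mike.txt Nick.txt Julia.txt > all.txt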

The modified bash script
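A rough sketch of the modified script, based on the annotations described in the list below (again, the commands and line positions are illustrative assumptions):

#!/bin/bash

#loopStart,i
for i in A B; do
    u=university$i.txt

    #@1,0,find1,u
    grep -H John $u >> John.txt; grep -H Mike $u >> Mike.txt

    #@2,0,find2,u,sbatch -p short -n 1 -t 50:0
    grep -H Nick $u >> Nick.txt; grep -H Julia $u >> Julia.txt
#loopEnd
done

#@3,1.2,merge
cat John.txt Mike.txt Nick.txt Julia.txt > all.txt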



Notice that there are a few things added to the script here:

  • Before the loop starts, #loopStart,i was added (line 7 above). Here the variable i is the looping variable, which will be recognized by the pipeline runner.

  • Before the loop ends, #loopEnd was added (line 17 above). This will be recognized by the pipeline runner.

  • Step 1 is denoted by #@1,0,find1,u (line 11 above), which means this is step 1, it depends on no other step, it is named find1, and file $u needs to be copied to the /tmp directory. No sbatch options are given here, so the pipeline runner will use the default sbatch command (see below).

  • Step 2 is denoted by #@2,0,find2,u,sbatch -p short -n 1 -t 50:0 (line 14 above), which means this is step 2, it depends on no other step, it is named find2, and file $u needs to be copied to the /tmp directory. The trailing sbatch command tells the pipeline runner which sbatch options to use to run this step.

  • Step 3 is denoted by #@3,1.2,merge, which means that this is step 3, it depends on step 1 and step 2, and it is named merge. Notice that there is no sbatch command here, so the pipeline runner will use the default sbatch command (see below).

Notice the format of the step annotation is #@stepID,dependIDs,stepName,reference,sbatchOptions. The reference field is optional; it allows the pipeline runner to copy data (a file or folder) to the local /tmp folder on the compute node to speed up the software. sbatchOptions is also optional, and when it is missing, the pipeline runner will use the default sbatch command given on the command line (see below).

Here are two more examples:

#@4,1.3,map,,sbatch -p short -n 1 -t 50:0   means step 4 depends on step 1 and step 3, is named map, has no reference data to copy, and runs with sbatch -p short -n 1 -t 50:0.

#@3,1.2,align,db1.db2   means step 3 depends on step 1 and step 2, is named align, copies $db1 and $db2 to /tmp as reference data, and runs with the default sbatch command (see below).

Test run the modified bash script as a pipeline
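A sketch of the invocation, assuming the modified script is saved as bash_script_v2.sh; the default sbatch options and the useTmp flag are described below, and the exact argument order is an assumption:

runAsPipeline bash_script_v2.sh "sbatch -p short -t 10:0 -n 1" useTmp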


This command generates a new bash script named slurmPipeLine.201801100946.sh in the flag folder (201801100946 is the timestamp at which runAsPipeline was invoked) and then test runs it, meaning no jobs are actually submitted; a fake job id, 123, is created for each step instead. If you were to append run at the end of the command, the pipeline would actually be submitted to the Slurm scheduler.

Ideally, with useTmp, the software should run faster using local /tmp disk space for the database/reference files than using network storage. For this small query the difference is small, and using local /tmp may even be slower. If you don't need /tmp, you can use noTmp.

Sample output from the test run

Note that only step 2 used -t 50:0, and all other steps used the default -t 10:0. The default walltime limit was set in the runAsPipeline command, and the walltime parameter for step 2 was set in the bash_script_v2.sh script.



Run the modified bash script as a pipeline

Thus far in the example, we have not actually submitted any jobs to the scheduler. To submit the pipeline, you will need to append the run parameter to the command. If run is not specified, test mode is used, which does not submit jobs and shows the placeholder job id 123 in the command's output.
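For example (a sketch, using the same assumed invocation as above with run appended):

runAsPipeline bash_script_v2.sh "sbatch -p short -t 10:0 -n 1" useTmp run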

Monitoring the jobs

To see the job status (running, pending, etc.), you can use a command such as:
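# for example, the standard Slurm squeue; any job-status command will do
squeue -u $USER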

You also get two emails for each step: one at the start of the step and one at the end of the step.

Check job logs

You can list the log files with a command such as:
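# for example, listing the flag folder where the pipeline runner writes its files (folder name taken from the test-run description above)
ls -l flag/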

This lists all the logs created by the pipeline runner: *.sh files are the Slurm scripts for each step, *.out files are the output files for each step, *.success files mean the job for that step finished successfully, and *.failed files mean the job for that step failed.


Re-run the pipeline

You can re-run this command in the same folder:
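# a sketch, assuming the same script name and options used above; the exact invocation may differ
runAsPipeline bash_script_v2.sh "sbatch -p short -t 10:0 -n 1" useTmp run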

This command checks whether the earlier run has finished. If not, it asks the user whether to kill the still-running jobs, and then asks whether to re-run the steps that already finished successfully. Type 'y' to re-run a step; press the 'enter' key directly to skip re-running it.

To run your own script as a Slurm pipeline

If you have a bash script with multiple steps and you wish to run it as a Slurm pipeline, modify your old script to add the notation that marks the start and end of any loops, and the start of any step that you want to submit as an sbatch job. Then you can use runAsPipeline with your modified bash script, as detailed above.

How does it work?

In case you wonder how it works, here is a simple example to explain it:

For each step in each loop iteration, the pipeline runner creates a file that looks like this (here it is named flag.sh):
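A rough sketch of what such a wrapper might contain; the step command, the flag file names, and the email helper's argument are illustrative assumptions based on the description below:

#!/bin/bash
# run the actual command for this step, e.g. the grep for step find1
grep -H John universityA.txt >> John.txt; grep -H Mike universityA.txt >> Mike.txt

# record success or failure in a flag file, then send the notification email
if [ $? -eq 0 ]; then
    touch flag.success
else
    touch flag.failed
fi
/n/app/rcbio/1.0/bin/sendJobFinishEmail.sh flag    # the helper's argument here is an assumption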

Then submit with: 
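A sketch of the kind of submission the runner performs; the dependency job id, output file names, and sbatch options shown here are illustrative assumptions taken from the step annotations above:

sbatch -p short -n 1 -t 50:0 --dependency=afterok:12345 -o flag.out -e flag.out flag.sh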

sendJobFinishEmail.sh is in /n/app/rcbio/1.0/bin 
There is a bug in the script; please change:
[ -f $flag.failed ] 
to: 
[ ! -f $flag.success ] 

Let us know if you have any questions. Please include your working folder and the commands used in your email. Any comments and suggestions are welcome!