|
...
With the useTmp option, the pipeline runner copies related data to /tmp on the compute node, and all file paths in the commands are automatically updated to point to each file's location in /tmp.
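As a rough sketch of what that path rewriting amounts to (the variable names, copy step, and rewrite logic here are illustrative, not rcbio internals):

```shell
#!/bin/bash
# Hypothetical sketch of the useTmp behavior: copy an input file to /tmp,
# then rewrite the command so its path points at the /tmp copy.
src=$(mktemp)                      # stand-in for an input file in the working directory
echo "read data" > "$src"
tmpdir=$(mktemp -d /tmp/job.XXXX)  # per-job scratch space on the compute node
cp "$src" "$tmpdir/"
newpath="$tmpdir/$(basename "$src")"
cmd="wc -l $src"                   # original command referencing the shared path
cmd="${cmd/"$src"/$newpath}"       # path rewritten to the /tmp location
echo "$cmd"                        # the command now reads from the /tmp copy
```

Reading from node-local /tmp avoids repeated traffic to the shared filesystem, which is why this option helps I/O-heavy steps.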
Sample output from the test run
...
```
cancelAllJobs
checkJobsSlurm flag/alljobs.jid
```
Re-run the pipeline
...
```
cd proper/directory
module load rcbio/1.1 and all/related/modules

# submit job with proper partition, time, number of cores and memory
sbatch --requeue --mail-type=ALL -p short -t 2:0:0 -c 2 --mem 2G /working/directory/flag/stepID.loopID.stepName.sh
```

Or:

```
runSingleJob "module load bowtie/1.2.2; bowtie -x /n/groups/shared_databases/bowtie_indexes/hg19 -p 2 -1 read1.fq -2 read2.fq --sam > out.bam" "sbatch -p short -t 1:0:0 -c 2 --mem 8G"
```
For details about the second option, see: Get more informative Slurm email notification and logs through rcbio/1.2
To run your own script as a Slurm pipeline
...
In case you wonder how it works, here is a simple example. For each step of each loop, the pipeline runner creates a file that looks like this (here named flag.sh):
```
#!/bin/bash
srun -n 1 bash -c "{ echo I am running...; hostname; otherCommands; } && touch flag.success"
sleep 5
export SLURM_TIME_FORMAT=relative
echo Job done. Summary:
sacct --format=JobID,Submit,Start,End,State,Partition,ReqTRES%30,CPUTime,MaxRSS,NodeList%30 --units=M -j $SLURM_JOBID
sendJobFinishEmail.sh flag
[ -f flag.success ] && exit 0 || exit 1
```
Then submit with:
```
sbatch -p short -t 10:0 -o flag.out -e flag.out flag.sh
```
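The flag.success touch file is what makes the pipeline re-runnable: the real command's exit status decides whether the file is created, and the script's final exit code reflects that. A minimal sketch of how the submitting side could use this convention (illustrative only; rcbio's actual bookkeeping is more involved):

```shell
#!/bin/bash
# Sketch (not rcbio's actual code): use the flag.success convention
# to decide whether a step must be re-run.
cd "$(mktemp -d)"        # empty scratch dir so the demo is self-contained
flag=flag.success
if [ -f "$flag" ]; then
    status=done
    echo "step already succeeded, skipping"
else
    status=rerun
    echo "step failed or never finished, resubmit it:"
    echo "sbatch -p short -t 10:0 -o flag.out -e flag.out flag.sh"
fi
```

Because the wrapper exits nonzero when flag.success is missing, Slurm records the job as FAILED, so both email notifications and `sacct` output show the true outcome of the step.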
...