Start an interactive job with a walltime of 2 hours and 2000 MB of memory:

srun --pty -p interactive -t 0-02:0:0 --mem 2000MB -n 1 /bin/bash

Create a working directory on /n/scratch and change into it. For example, for user abc123, the working directory would be:

mkdir /n/scratch/users/a/abc123/KallistoAndSleuth
cd /n/scratch/users/a/abc123/KallistoAndSleuth
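If you prefer not to type the username pieces by hand, a minimal bash sketch (assuming your O2 username is in $USER and the scratch layout shown above) is:

WORKDIR=/n/scratch/users/${USER:0:1}/$USER/KallistoAndSleuth    # first letter of username, then username
mkdir -p "$WORKDIR"                                             # -p: no error if the directory already exists
cd "$WORKDIR"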


Copy some test data by following the instructions on this page: Build Folder Structures From Sample Sheet for rcbio NGS Workflows
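Once the test data are in place, a quick way to check that the expected sample folders were created in the working directory (the exact layout depends on the sample sheet) is:

ls -R    # recursively list the sample folders and fastq files that were copied in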

Load the necessary modules: 

module load gcc/6.2.0 python/2.7.12 rcbio/1.3.3
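To confirm that the modules loaded correctly, you can list the currently loaded modules:

module list    # should include gcc/6.2.0, python/2.7.12 and rcbio/1.3.3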

Copy the example Kallisto and Sleuth bash script and the Sleuth R script:

cp /n/app/rcbio/1.3.3/bin/kallistoSleuth.sh /n/app/rcbio/1.3.3/bin/sleuth.r .


Now you can modify the options as needed. For example, if you have single-end data, you need to supply the estimated average fragment length and its standard deviation. Please refer to the Kallisto user manual if you have any questions.
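As a rough sketch only (the actual command inside kallistoSleuth.sh may look different), a single-end Kallisto quantification call needs the --single flag together with the fragment length and standard deviation; the index, output folder and fastq name below are placeholders:

kallisto quant -i BDGP6.idx --single -l 200 -s 20 -o sample1_out sample1.fastq.gz
# --single : treat the reads as single-end
# -l / -s  : estimated average fragment length and its standard deviation (required for single-end data)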

To edit the Kallisto and Sleuth bash script:

nano kallistoSleuth.sh

Then to edit the Sleuth R script:

nano sleuth.r


To test the pipeline, run the following command. Jobs will not be submitted to the scheduler.

runAsPipeline "kallistoSleuth.sh -i BDGP6" "sbatch -p short -t 10:0 -n 1" useTmp


# this is a test run

To run the pipeline:

runAsPipeline "kallistoSleuth.sh -i BDGP6" "sbatch -p short -t 10:0 -n 1" useTmp run 2>&1 | tee output.log

# notice that 'run 2>&1 | tee output.log' is added to the command
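Once the pipeline is submitted, you can follow the log and watch the jobs it created with standard Slurm commands, for example:

squeue -u $USER      # list your pending and running jobs
tail -f output.log   # follow the pipeline log captured by tee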


To understand how 'runAsPipeline' works, how to check output, and how to re-run the pipeline, please visit: Run Bash Script As Slurm Pipeline

Now you are ready to run an rcbio workflow.

To run the workflow on your own data instead, transfer the sample sheet to your local machine following this wiki page, modify the sample sheet, transfer it back to O2 under your account, and then return to the build folder structure step above.
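As a minimal sketch of the transfer step (the sample sheet name sampleInfo.txt and the transfer host are assumptions here; check the wiki page for the exact file name and host for your account), run on your local machine:

scp abc123@transfer.rc.hms.harvard.edu:/n/scratch/users/a/abc123/KallistoAndSleuth/sampleInfo.txt .    # pull the sample sheet down for editing
scp sampleInfo.txt abc123@transfer.rc.hms.harvard.edu:/n/scratch/users/a/abc123/KallistoAndSleuth/     # push the edited sheet back to O2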