
Start an interactive job with a walltime of 2 hours and 2000 MB of memory.

srun --pty -p interactive -t 0-02:0:0 --mem 2000MB -n 1 /bin/bash
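
Once the prompt returns, it can be worth confirming that the shell is actually running on a compute node inside a Slurm allocation before doing any work. The commands below are standard Linux/Slurm tools, not part of rcbio:

hostname             # should print a compute node name, not the login node
echo $SLURM_JOB_ID   # prints the job ID of the interactive allocation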

Create a working directory on scratch and change into the newly created directory. For example, for user abc123, the working directory would be:

mkdir /n/scratch3/users/a/abc123/rsem
cd /n/scratch3/users/a/abc123/rsem

Load the modules and set up the path, then copy some test data. For details on using your own data, please visit this page: Build Folder Structures From Sample Sheet for rcbio NGS Workflows

module load gcc/6.2.0 python/2.7.12 rcbio/1.3.3

cp /n/shared_db/misc/rcbio/data/fruitFlyFastq/sampleSheet.xlsx . 

buildSampleFoldersFromSampleSheet.py sampleSheet.xlsx
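
To check what buildSampleFoldersFromSampleSheet.py created, a quick listing of the new directories is enough. The exact layout depends on the samples listed in your sample sheet, so treat the expected output as a rough guide rather than a guarantee:

find . -maxdepth 2 -type d   # list the sample folders created from the sample sheet
ls -l */                     # show what was placed inside each top-level folder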

Copy the example rsemBowtie2 bash script:

cp /n/app/rcbio/1.3.3/bin/rsemBowtie2.sh .

Now you can modify the options as needed. Please refer to the RSEM documentation at http://deweylab.github.io/RSEM/ if you have any questions.

To edit the Bowtie2 and RSEM bash script:

nano rsemBowtie2.sh

To test the pipeline, run the following command. In this mode, jobs will not be submitted to the scheduler.

runAsPipeline "rsemBowtie2.sh -r mm10" "sbatch -p short --mem 6G -t 2:0:0 -n 1" useTmp

# this is a test run

To run the pipeline:

runAsPipeline "rsemBowtie2.sh -r mm10" "sbatch -p short --mem 6G -t 2:0:0 -n 1" useTmp run 2>&1 | tee output.log

# notice that 'run 2>&1 | tee output.log' is added to the command here

To understand how 'runAsPipeline' works, how to check the output, and how to re-run the pipeline, please visit: Run Bash Script As Slurm Pipeline
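
While the pipeline is running, the submitted jobs can be monitored with standard Slurm commands and by following the log captured by tee; none of these commands are specific to rcbio:

squeue -u $USER      # list your pending and running jobs
tail -f output.log   # follow the pipeline log as it is written
sacct -j <jobID>     # after a job finishes, check its exit state and resource usage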

Now you are ready to run an rcbio workflow.

To run the workflow on your own data instead, transfer the sample sheet to your local machine following this wiki page and modify it there. Then transfer it back to O2 under your account and return to the build folder structure step above.
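
As a rough sketch of the transfer step (the transfer hostname and paths below are assumptions for user abc123; follow the linked wiki page for the correct server and your actual scratch path):

# On your local machine: copy the sample sheet down from O2, edit it, then copy it back
scp abc123@transfer.rc.hms.harvard.edu:/n/scratch3/users/a/abc123/rsem/sampleSheet.xlsx .
scp sampleSheet.xlsx abc123@transfer.rc.hms.harvard.edu:/n/scratch3/users/a/abc123/rsem/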
