Warning: Deprecated Page

This documentation was written in 2016 specifically for users transitioning from our previous cluster, Orchestra, to the then-new O2 cluster. Since Orchestra was retired in March 2018, we strongly recommend that any new O2 user review the "Using Slurm Basic" page instead of this one. Because Orchestra used the LSF scheduler, this page details the parallels between LSF and Slurm, the scheduler O2 uses. If you have no experience with LSF, this page will be confusing! This page is no longer being updated.


The Orchestra cluster used the LSF scheduler. LSF takes bsub commands and dispatches jobs to cluster nodes.
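As an illustration of the parallel, here is a hypothetical LSF submission alongside a rough Slurm equivalent (the script name, queue/partition names, and resource values are placeholders; adjust them for your workload):

```shell
# LSF (Orchestra): 4-core job on the "short" queue, 12-hour limit, output to out.txt
bsub -q short -n 4 -W 12:00 -o out.txt ./myscript.sh

# Slurm (O2): roughly equivalent submission; note -q becomes -p (partition)
# and wall time uses HH:MM:SS
sbatch -p short -n 4 -t 12:00:00 -o out.txt ./myscript.sh
```

The general pattern: `bsub` maps to `sbatch`, LSF queues map to Slurm partitions, and most per-job resource flags have a direct counterpart.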

...

Note: if you have only logged in to Orchestra before, you need a separate O2 account. Contact Research Computing to get one.

  • ssh login to the hostname: o2.hms.harvard.edu
  • Use PuTTY on Windows or Terminal on Mac or whatever method you used for Orchestra. You will land on a machine named something like login01login02, etc.
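For example, from a Mac or Linux terminal (replace ecommons_id with your own eCommons ID):

```shell
# Connect to the O2 login host; you will be prompted for your eCommons password
ssh ecommons_id@o2.hms.harvard.edu
```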

...

Beta test note: you need to run module load gcc/6.2.0 to see many of the available bioinformatics modules. module spider will list all of the available modules, even if you do not have gcc/6.2.0 loaded.
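A typical session on a login node might look like the following sketch (module names other than gcc/6.2.0 are illustrative):

```shell
# Load the GCC toolchain first so dependent bioinformatics modules become visible
module load gcc/6.2.0

# List the modules visible with the current toolchain loaded
module avail

# List every module on the cluster, regardless of what is currently loaded
module spider
```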

What are nodes like?

The majority of nodes on O2 (as of January 2017) have 32 cores and 256 GB RAM each.

...

Though you can log in to O2 login nodes with your eCommons authentication credentials, in order to submit jobs to the SLURM scheduler, your eCommons ID needs to be assigned to a SLURM account. To check whether this has already been done, please run the command:

...

In order to associate your eCommons ID with a SLURM account, and thereby be permitted to submit jobs to the SLURM scheduler (and avoid such errors), please submit a request to us using the Account Request form at https://rc.hms.harvard.edu/#cluster.

What if I am using both Orchestra and O2?

...

You don't have to use a separate script: you can use the --wrap option to run a single command. However, we discourage running jobs this way because they are harder to troubleshoot (SLURM job accounting does not retain commands submitted with --wrap), and certain complex commands (such as those that include piping with | ) may not be interpreted properly.
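To illustrate the difference (the partition, time limit, and command here are placeholders):

```shell
# Discouraged: submit a single command with --wrap; the command itself
# is not retained in SLURM job accounting, making failures harder to trace
sbatch -p short -t 0:10:00 --wrap="gzip bigfile.fastq"

# Preferred: put the command in a batch script, which is recorded and reusable
cat > gzip_job.sh <<'EOF'
#!/bin/bash
#SBATCH -p short
#SBATCH -t 0:10:00
gzip bigfile.fastq
EOF
sbatch gzip_job.sh
```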

...