NOTICE: FULL O2 Cluster Outage, January 3 - January 10
O2 will be completely offline for a planned HMS IT data center relocation from Friday, Jan 3, 6:00 PM, through Friday, Jan 10.
- on Jan 3 (5:30-6:00 PM): O2 login access will be turned off.
- on Jan 3 (6:00 PM): O2 systems will start being powered off.
This project will relocate existing services, consolidate servers, reduce power consumption, and decommission outdated hardware to improve efficiency, enhance resiliency, and lower costs.
Specifically:
- The O2 Cluster will be completely offline, including O2 Portal.
- All data on O2 will be inaccessible.
- Any jobs still pending when the outage begins will need to be resubmitted after O2 is back online.
- Websites on O2 will be completely offline, including all web content.
More details at: https://harvardmed.atlassian.net/l/cp/1BVpyGqm & https://it.hms.harvard.edu/news/upcoming-data-center-relocation
Running Singularity Containers in O2
Do you need to install a container? Please contact rchelp@hms.harvard.edu, provide all relevant information about your container, and we will install it for you.
Overview
We are running a pilot project to support Singularity containers in O2. Singularity allows users to execute software containers within regular O2 jobs, and it is fully compatible with existing Docker images.
Common Install Formats
Typically, requests will contain one of the following:
- A Docker or Singularity image file (.img, .sif, etc.)
- Definition file(s) for installing containers (<spec>), or a link to a container repository. These can be built using the commands:
singularity build <container-name>.sif docker://path/to/containername
#or
singularity build <container-name>.sif <spec>
- nf-core modules: a path to the directory containing nf-core images created from a command like:
#Run this, replacing <package-name> with something like 'rnaseq'
nf-core download -x none --container-system singularity --parallel-downloads 8 nf-core/<package-name>

## Prompt Hints ##
#Best practice is to select a numbered version release as opposed to "dev", for example 3.14.0 for nf-core/rnaseq
? Select release / branch: 3.14.0 [release]
#Select "no" for this step, as there are no institutional configs for HMS at this time
? Include the nf-core's default institutional configuration files into the download? No
#This step is optional: if a path isn't ready, select "no"; otherwise this path may be set to your singularity container path described below (/n/app/singularity/containers/<hmsid>, typically)
? Nextflow and nf-core can use an environment variable called $NXF_SINGULARITY_CACHEDIR that is a path to a directory where remote Singularity images are stored. This allows downloaded images to be cached in a central location.
? Define $NXF_SINGULARITY_CACHEDIR for a shared Singularity image download folder? [y/n]: n

After the download has finished, the Singularity containers should be one directory down from the download folder. For example:
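(The layout below is a hypothetical sketch; the exact folder and image names depend on the pipeline and release you selected.)
nf-core-rnaseq_3.14.0/          # download folder created by nf-core download
├── 3_14_0/                     # pipeline code for the selected release
└── singularity-images/         # the downloaded Singularity container images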
How to run Singularity containers in O2
Once a container has been approved and moved under the path /n/app/singularity/containers, it can be used in O2 within any interactive (srun) or batch (sbatch) Slurm job.
The singularity executable is not available on login nodes; to execute your container you must be running an interactive or batch Slurm job.
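For instance, an interactive session can be requested with srun before running the container (a sketch; the partition, time, and memory values below are placeholders to adjust for your work):
srun --pty -p interactive -t 0-1:00 --mem 2G /bin/bash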
The most common use of Singularity containers is to start a shell within the container using the command singularity shell /path/to/container_file, as in the example below:
compute-a-17-80:~ singularity shell /n/app/singularity/containers/debian10.sif
Singularity> cat /etc/os-release |head -1
PRETTY_NAME="Debian GNU/Linux 10 (buster)"
Singularity>
Alternatively, a specific command can be executed inside the container with singularity exec /path/to/container_file command_to_run, followed by any required flags or inputs, for example:
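(A sketch of such a command; the ubuntu.sif container file name is illustrative.)
compute-a-17-80:~ singularity exec /n/app/singularity/containers/ubuntu.sif cat /etc/os-release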
In the above example the cat command is executed inside the ubuntu container.
Singularity containers can also be executed within a standard O2 batch job by calling the desired singularity command inside the sbatch script, for example:
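A minimal sketch of such a script (the partition, time, resource values, and container path are illustrative placeholders):
#!/bin/bash
#SBATCH -p short
#SBATCH -t 0-00:10
#SBATCH -c 1
#SBATCH --mem=1G

# Run the desired command inside the container as part of the batch job
singularity exec /n/app/singularity/containers/debian10.sif cat /etc/os-release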
Note 1:
By default only /tmp and /home/$USER are available inside the singularity container.
Any other filesystems available on O2 compute nodes can be mounted inside the Singularity container using the flag -B, for example:
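(A minimal sketch, mounting /n/scratch; the container path is illustrative.)
singularity shell -B /n/scratch /n/app/singularity/containers/debian10.sif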
Access permissions for those filesystems are preserved inside the container.
Note 2:
By default, not all environment variables might be ported inside the Singularity container. If a variable defined outside Singularity needs to be ported inside the container and is not available by default, it can be pre-set outside the container with the prefix SINGULARITYENV_. For example, the variable FOO can be ported inside the Singularity container by presetting it as SINGULARITYENV_FOO.
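A minimal sketch (the container path is illustrative):
# Setting SINGULARITYENV_FOO on the host makes FOO available inside the container
export SINGULARITYENV_FOO="some value"
singularity exec /n/app/singularity/containers/debian10.sif printenv FOO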
Note 3:
If you plan to use one or more GPU cards inside the container, you need to submit the O2 job to a partition that supports GPU computing and add the flag --nv to your singularity command, for example:
singularity exec --nv -B /n/scratch /n/app/singularity/containers/$USER/your_container.sif command_to_run
Additional documentation about the singularity command can be found on the official Singularity webpage.