Running Singularity Containers in O2

Do you need to install a container? Try our self-install tool, or generate container images using the steps found in Common Install Formats and share the resulting path with rchelp@hms.harvard.edu. We will install the images in /n/app/singularity/containers.

Overview

We are running a pilot project to support Singularity containers in O2. Singularity allows users to execute software containers within regular O2 jobs and is fully compatible with existing Docker images.

Common Install Formats

Typically, requests will contain one of the following:

  • A Docker or Singularity image file (.img, .sif, etc.)

  • Definition file(s) for installing containers (<spec>), or a link to a container repository. These can be built using the following commands in an interactive session:

    singularity build <container-name>.sif docker://path/to/containername
    # or
    singularity build <container-name>.sif <spec>
  • nf-core modules: a path to the directory containing nf-core images created from a command like:

    # Run this, replacing <package-name> with something like 'rnaseq'
    nf-core download -x none --container-system singularity --parallel-downloads 8 nf-core/<package-name>

    ## Prompt Hints ##
    # Best practice is to select a numbered version release as opposed to "dev", for example 3.14.0 for nf-core/rnaseq
    ? Select release / branch: 3.14.0 [release]
    # Select "no" for this step, as there are no institutional configs for HMS at this time
    ? Include the nf-core's default institutional configuration files into the download? No
    # This step is optional: if a path isn't ready, select "no"; otherwise this path may be set to your
    # Singularity container path described below (/n/app/singularity/containers/<hmsid>, typically)
    ? Nextflow and nf-core can use an environment variable called $NXF_SINGULARITY_CACHEDIR that is a path to a directory where remote Singularity images are stored. This allows
    ? downloaded images to be cached in a central location.
    ? Define $NXF_SINGULARITY_CACHEDIR for a shared Singularity image download folder? [y/n]: n

    After the download has finished, the Singularity containers should be one directory down from the download folder.
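    For example, with a hypothetical download of nf-core/rnaseq 3.14.0, the layout might look like the sketch below (folder and image names are illustrative and depend on the pipeline and nf-core version):

    ls nf-core-rnaseq_3.14.0/
    3_14_0  singularity-images

    ls nf-core-rnaseq_3.14.0/singularity-images/ | head -2
    depot.galaxyproject.org-singularity-fastqc-0.12.1--hdfd78af_0.img
    depot.galaxyproject.org-singularity-multiqc-1.21--pyhdfd78af_0.img

    The path to share with rchelp@hms.harvard.edu would then be the singularity-images directory.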

How to run Singularity containers in O2

Once a container has been approved and moved under the path /n/app/singularity/containers, it can be used in O2 within any interactive (srun) or batch (sbatch) Slurm job.

The singularity executable is not available on login nodes; to execute your container you must be running an interactive or batch Slurm job.
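For example, a minimal way to get an interactive session is shown in the sketch below (partition name, time, and memory follow common O2 usage; adjust them to your needs):

# Request a 2-hour interactive session on a compute node
srun --pty -p interactive -t 0-2:00 --mem 2G /bin/bash

# Once on the compute node, the singularity command is available
singularity --version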



The most common use of Singularity containers is to start a shell within the container using the command singularity shell /path/to/container_file, as in the example below:

compute-a-17-80:~ singularity shell /n/app/singularity/containers/debian10.sif
Singularity> cat /etc/os-release | head -1
PRETTY_NAME="Debian GNU/Linux 10 (buster)"
Singularity>



or to execute a specific command inside the container with singularity exec /path/to/container_file command_to_run, followed by any required flags or inputs, for example:

compute-a-16-21:~ singularity exec /n/app/singularity/containers/debian10.sif cat /etc/os-release | head -1
PRETTY_NAME="Debian GNU/Linux 10 (buster)"

In the above example the cat command is executed inside the Debian container.

Singularity containers can also be executed within a standard O2 batch job by calling the desired singularity command inside the sbatch script, for example:

#!/bin/bash
#SBATCH -p short
#SBATCH -t 2:00:00
#SBATCH -c 1
#SBATCH --mem=4G

singularity exec /n/app/singularity/containers/$USER/your_container.sif tool_to_run
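Assuming the script above is saved as, say, container_job.sh (a hypothetical name), it can be submitted and monitored with the standard Slurm commands:

# submit the batch job
sbatch container_job.sh

# check the status of your pending and running jobs
squeue -u $USER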





Note 1:

By default, only /tmp and /home/$USER are available inside the Singularity container.

Singularity> df -h
Filesystem                                                               Size  Used Avail Use% Mounted on
overlay                                                                   16M   32K   16M   1% /
devtmpfs                                                                 126G     0  126G   0% /dev
tmpfs                                                                    126G  8.0K  126G   1% /dev/shm
/dev/mapper/compute--a--17--80-root                                       20G  4.6G   16G  24% /etc/hosts
tmpfs                                                                     16M   32K   16M   1% /.singularity.d/actions
orchestra2-p10.med.harvard.edu:/ifs/mdcp10/systems/Orchestra/home/$USER  135T  123T   13T  91% /home/$USER
/dev/mapper/compute--a--17--80-tmp                                       302G   33M  302G   1% /tmp
/dev/mapper/compute--a--17--80-var                                        20G  553M   19G   3% /var/tmp

Any other filesystems available on O2 compute nodes can be mounted inside the Singularity container using the -B flag, for example:

compute-a-16-21:~ singularity exec -B /n/scratch,/n/app /n/app/singularity/containers/debian10.sif df -h
Filesystem                                                               Size  Used Avail Use% Mounted on
overlay                                                                   16M   32K   16M   1% /
devtmpfs                                                                 126G     0  126G   0% /dev
tmpfs                                                                    126G  8.0K  126G   1% /dev/shm
/dev/mapper/compute--a--17--80-root                                       20G  4.6G   16G  24% /etc/hosts
tmpfs                                                                     16M   32K   16M   1% /.singularity.d/actions
orchestra2-p10.med.harvard.edu:/ifs/mdcp10/systems/Orchestra/home/$USER  135T  123T   13T  91% /home/$USER
/dev/mapper/compute--a--17--80-tmp                                       302G   33M  302G   1% /tmp
/dev/mapper/compute--a--17--80-var                                        20G  552M   19G   3% /var/tmp
data.vstscrtchmdcp01.med.harvard.edu:/scratch                            910T  562T  348T  62% /n/scratch
orchestra2-p10.med.harvard.edu:/o2_app                                    10T  2.2T  7.9T  22% /n/app

Access permissions for those filesystems are preserved inside the container.
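The -B flag also accepts a source:destination syntax when a path should appear under a different mount point inside the container. A minimal sketch, assuming a hypothetical lab directory /n/groups/mylab that you have access to:

# bind the (hypothetical) directory /n/groups/mylab to /data inside the container
singularity exec -B /n/groups/mylab:/data /n/app/singularity/containers/debian10.sif ls /data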

Note 2:

By default, not all environment variables are ported inside the Singularity container. If a variable defined outside Singularity needs to be available inside the container and is not ported by default, it can be pre-set outside the container with the prefix SINGULARITYENV_. For example, the variable FOO can be ported inside the Singularity container by pre-setting it as SINGULARITYENV_FOO:

compute-a-16-21:~ FOO="something"
compute-a-16-21:~ export SINGULARITYENV_FOO=$FOO
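Inside the container the SINGULARITYENV_ prefix is stripped, so the variable is visible simply as FOO. A quick check, reusing the debian10.sif image from the examples above:

compute-a-16-21:~ singularity exec /n/app/singularity/containers/debian10.sif bash -c 'echo $FOO'
something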


Note 3:

If you plan to use one or more GPU cards inside the container, you need to submit the O2 job to a partition that supports GPU computing and add the --nv flag to your singularity command, for example:

singularity exec --nv -B /n/scratch /n/app/singularity/containers/$USER/your_container.sif
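A complete GPU batch job might look like the sketch below (the gpu partition name and --gres syntax follow common O2 usage, but check the current GPU computing documentation; the container and tool names are placeholders):

#!/bin/bash
#SBATCH -p gpu                # a partition that supports GPU computing
#SBATCH --gres=gpu:1          # request one GPU card
#SBATCH -t 2:00:00
#SBATCH -c 1
#SBATCH --mem=8G

# --nv makes the host NVIDIA driver and GPU devices visible inside the container
singularity exec --nv -B /n/scratch /n/app/singularity/containers/$USER/your_container.sif tool_to_run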


Additional documentation about the singularity command can be found on the official Singularity webpage.














