Running Singularity Containers in O2

Overview

We support the containerization tool Singularity in O2. Singularity allows users to execute software containers within regular O2 jobs and is fully compatible with existing Docker images.

How to import Singularity or Docker containers in O2

The Singularity software is available by default (no module needed) from any compute node on the O2 cluster; however, due to security concerns, Singularity can only be used to run images that have been tested and approved. The testing process is fully automated and can be initiated by any user.

To test and deploy a Singularity container in O2 you need to submit it using our csubmitter tool, which works only from within O2 jobs and does not work from login nodes.

Make sure to request at least 8GB of memory with your O2 job to use the csubmitter tool.
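For example, an interactive job with enough memory for csubmitter can be requested along these lines (the partition name and time limit here are illustrative; adjust them to your needs):

```shell
# Request an interactive job with 8GB of memory
# (partition and time limit are example values)
srun --pty -p interactive --mem 8G -t 0-02:00 /bin/bash
```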



First, start an interactive O2 job and load the csubmitter module with:

module load csubmitter/latest

Then submit a container for testing with:   

csubmitter --name ProjectName --image-path /path/to/container/container_name.sif (or .def, .simg, dockerfile)

where ProjectName is a name you assign to the container project. You will be able to replace a container with a new one by submitting the new container using the same ProjectName.

The flag --image-path must be followed by the path to the container file to be scanned. It is also possible to scan and import a container directly from a web repository, as shown in the example below:

csubmitter --name ProjectName --image-uri shub://user/image:tag (or docker://user/image:tag)

where the flag --image-uri is followed by the web address of the desired container from either the Singularity Hub (shub://) or Docker Hub (docker://) repository.

Containers available from other repositories or webpages must first be downloaded to O2 and then submitted with the --image-path flag.
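As a sketch, a container hosted on an arbitrary webpage could be fetched and then submitted like this (the URL and file name below are placeholders, not a real download location):

```shell
# Download the container file to O2 first (URL is a placeholder)
wget https://example.org/downloads/my_tool.sif

# Then submit the local file for scanning
csubmitter --name my_tool --image-path ./my_tool.sif
```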

Note: The testing process can take from several minutes to a few hours, depending on the type of container tested.

If no security concern is detected, the container passes the test and is automatically copied to the pre-authorized path /n/app/singularity/containers/$USER/, from where it can be executed in O2.

To check the status of the testing process you can use the command:

csubmitter --status
+----+----------------+--------------------------------------+------------+---------------------+------------+---------------+
| id | Container Name | Source                               | Status     | Submitted date      | Scan Grade | Type          |
+----+----------------+--------------------------------------+------------+---------------------+------------+---------------+
| 1  | ProjectName1   | shub://user/image:tag                | processing | 2020-10-26T08:22:29 | N/A        | ContainerUri  |
| 2  | ProjectName2   | 58856a0d-e3cc-44cc-9ef8-c47dfa77da05 | submitted  | 2020-10-27T16:52:58 | N/A        | ContainerFile |
| 3  | ProjectName3   | 0fa978a8-6fbd-4805-86a6-9072ad610f4e | processed  | 2020-11-09T09:30:55 | Pass       | ContainerFile |
+----+----------------+--------------------------------------+------------+---------------------+------------+---------------+



When the testing is completed the Status will report as processed; if no vulnerability is found, the Scan Grade will report Pass and the container file will be available under /n/app/singularity/containers/$USER/.

It is also possible to see detailed information about a specific container request using the command csubmitter --status <id> where <id> is the ID number of the desired container.
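For instance, to inspect the third request from the status table shown earlier:

```shell
# Show detailed information for the container request with id 3
csubmitter --status 3
```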

You can also run the command csubmitter --help to see more information about this command.

More details about the csubmitter tool are available here

 

The csubmitter tool is still in a pilot-test stage and might not always work properly.

If you notice that after a day your container has not been processed, please let us know at rchelp@hms.harvard.edu.

 

How to prepare your Docker container to pass the csubmitter scan

The csubmitter scan checks for vulnerabilities in the container software. To avoid failing a scan, make sure that all system libraries inside your Docker container are at the latest available version. Usually this can be done by running the commands apt update and apt-get upgrade (or the equivalent commands for the OS used inside the container) at the end of the installation process.
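In a Dockerfile this typically means finishing the build with an update step. A minimal sketch for a Debian/Ubuntu-based image (the base image and install steps are placeholders):

```dockerfile
FROM ubuntu:latest

# ... install your tools here ...

# Update all system libraries as the last build step,
# so the scan does not flag outdated packages
RUN apt update -y && apt-get upgrade -y
```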

When installing a pre-built container directly from a repository, the easiest approach is to create a Singularity definition file and bootstrap the Singularity container from the desired Docker container.

For example, if you need to install the Docker container ubuntu:latest, you can create a Singularity definition file (my_container.def in this example) containing the following lines:

Bootstrap: docker
From: ubuntu:latest

%setup

%files

%post
    apt update -y
    apt-get upgrade -y

%environment

%runscript

and then submit this definition file directly using the command:

csubmitter --name my_container --image-path /path/to/my_container.def

This will build the Singularity container starting from the desired Docker container, but will also update all system libraries from the original container. You can use the above template to install your desired Docker container.

How to run Singularity containers in O2

Once a container has been approved and moved under the path /n/app/singularity/containers it can be used in O2 within any interactive (srun) or batch (sbatch) Slurm job.

The singularity executable is not available on login nodes; to execute your container you must be running an interactive or batch Slurm job.



The most common use of Singularity containers is to start a shell within the container using the command singularity shell /path/to/container_file, as in the example below:

compute-a-17-80:~ singularity shell /n/app/singularity/containers/debian10.sif

Singularity> cat /etc/os-release | head -1
PRETTY_NAME="Debian GNU/Linux 10 (buster)"
Singularity>



or to execute a specific command inside the container with singularity exec /path/to/container_file command_to_run, followed by any required flags or inputs, for example:

compute-a-16-21:~ singularity exec /n/app/singularity/containers/debian10.sif cat /etc/os-release | head -1
PRETTY_NAME="Debian GNU/Linux 10 (buster)"

In the above example the cat command is executed inside the Debian container.

Singularity containers can also be executed within standard O2 batch jobs by calling the desired singularity command inside the sbatch script, for example:

#!/bin/bash
#SBATCH -p short
#SBATCH -t 2:00:00
#SBATCH -c 1
#SBATCH --mem=4G

singularity exec /n/app/singularity/containers/$USER/your_container.sif tool_to_run





Note 1:

By default, only /tmp and /home/$USER are available inside the Singularity container.

Singularity> df -h
Filesystem                                                               Size  Used Avail Use% Mounted on
overlay                                                                   16M   32K   16M   1% /
devtmpfs                                                                 126G     0  126G   0% /dev
tmpfs                                                                    126G  8.0K  126G   1% /dev/shm
/dev/mapper/compute--a--17--80-root                                       20G  4.6G   16G  24% /etc/hosts
tmpfs                                                                     16M   32K   16M   1% /.singularity.d/actions
orchestra2-p10.med.harvard.edu:/ifs/mdcp10/systems/Orchestra/home/$USER  135T  123T   13T  91% /home/$USER
/dev/mapper/compute--a--17--80-tmp                                       302G   33M  302G   1% /tmp
/dev/mapper/compute--a--17--80-var                                        20G  553M   19G   3% /var/tmp

Any other filesystems available on O2 compute nodes can be mounted inside the Singularity container using the flag -B, for example:

compute-a-16-21:~ singularity exec -B /n/scratch3,/n/app /n/app/singularity/containers/debian10.sif df -h
Filesystem                                                               Size  Used Avail Use% Mounted on
overlay                                                                   16M   32K   16M   1% /
devtmpfs                                                                 126G     0  126G   0% /dev
tmpfs                                                                    126G  8.0K  126G   1% /dev/shm
/dev/mapper/compute--a--17--80-root                                       20G  4.6G   16G  24% /etc/hosts
tmpfs                                                                     16M   32K   16M   1% /.singularity.d/actions
orchestra2-p10.med.harvard.edu:/ifs/mdcp10/systems/Orchestra/home/$USER  135T  123T   13T  91% /home/$USER
/dev/mapper/compute--a--17--80-tmp                                       302G   33M  302G   1% /tmp
/dev/mapper/compute--a--17--80-var                                        20G  552M   19G   3% /var/tmp
vast-cnode.med.harvard.edu:/scratch01                                    910T  562T  348T  62% /n/scratch3
orchestra2-p10.med.harvard.edu:/o2_app                                    10T  2.2T  7.9T  22% /n/app

Access permissions for those filesystems are preserved inside the container.

Note 2:

By default, not all environment variables are ported inside the Singularity container. If a variable defined outside Singularity needs to be available inside the container and is not ported by default, it can be pre-set outside the container with the prefix SINGULARITYENV_. For example, the variable FOO can be ported inside the Singularity container by pre-setting it as SINGULARITYENV_FOO:

compute-a-16-21:~ FOO="something"
compute-a-16-21:~ export SINGULARITYENV_FOO=$FOO
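With the variable pre-set this way, it should then be visible inside the container, for example (using the debian10.sif container from the earlier examples):

```shell
# FOO is visible inside the container thanks to the SINGULARITYENV_ prefix;
# single quotes keep $FOO from being expanded by the outer shell
singularity exec /n/app/singularity/containers/debian10.sif /bin/sh -c 'echo $FOO'
```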


Note 3:

If you plan to use one or more GPU cards inside the container, you need to submit the O2 job to a partition that supports GPU computing and add the flag --nv to your singularity command, for example:

singularity exec --nv -B /n/scratch3 /n/app/singularity/containers/$USER/your_container.sif
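Putting the two steps together, a GPU session might look like the following sketch (the partition name, GPU count, and nvidia-smi check are illustrative):

```shell
# Request an interactive job on a GPU partition
# (partition and gres values are example values)
srun --pty -p gpu --gres=gpu:1 --mem 8G -t 0-02:00 /bin/bash

# Inside the job, run the container with --nv so the GPU is visible
singularity exec --nv /n/app/singularity/containers/$USER/your_container.sif nvidia-smi
```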

 

 

Additional documentation about the singularity command can be found on the official Singularity webpage.