CheckPointing with DMTCP


Checkpointing saves a job’s current state to disk; you can restart the job from that saved point at a later time. This can provide protection against job failures due to bugs, network errors, full disks, or node failures. Checkpointing is especially useful for jobs running for multiple days, but should not be used for short jobs.

The checkpointing software DMTCP (https://dmtcp.sourceforge.io) is available on O2.

DMTCP is NOT guaranteed to work with, or to support, every application and language.

A given process might fail to run under DMTCP, or fail to restart from a saved checkpoint. We strongly encourage you to test DMTCP on your workflow before depending on it.

 

MPI jobs will not checkpoint with the version of dmtcp currently installed on O2.
GPU jobs will only checkpoint in limited circumstances. See below.

What is CheckPointing?

Checkpointing consists of creating periodic snapshots of a running process and its active memory (RAM). Those snapshots can then be used to restart the process from the recorded point. The approach is similar to creating manual restart points inside your code by periodically saving important data, except that it is handled outside your code by the DMTCP software.

How does it work?

The two most common approaches for using DMTCP are to either checkpoint your execution at a given constant interval or to manually initiate checkpointing from within the code (when possible).

In both cases, the first step is to load the dmtcp module with either module load gcc/6.2.0 dmtcp or module load gcc/9.2.0 dmtcp.
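For example, to load the newer toolchain and confirm that the DMTCP tools are on your path:

module load gcc/9.2.0 dmtcp
which dmtcp_launch    # should print the location of the DMTCP installation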

Constant Interval CheckPointing:

After loading the dmtcp module you should be able to start your command with:

dmtcp_launch --interval CKP_FREQ your_program

where CKP_FREQ is the interval between checkpoints in seconds and your_program is the command you need to run within the job. DMTCP will then create a memory checkpoint every CKP_FREQ seconds. (See the CAUTION below about choosing a CKP_FREQ that is too small.)
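For example, to checkpoint a long-running program every two hours (the program name and interval are only illustrative):

module load gcc/9.2.0 dmtcp
dmtcp_launch --interval 7200 ./my_long_program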

Custom CheckPointing:

It is also possible to manually create checkpoints by starting the command without specifying an interval

dmtcp_launch your_program

and by executing the shell command

dmtcp_command --checkpoint

directly from within your code, placing it at strategic points. In the Python example below, dmtcp_command is called at the beginning of a loop in which each iteration takes a very long time:
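A minimal sketch of such a loop is shown here (the per-iteration work is only a placeholder; the script must itself be started with dmtcp_launch, as described above, so that the checkpoint request reaches DMTCP):

import subprocess
import time

def expensive_iteration(step):
    # Placeholder for work that takes a long time per iteration.
    time.sleep(60)

# Start this script under DMTCP, e.g.:  dmtcp_launch python my_script.py
for step in range(100):
    # Request a checkpoint at the beginning of each long iteration, so a
    # failed job can be restarted from this point instead of from scratch.
    subprocess.run(["dmtcp_command", "--checkpoint"], check=True)
    expensive_iteration(step)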

CAUTION:

The creation of a checkpoint is a potentially time-consuming process that can also generate very large files, depending on the amount of RAM (memory) used by the running processes.

When a checkpoint is created, DMTCP writes to file all the data currently loaded in RAM. A job using ~100GB of RAM will therefore produce roughly the same amount of checkpoint data, which could fill up your storage quota.
Checkpointing 100 jobs that each use only 1GB of RAM is also enough to fill your $HOME storage quota. We therefore encourage you to write checkpoint data to your scratch folder when possible.
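Recent DMTCP releases also let you choose where checkpoint images are written via the --ckptdir option of dmtcp_launch; run dmtcp_launch --help on O2 to confirm the option is available in the installed version. The path below is only a placeholder for your own scratch folder:

dmtcp_launch --ckptdir /path/to/your/scratch/folder --interval 7200 ./my_long_program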

 

We strongly advise checkpointing only jobs that are expected to run for more than one day. Checkpointing short jobs may significantly slow them down without substantial benefit.

How to restart your checkpointed job

After loading the dmtcp module, you can restart a job from its last checkpoint using the following commands:
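module load gcc/9.2.0 dmtcp      # or gcc/6.2.0 dmtcp, matching the module used at launch
./dmtcp_restart_script.sh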

These commands must be run from the folder where the job's checkpoint was created. The dmtcp_restart_script.sh file is generated automatically by the DMTCP software.

You can also build a script that will automatically restart a program from its last checkpoint, if one is available, using an sbatch script like the following:
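(In this sketch, the SBATCH directives, checkpoint interval, and program name are placeholders; adapt them to your own job.)

#!/bin/bash
#SBATCH -p medium                # partition, adjust to your job
#SBATCH -t 2-00:00               # wall time, adjust to your job
#SBATCH -c 1
#SBATCH --mem=8G                 # memory, adjust to your job

module load gcc/9.2.0 dmtcp

if [ -x ./dmtcp_restart_script.sh ]; then
    # A checkpoint from a previous run exists: resume from it.
    ./dmtcp_restart_script.sh
else
    # First dispatch of this job: start the program under DMTCP,
    # checkpointing every two hours.
    dmtcp_launch --interval 7200 ./your_program
fi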

This assumes that each job is executed in a separate folder, so no dmtcp_restart_script.sh file is present the first time a job is dispatched.

If you start a job with the above template and you do not want it to restart from a saved checkpoint, make sure to delete any local files created by DMTCP.

Checkpointing GPU jobs

 
