Checkpointing saves a job’s current state to disk; you can restart the job from that saved point at a later time. This can provide protection against job failures due to bugs, network errors, full disks, or node failures. Checkpointing is especially useful for jobs running for multiple days, but should not be used for short jobs.
DMTCP software is NOT guaranteed to work or to support all applications and languages.
A given process might fail to run or restart from a saved checkpoint. We strongly encourage you to test DMTCP on your workflow before depending on it.
MPI jobs will not checkpoint with the version of dmtcp currently installed on O2. GPU jobs will only checkpoint in limited circumstances; see below.
What is CheckPointing?
The process of CheckPointing consists of creating periodic snapshots of the running process and its active memory (RAM). Those snapshots can then be used to restart the execution of that process from the recorded point. The approach is similar to creating manual restart points inside your code by saving important data at regular intervals; with DMTCP, however, it is handled outside your code.
How does it work?
The two most common approaches for using DMTCP are to either checkpoint your execution at a given constant interval or to manually initiate checkpointing from within the code (when possible).
In both cases the first step is to load the dmtcp module with either module load gcc/6.2.0 dmtcp or module load gcc/9.2.0 dmtcp.
Constant Interval CheckPointing:
After loading the dmtcp module you should be able to start your command with:
dmtcp_launch --interval CKP_FREQ your_program
where CKP_FREQ is the checkpointing frequency in seconds and your_program is the command you need to run within the job. In this case DMTCP will create a memory checkpoint every CKP_FREQ seconds. (See the CAUTION below about choosing CKP_FREQs that are too small.)
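For example, to checkpoint a program once per hour (the program name ./my_program is a placeholder for your own command):

```shell
module load gcc/9.2.0 dmtcp
# take a memory snapshot every 3600 seconds (one hour)
dmtcp_launch --interval 3600 ./my_program
```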
It is also possible to create checkpoints manually by starting the command without specifying an interval (dmtcp_launch your_program) and by executing the shell command dmtcp_command --checkpoint directly from within your code, placing it at strategic points. In the Python example below the dmtcp_command call is placed at the beginning of a loop where each iteration consumes a very large amount of time:
import os

def main():
    # something here
    for it in range(0, some_number_here):
        os.system("dmtcp_command --checkpoint")  # request a checkpoint now
        # do something here that takes
        # a very long time

if __name__ == '__main__':
    main()
The creation of a checkpoint is a potentially time consuming process that can also generate very large files, depending on the RAM (memory) used by the running processes.
When a checkpoint is created, DMTCP writes all data currently loaded in RAM to a file. A job using ~100GB of RAM will therefore produce roughly 100GB of checkpoint data, which could fill up your storage quota; checkpointing 100 jobs that each use only 1GB of RAM is likewise enough to fill your $HOME storage quota. We therefore encourage you to write checkpoint data to your scratch folder when possible.
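One way to direct checkpoint files to scratch is dmtcp_launch's --ckptdir option (supported by recent DMTCP releases; the scratch path below is a placeholder for your own folder):

```shell
# write checkpoint files to a scratch folder instead of the working directory
dmtcp_launch --ckptdir /path/to/your/scratch/ckpt --interval 3600 ./your_program
```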
We strongly advise checkpointing only jobs that are expected to run for more than one day. Checkpointing short jobs may significantly slow them down without substantial benefit.
How to restart your checkpointed job
After loading the dmtcp module you can restart a job from its last checkpoint by running the following commands from the folder where the job's checkpoint was created:

export DMTCP_COORD_HOST=$( hostname ) ### this might change depending on your shell
./dmtcp_restart_script.sh

The file dmtcp_restart_script.sh is created by the DMTCP software.
You can also build a script that automatically restarts a program from its last checkpoint, if one is available, using an sbatch script like:
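A minimal sketch; the partition, time limit, memory, program name (./your_program), and one-hour interval below are placeholders, not O2 recommendations:

```shell
#!/bin/bash
#SBATCH -p medium        # partition: placeholder
#SBATCH -t 2-00:00       # time limit: placeholder
#SBATCH --mem=8G         # memory: placeholder

module load gcc/9.2.0 dmtcp
export DMTCP_COORD_HOST=$( hostname )   ### this might change depending on your shell

if [ -x dmtcp_restart_script.sh ]; then
    # a checkpoint exists in this folder: resume from it
    ./dmtcp_restart_script.sh
else
    # first dispatch: launch under DMTCP, checkpointing every hour
    dmtcp_launch --interval 3600 ./your_program
fi
```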
This assumes that each job is executed in a separate folder, so no dmtcp_restart_script.sh file is present when a job is dispatched for the first time.
If you start a job with the above template and you do not want it to restart from saved checkpoints, make sure to delete any local files created by DMTCP.
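For example, from the job's folder (ckpt_*.dmtcp reflects DMTCP's default checkpoint-file naming; verify the file names produced by your own run before deleting):

```shell
# remove DMTCP restart artifacts so the job starts fresh on the next dispatch
rm -f ckpt_*.dmtcp dmtcp_restart_script*.sh
```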
Checkpointing GPU jobs
DMTCP cannot be used to checkpoint GPU processes.
However, you may be able to checkpoint a GPU job by implementing custom checkpointing in your code at points where you know the GPU is not in use and no data currently stored in GPU memory (VRAM) is required to restart the job.
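As a sketch of that approach in plain Python (the file name loop_state.pkl and the function run are illustrative, not part of DMTCP): copy any results you still need off the GPU into host memory, save the loop state to disk, and on restart reload it and resume from the saved iteration.

```python
import os
import pickle

CKPT = "loop_state.pkl"  # illustrative checkpoint file name

def run(total_iters=10):
    """Resume from a custom checkpoint if one exists, else start fresh."""
    start, state = 0, {"acc": 0}
    if os.path.exists(CKPT):
        with open(CKPT, "rb") as f:
            saved = pickle.load(f)
        start, state = saved["it"], saved["state"]
    for it in range(start, total_iters):
        # ... GPU work would happen here; copy any results you still
        # need back into host memory (e.g. into `state`) first ...
        state["acc"] += it
        with open(CKPT, "wb") as f:  # save state after every iteration
            pickle.dump({"it": it + 1, "state": state}, f)
    return state["acc"]
```

A re-run after an interruption then picks up at the last saved iteration instead of starting over at iteration 0.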