Filesystems


How much space do I have?

It varies based upon the filesystem / directory, so please reference the Filesystem Quotas page for a comprehensive overview of quotas per user and/or per group.

Will I be charged for storage usage?

HMS IT will start charging for storage and compute usage on O2 on July 1, 2021 for any labs whose PIs do not have a primary or secondary appointment with an HMS Quad department. If you would like to verify whether your PI/lab has such an appointment and will not be charged for O2 usage, please contact us at rchelp@hms.harvard.edu.

For additional information about the chargeback program, please reference the new Research Computing Core website.

There will not be any charges for O2 storage or compute usage prior to the rollout date of the chargeback program.

Where do I put my files?

For most users' purposes, a filesystem is just a directory, like /home. However, filesystems can differ with respect to:

  • speeds of reading/writing data
  • backup policies
  • user/group access permissions
  • how much can be stored on them. The filesystem quotas page describes how to find out how much space you are using.

Where to put your files depends on:

  • size of the data
  • who needs to see them
  • whether they are temporary data or require backups
  • how you will be accessing them
  • how often you will be accessing them

It is almost never a good idea to have more than, say, 10,000 files in a directory. Your work and others' will be faster if you split that huge directory into a bunch of smaller sub-directories.
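One way to do the split is to bucket files into sub-directories by a filename prefix. A minimal sketch (it builds a throwaway sandbox with dummy files; point SRC at your own oversized directory, and pick a prefix length that suits your naming scheme):

```shell
# Sandbox stand-in for a directory with far too many files
SRC=$(mktemp -d)
touch "$SRC"/sample_{001..200}.txt

# Move each file into a sub-directory named after its first 8 characters
for f in "$SRC"/*.txt; do
    name=$(basename "$f")
    bucket=${name:0:8}              # e.g. "sample_0" for sample_001.txt
    mkdir -p "$SRC/$bucket"
    mv "$f" "$SRC/$bucket/$name"
done

ls "$SRC"                           # now a handful of sub-directories
```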

IMPORTANT NOTE: None of the standard filesystems are automatically encrypted, so they cannot be used for HIPAA-protected or other secure data (above Harvard data security level 3) unless those data have been de-identified.

Home directory (/home/ab123)

Every user gets a home directory where they land when logging into O2. For eCommons ID ab123, this would be /home/ab123. This is a good place to put small data sets, lab notes, scripts, and important analysis results. Your home directory is of limited size, so if it fills up you'll need to use other filesystems. Home directories are backed up nightly.

For a small data analysis not requiring large data sets or huge output, a standard workflow would be:

  • (Optionally) Copy data from a desktop or other location to the home directory
  • Run analysis, writing output to the home directory
  • (Optionally) Copy data back to a desktop
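In commands, that workflow might look like the following. The file and script names are placeholders; transfer.rc.hms.harvard.edu is O2's file transfer host (see File Transfer):

```
# On your desktop: copy input data to your O2 home directory
scp input_data.csv ab123@transfer.rc.hms.harvard.edu:/home/ab123/project/

# On O2: run the analysis, writing output under /home/ab123
sbatch run_analysis.sh

# On your desktop again: copy the results back
scp ab123@transfer.rc.hms.harvard.edu:/home/ab123/project/results.csv .
```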

Group directories

  • /n/groups/mygroup/
  • /n/data1/institution/department/lab/
  • /n/data2/institution/department/lab/

A group directory is used by a lab (or a set of researchers sharing data). These directories can be read by any member of the lab, which is quite useful when multiple researchers need to see the same data. Unlike home directories, the entire lab directory has a quota, and lab members work together to keep the space from filling up. These directories are used for large data sets, reference data, or scripts used by a whole lab. Eligible labs can use the Active (O2) Compute Storage Request Form to request a group directory or to increase its quota. Group directories are backed up nightly.

You might run an analysis on data in your home directory using reference data from your lab directory. You might then put results into the lab directory for other lab members to use.

Scratch directory (/n/scratch3/users/a/ab123)

Each user is entitled to space under the /n/scratch3 filesystem, limited to 10 TiB or 1 million files/directories, whichever is reached first. You can create a scratch3 directory there for storing temporary data.

IMPORTANT NOTE: These files are not backed up, and will be deleted if they are not accessed for 30 days.

Note: It is against HMS IT policy to artificially refresh last access time of any file located under /n/scratch3.

For workflows that allow full control of temp/intermediate files, you can leave your input data under your home or group (if available) directory, have the first step in the workflow read from the original directory, do all of the temp/intermediate writes to /n/scratch3, and perform the final write back to the home or group location. For example, in a 5-step pipeline, step 1 reads from /n/groups or /home, steps 2-4 write intermediate files to /n/scratch3, and step 5 reads from /n/scratch3 and writes back to the final output location in /n/groups or /home. Here is a suggested workflow:

  • Create a directory under /n/scratch3 if needed
  • Set up your workflow so that the input is read from /n/groups or /home, but temporary/intermediate files are written to your scratch3 directory.
  • Write any needed results back to /n/groups or /home
  • Delete temporary data, or let it be auto-deleted
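The pattern can be sketched in miniature with ordinary shell commands. Here, throwaway sandbox directories stand in for the real locations; on O2 you would use /home/ab123 or /n/groups/... and /n/scratch3/users/a/ab123:

```shell
GROUP=$(mktemp -d)      # stand-in for /n/groups/mylab (or /home/ab123)
SCRATCH=$(mktemp -d)    # stand-in for /n/scratch3/users/a/ab123
printf '3\n1\n2\n2\n' > "$GROUP/input.txt"

# Intermediate steps read the original input and write only to scratch
sort "$GROUP/input.txt"          > "$SCRATCH/step1_sorted.tmp"
uniq "$SCRATCH/step1_sorted.tmp" > "$SCRATCH/step2_deduped.tmp"

# The final step writes the result back to the group (or home) directory
cp "$SCRATCH/step2_deduped.tmp" "$GROUP/final_output.txt"

# Intermediates can now be removed (or left to be auto-purged)
rm -rf "$SCRATCH"
```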

For workflows that write temp/intermediate files to the current directory, you can create a directory under /n/scratch3 and cd to it. Run the workflow from your scratch3 directory, specifying full paths to input files in /n/groups or /home and full final output paths to /n/groups or /home. Here is a suggested workflow using example ID "ab123":

  • Create a directory under /n/scratch3 if needed.
  • Set up your workflow so that full paths are used to refer to input files in /n/groups or /home.
  • Change directories (cd) to your /n/scratch3 directory, and run the analysis from there:
    • cd /n/scratch3/users/a/ab123
  • Write or copy any needed results back to /n/groups, /home, or your desktop, with copies submitted as an sbatch job or from an interactive session:
    • srun --pty -p interactive -t 0-12:00 /bin/bash
  • Delete temporary data, or let it be auto-deleted

For workflows that allow little flexibility in the location of temporary/intermediate files, data can be copied over to /n/scratch3, processed there, and the results copied back to /n/groups or /home. This creates a redundant copy of the input, takes up storage space, and requires time to transfer the data to and from /n/scratch3. Here is a suggested workflow:

  • Create a directory under /n/scratch3 if needed.
  • Copy data from /n/groups, /home, or your desktop to your scratch3 directory. We recommend submitting the copy as an sbatch job, or running it from an interactive session (e.g. srun --pty -p interactive -t 0-12:00 /bin/bash)
  • Run the analysis in your scratch3 directory, writing all temporary/intermediate files to this space
  • Copy any needed results back to your home or group directory on O2 via a cluster job or from an interactive session, or download to your desktop via the O2 file transfer servers (transfer.rc.hms.harvard.edu)
  • Delete temporary data, or let it be auto-deleted

IMPORTANT NOTE: If you transfer files to /n/scratch3 using a tool and flags that preserve timestamps (e.g. rsync -a or -t), those files are subject to the deletion policy based on their original timestamps. If a preserved timestamp is more than 30 days old, the file will be deleted on the next cleanup pass, even if it was just copied over. The same can happen if you install software on /n/scratch3 for personal use: if a step in the installation simply copies files with timestamps preserved, the software may appear to stop working at random as those files are purged prematurely, and you will rarely have any insight into when this occurs. Please be very judicious about handling files when moving them to, or generating them on, /n/scratch3; as noted above, files affected by this behavior are unrecoverable.
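The effect is easy to demonstrate locally with cp, whose -p flag preserves timestamps much like rsync -a or -t does. Sandbox paths are used here; on /n/scratch3 the timestamp-preserving copy would be treated as idle data:

```shell
WORK=$(mktemp -d)
mkdir "$WORK/src" "$WORK/scratch"
touch -d '40 days ago' "$WORK/src/old_data.txt"   # file last touched 40 days ago

cp -p "$WORK/src/old_data.txt" "$WORK/scratch/kept_stamp.txt"   # like rsync -a: old timestamp preserved
cp    "$WORK/src/old_data.txt" "$WORK/scratch/fresh_stamp.txt"  # plain copy: timestamp is "now"

# Only the timestamp-preserving copy still looks more than 30 days old;
# on /n/scratch3 it would be purged on the next cleanup pass.
find "$WORK/scratch" -mtime +30
```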

GPU dedicated scratch space (/n/scratch_gpu/users/a/ab123)

This space is very similar to the standard /n/scratch3 space described above, with a quota of 15 TiB per user and an idle-data retention policy of 30 days. The /n/scratch_gpu filesystem is dedicated to GPU computing and, at this time, is only available to labs whose PI has a primary or secondary appointment in a pre-clinical HMS department. To learn more, please visit Scratch_gpu Storage.

Accessing folders on "research.files.med.harvard.edu" from O2

The file server "research.files.med.harvard.edu" is mostly used by desktop systems, but can be accessed from a few select O2 systems. You must transfer data from research.files to an O2 filesystem to compute against it!

Collaborations created by HMS IT are on the research.files filesystem. This filesystem is mostly used for sharing data between labs, or for departmental shared space, and it can be mounted as a shared drive on Windows and Mac desktops. To request a new research.files folder or a quota increase for an existing folder, please use the Active Collaborations Storage Form.

When needed, this filesystem can be accessed as /n/files on either the transfer partition on O2, or the transfer cluster at transfer.rc.hms.harvard.edu.

  • O2's dedicated transfer servers mount "research.files.med.harvard.edu" at the path /n/files. This provides an easy place to copy files between /n/files and other O2 directories without having to submit jobs. See File Transfer for more information.
  • O2 login and most compute nodes do not mount /n/files
  • For those users who must use /n/files during batch jobs, you can request access to the transfer job partition, which has a few low-power compute nodes that mount /n/files for this purpose. See File Transfer for more information.

Restoring Accidentally Deleted Files

Most shared filesystems retain snapshots for up to 60 days, the exception being temporary filesystems. If snapshots are available for a directory, they are located in a hidden directory called .snapshot. (This directory will not appear in the output of ls, or even ls -a.) To retrieve a backup:

  • From a command prompt on O2, type cd .snapshot and then ls to see available backups of that directory.
  • Inside the .snapshot directory, there will be directories with date/times in their names, each containing a copy of all files at that date/time. Each sub-directory will also have its own .snapshot directory.
  • There are two types of directories within .snapshot: daily snapshots (retained for 14 days) and weekly snapshots (retained for 60 days), distinguished by "daily" or "weekly" in the directory name. Choose which snapshot to restore from based upon when the file or directory you want to restore was created and when it was accidentally deleted.
  • You can't write files to these directories, but you can copy files from here back to the original directories with the cp command.

Here is an example of how to restore a file from the .snapshot directory:
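The directory and snapshot names below are illustrative (run ls inside .snapshot to see the real ones); suppose counts.txt was accidentally deleted from /n/groups/mylab/results:

```
cd /n/groups/mylab/results/.snapshot
ls          # e.g.: daily.2021-06-28_00-10   weekly.2021-06-20_00-10   ...

# Copy the file out of the most recent snapshot that still contains it
cp daily.2021-06-28_00-10/counts.txt /n/groups/mylab/results/counts.txt
```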

Longer-Term Storage Considerations

Networked storage on O2 can be classified in one of three ways: active, standby, or cold. For additional information on these storage offerings (including a comparative table), please reference the HMS RC page on Storage.

Active Storage

Active storage is storage upon which you expect to compute. For more information about this designation, please visit the HMS RC page on Active storage.

/n/standby

Standby storage is storage you might want to leverage if you have files that you only need to access infrequently. For more information about this storage offering, please visit the HMS RC page on Standby storage, where you will also find a request form and details on how to access this filesystem. Note that /n/standby is NOT accessible via O2 login nodes or compute nodes.

Cold Storage

Cold storage is storage you might leverage if you do not plan to access certain files again, e.g. you'd like to store certain files or datasets for archival only. A file or folder qualifies for cold storage only if you have no plans to access it for the foreseeable future. To inquire about this offering, please contact rdm@hms.harvard.edu. Note that this is a future storage option, and is not currently deployed for general use.

Copying data to O2 and between filesystems

See File Transfer for information on moving data to/from desktops, or between filesystems.

Shared Filesystems

These filesystems are housed on a central file server and are available from any system within O2.

filesystem          use

/n/groups           shared group data storage (Contact Research Computing if you need a group space)
/n/data1            shared group data storage
/n/data2            shared group data storage
/home               individual account data storage
/n/scratch3         temporary/intermediate file storage
/n/scratch_gpu      temporary/intermediate GPU-related file storage
/n/standby          longer-term archival storage

Note: The /n/files filesystem, which provides shared group data storage (access to eCommons collaborations), is not accessible from O2 compute or login nodes, only from the transfer partition. This partition has restricted access, so you will need to request access to run jobs there. See File Transfer for more details.

Temporary Filesystems

These filesystems tend to allow fast reads and writes, but are not backed up. If you are doing significant I/O, it is often better to copy files from a networked filesystem (like /n/groups or /home) to a temporary filesystem, process them there, and copy the output back, than to operate directly on files in your home or group directory.

/tmp is the standard UNIX temporary directory, backed by a different local disk on each machine. If you require the fastest I/O for your job, you can have a program write temporary intermediate files to /tmp. A file you place in /tmp on a login node is not available in /tmp on a compute node, or even on a different login node. If a job writes to /tmp, it writes to /tmp on the node where the job is running. There is not a lot of space, and you will be sharing it with anyone else using /tmp on that node. Since each node has a different /tmp, if you need to fetch files from there you will need to ssh directly to that compute node. Also, these files may be deleted at any time after your job finishes.
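A minimal sketch of this pattern, runnable anywhere; "$WORK" is a sandbox stand-in for a real /home/ab123 or /n/groups location, and on O2 you would put these commands in a job script and submit it with sbatch:

```shell
# Stand-in for your home or group directory on networked storage
WORK=$(mktemp -d)
printf 'banana\napple\ncherry\n' > "$WORK/input.txt"

# Use a private subdirectory so you don't collide with other users of /tmp
TMPWORK=$(mktemp -d /tmp/ab123.XXXXXX)

# Heavy intermediate I/O happens on node-local /tmp...
sort -T "$TMPWORK" "$WORK/input.txt" > "$TMPWORK/sorted.txt"

# ...and only the final result goes back to networked storage
cp "$TMPWORK/sorted.txt" "$WORK/sorted.txt"

# /tmp is small and shared, and its contents may vanish after the job: clean up
rm -rf "$TMPWORK"
```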

Temporary filesystems are never backed up and are periodically automatically purged of unused data. The contents of these filesystems may also be deleted in the event of a system being rebooted or reinstalled.


[ Information below here is not important for most users ]

Synchronized Filesystems

These filesystems are housed on local disks on individual machines. We keep these filesystems synchronized using our deployment management infrastructure.

filesystem   use

/            top of UNIX filesystem
/usr         most installed software
/var         variable data such as logs and databases

Synchronized O2 filesystems are never backed up. The source system images from which compute nodes and application servers are built are backed up daily, and these can be used to reinstall a system.