Filesystems
How much space do I have?
It varies based upon the filesystem / directory, so please reference the Filesystem Quotas page for a comprehensive overview of quotas per user and/or per group.
Will I be charged for storage usage?
As of July 1, 2021, HMS IT charges for storage and compute usage on O2 for any labs whose PIs do not have a primary or secondary appointment with an HMS Quad department. To verify whether your PI/lab has such an appointment and will not be charged for O2 usage, please contact the Research Computing Core at rccore@hms.harvard.edu.
For additional information, please reference the Research Computing Core website.
Where do I put my files?
For most users' purposes, a filesystem is just a directory, like /home. However, filesystems can differ with respect to:
speeds of reading/writing data
backup policies
user/group access permissions
how much can be stored on them. The filesystem quotas page describes how to find out how much space you are using.
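For a quick on-cluster estimate of usage, standard UNIX tools work on any of these filesystems. This is a generic sketch; the throwaway directory created here is a stand-in for a real location such as your home or group directory:

```shell
# Quick, generic usage check with standard UNIX tools.
# A throwaway directory stands in for a real location such as
# /home/ab123 or a lab's group directory.
demo_dir=$(mktemp -d)
printf 'example data\n' > "$demo_dir/file1.txt"

du -sh "$demo_dir"           # total space used, human-readable
find "$demo_dir" | wc -l     # entry count, relevant for file-count quotas
```

Note that du walks the whole tree, so on a very large directory it can take a while; the authoritative numbers are the ones on the Filesystem Quotas page.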
Where to put your files depends on:
size of the data
who needs to see them
whether they are temporary data or require backups
how you will be accessing them
how often you will be accessing them
It is almost never a good idea to have more than, say, 10,000 files in a directory. Your work and others' will be faster if you split that huge directory into a bunch of smaller sub-directories.
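As a sketch of that splitting approach, files can be bucketed into sub-directories keyed on part of the filename. The file names and the key extraction below are hypothetical; adapt them to your own data:

```shell
# Sketch: bucket files into sub-directories keyed on part of the
# filename. The names "sample_*.txt" are hypothetical examples.
workdir=$(mktemp -d)   # stand-in for a real data directory
cd "$workdir"
touch sample_a1.txt sample_a2.txt sample_b1.txt

for f in sample_*.txt; do
  key=$(printf '%s' "$f" | cut -c8)   # 8th character: a, b, ...
  mkdir -p "bucket_$key"
  mv "$f" "bucket_$key/"
done

ls    # bucket_a  bucket_b
```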
IMPORTANT NOTE: None of the standard filesystems are automatically encrypted, so they cannot be used for HIPAA-protected or other secure data (above Harvard data security level 3) unless those data have been de-identified.
Home directory (/home/ab123)
Every user gets a home directory where they land when logging into O2. For HMS ID ab123, this would be /home/ab123. This is a good place to put small data sets, lab notes, scripts, and important analysis results. Your home directory is of limited size, so if it fills up you'll need to use other filesystems. Home directories are backed up nightly.
For a small data analysis not requiring large data sets or huge output, a standard workflow would be:
(Optionally) Copy data from a desktop or other location to the home directory
Run analysis, writing output to the home directory
(Optionally) Copy data back to a desktop
Group directories
/n/groups/mygroup/
/n/data1/institution/department/lab/
/n/data2/institution/department/lab/
A group directory is used by a lab (or a set of researchers sharing data). These directories can be read by any member of the lab, which is quite useful when multiple researchers need to see the same data. Unlike home directories, the entire lab directory has a quota, and lab members work together to keep the space from filling up. These directories are used for large data sets, reference data, or scripts used by a whole lab. Eligible labs can use the Active (O2) Compute Storage Request Form to request a group directory or to increase its quota. Group directories are backed up nightly.
You might run an analysis on data in your home directory using reference data from your lab directory. You might then put results into the lab directory for other lab members to use.
Scratch directory (/n/scratch/users/a/ab123)
Each user is entitled to space (25 TiB or 2.5 million files/directories) under the /n/scratch filesystem. You can create a personal scratch directory there for storing temporary data.
** These files are not backed up and will be deleted if they are not accessed for 30 days. **
Note: It is against HMS IT policy to artificially refresh the last modification time of any file located under /n/scratch.
For workflows that allow for full control of temp/intermediate files, you can leave your input data under your home or group (if available) directory, make the first step in the workflow read from the original directory, do all of the temp/intermediate writes to /n/scratch, and perform the final write back to the home or group location. So in a 5-step pipeline, step 1 reads from /n/groups or /home, steps 2-4 write intermediate files to /n/scratch, and step 5 reads from /n/scratch and writes back to the final output /n/groups or /home directory. Here is a suggested workflow:
Set up your workflow so that the input is read from /n/groups or /home, but temporary/intermediate files are written to your scratch directory.
Write any needed results back to /n/groups or /home.
Delete temporary data, or let it be auto-deleted.
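The steps above can be sketched with local stand-in directories (mktemp is used here in place of the real /home and /n/scratch paths, and sort/uniq stand in for real analysis steps):

```shell
# Sketch of the read-from-home, write-temp-to-scratch pattern.
# mktemp directories stand in for /home/ab123 and a scratch directory.
home_dir=$(mktemp -d)      # stand-in for /home or /n/groups
scratch_dir=$(mktemp -d)   # stand-in for your scratch directory

printf 'c\na\nb\n' > "$home_dir/input.txt"

# step 1: read input from "home", write an intermediate file to "scratch"
sort "$home_dir/input.txt" > "$scratch_dir/sorted.txt"

# intermediate steps: further work stays on "scratch"
uniq "$scratch_dir/sorted.txt" > "$scratch_dir/deduped.txt"

# final step: write results back to the "home" location
cp "$scratch_dir/deduped.txt" "$home_dir/results.txt"

# temporary data can now be deleted (or left to be auto-purged)
rm -r "$scratch_dir"
```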
For workflows that write temp/intermediate files to the current directory, you can create a directory under /n/scratch and cd to it. Run the workflow from your scratch directory, specifying full paths to input files in /n/groups or /home and full final output paths to /n/groups or /home. Here is a suggested workflow using example ID "ab123":
Set up your workflow so that full paths are used to refer to input files in /n/groups or /home.
Change directories (cd) to your /n/scratch directory, and run the analysis from there: cd /n/scratch/users/a/ab123
Write or copy any needed results back to /n/groups, /home, or your desktop, with copies submitted as an sbatch job or from an interactive session: srun --pty -p interactive -t 0-12:00 /bin/bash
Delete temporary data, or let it be auto-deleted.
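A minimal sketch of this run-from-scratch pattern, with mktemp directories standing in for /home/ab123 and /n/scratch/users/a/ab123, and sort standing in for a tool that drops its output in the current directory:

```shell
# Sketch: run from the scratch directory, using full paths for input.
home_dir=$(mktemp -d)      # stand-in for /home or /n/groups
scratch_dir=$(mktemp -d)   # stand-in for /n/scratch/users/a/ab123
printf '3\n1\n2\n' > "$home_dir/input.txt"

# cd to scratch so anything written to the current directory
# (temp/intermediate files) lands on scratch, not home
cd "$scratch_dir"
sort "$home_dir/input.txt" > intermediate.txt   # lands in scratch

# copy the needed result back to the "home" location with a full path
cp intermediate.txt "$home_dir/final.txt"
```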
For workflows that allow little flexibility in the location of temporary/intermediate files, data can be copied over to /n/scratch, computed against there, and copied back to /n/groups or /home. This creates a redundant copy of the input, takes up storage space, and requires time to transfer the data to and from /n/scratch. Here is a suggested workflow:
Copy data from /n/groups, /home, or your desktop to your scratch directory. We recommend submitting this as an sbatch job, or copying from an interactive session (e.g. srun --pty -p interactive -t 0-12:00 /bin/bash).
Run the analysis in your scratch directory, writing all temporary/intermediate files to this space.
Copy any needed results back to your home or group directory on O2 via a cluster job or from an interactive session, or download to your desktop via the O2 file transfer servers (transfer.rc.hms.harvard.edu).
Delete temporary data, or let it be auto-deleted.
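The copy-in / compute / copy-out steps above can be sketched as follows; mktemp directories stand in for a real group directory and scratch directory, and wc stands in for the analysis:

```shell
# Sketch of the copy-in / compute / copy-out pattern.
group_dir=$(mktemp -d)     # stand-in for /n/groups/mygroup
scratch_dir=$(mktemp -d)   # stand-in for your scratch directory
mkdir "$group_dir/dataset"
printf 'record\n' > "$group_dir/dataset/data.txt"

# 1. copy the input data to scratch (a redundant copy, as noted above)
cp -r "$group_dir/dataset" "$scratch_dir/"

# 2. run the analysis against the scratch copy
wc -l "$scratch_dir/dataset/data.txt" > "$scratch_dir/counts.txt"

# 3. copy the needed results back to the group directory
cp "$scratch_dir/counts.txt" "$group_dir/"

# 4. clean up the scratch copy rather than waiting for auto-deletion
rm -r "$scratch_dir"
```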
IMPORTANT NOTE: If you are transferring files to /n/scratch using a tool and flags that preserve timestamps (e.g. rsync -a or -t), those files will also be subject to the deletion policy based on the original timestamp. If the preserved timestamp on a file is more than 30 days old, the file will be deleted the next day, even if it was just moved. This can also occur if you install software on /n/scratch for personal use: if a step inside the installation process simply copies files with timestamps preserved, your software may appear to stop functioning randomly as those files are purged prematurely. This is dangerous because users rarely have insight into when it occurs. Please be very judicious when moving files to, or generating them on, /n/scratch; as mentioned above, files affected by this behavior are unrecoverable.
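You can see the difference timestamp preservation makes with plain coreutils; cp -p carries the old modification time over the way rsync -a / -t does, while plain cp gives the copy a fresh timestamp (and so a fresh 30-day clock):

```shell
# Demonstration of timestamp preservation. cp -p keeps the old
# modification time (like rsync -a / -t); plain cp does not.
workdir=$(mktemp -d)
cd "$workdir"

touch -t 202001010000 original.txt     # pretend this file is years old

cp original.txt plain_copy.txt         # fresh timestamp on the copy
cp -p original.txt preserved_copy.txt  # old timestamp carried over

# plain_copy.txt is newer than the original; preserved_copy.txt is not,
# so on /n/scratch it would already be past the 30-day purge window
[ plain_copy.txt -nt original.txt ] && echo "plain copy: fresh mtime"
[ preserved_copy.txt -nt original.txt ] || echo "preserved copy: old mtime"
```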
GPU dedicated scratch space (/n/scratch_gpu/users/a/ab123)
Research Computing is no longer providing the /n/scratch_gpu filesystem. Please use /n/scratch instead.
Accessing folders on "research.files.med.harvard.edu" from O2
The file server "research.files.med.harvard.edu" is mostly used by desktop systems, but can be accessed from O2 on a few selected systems. You must transfer data from research.files to an O2 filesystem to compute against it!
Collaborations created by HMS IT are on the research.files filesystem. This filesystem is mostly used for sharing data between labs, or for departmental shared space, and it can be mounted as a shared drive on Windows and Mac desktops. To request a new research.files folder or an increase in an existing folder, please use the Active Collaborations Storage Form.
When needed, this filesystem can be accessed as /n/files on either the transfer partition on O2, or the transfer cluster at transfer.rc.hms.harvard.edu.
O2's dedicated transfer servers mount "research.files.med.harvard.edu" at the path /n/files. This provides an easy place to copy files between /n/files and other O2 directories without having to submit jobs. See File Transfer for more information.
O2 login and most compute nodes do not mount /n/files.
For those users who must use /n/files during batch jobs, you can request access to use the transfer job partition, which has a few low-power compute nodes that mount /n/files for this purpose. See File Transfer for more information.
Restoring Accidentally Deleted Files
Most shared filesystems retain snapshots for up to 60 days, the exception being temporary filesystems. If snapshots are available for a directory, they are located in a hidden directory called .snapshot. (This directory will not be visible by doing an ls or even ls -a.) To retrieve a backup:
From a command prompt on O2, type cd .snapshot and then ls to see available backups of that directory.
Inside the .snapshot directory, there will be directories with date/times in their names, containing a copy of all files at that date/time. Each sub-directory will also have its own .snapshot directory.
There are two types of directories within .snapshot: daily snapshots (retained for 14 days) and weekly snapshots (retained for 60 days), distinguished by the inclusion of "daily" or "weekly" in the directory name. Choose the directory to restore from based upon when the file or directory you want to restore was created and when it was accidentally deleted.
You can't write files to these directories, but you can copy files from them back to the original directories with the cp command.
Here is an example of how to restore a file from the .snapshot directory:
# change into .snapshot directory
mfk8@login01:~ $ cd .snapshot
# See contents of .snapshot directory.
# The available snapshot directories are named with a timestamp of when these backups were taken.
# In this example, the directory names contain the prefix "O2_home_" because we are in a user's home directory.
mfk8@login01:.snapshot $ ls
FSAnalyze-Snapshot-Current-1533427124 O2_home_daily_2018-10-28_02-00 O2_home_daily_2018-11-04_02-00 O2_home_weekly_2018-10-07_16-00
home.daily O2_home_daily_2018-10-29_02-00 O2_home_daily_2018-11-05_02-00 O2_home_weekly_2018-10-14_16-00
home.weekly O2_home_daily_2018-10-30_02-00 O2_home_daily_2018-11-06_02-00 O2_home_weekly_2018-10-21_16-00
O2_home_daily_2018-10-24_02-00 O2_home_daily_2018-10-31_02-00 O2_home_weekly_2018-09-09_16-00 O2_home_weekly_2018-10-28_16-00
O2_home_daily_2018-10-25_02-00 O2_home_daily_2018-11-01_02-00 O2_home_weekly_2018-09-16_16-00 O2_home_weekly_2018-11-04_16-00
O2_home_daily_2018-10-26_02-00 O2_home_daily_2018-11-02_02-00 O2_home_weekly_2018-09-23_16-00 SIQ-41aaccf519955ee9fff3befe969e62d7-latest
O2_home_daily_2018-10-27_02-00 O2_home_daily_2018-11-03_02-00 O2_home_weekly_2018-09-30_16-00
# To restore a file we accidentally deleted on November 6, we can change to the previous day's backup directory:
mfk8@login01:.snapshot $ cd O2_home_daily_2018-11-05_02-00
# Then we can list the contents of the directory:
mfk8@login01:O2_home_daily_2018-11-05_02-00 $ ls
file1.txt file2.txt file3.txt
# Once the file to restore has been identified, we can copy it back to the home directory:
mfk8@login01:O2_home_daily_2018-11-05_02-00 $ cp file1.txt ../../
# If you instead needed to restore a directory, use 'cp -r' instead of solely 'cp'
Longer-Term Storage Considerations
Networked storage on O2 can be classified in one of three ways: active, standby, or cold. For additional information on these storage offerings (including a comparative table), please reference the HMS RC page on Storage.
Active Storage
Active storage is storage upon which you expect to compute. For more information about this designation, please visit the HMS RC page on Active storage.
/n/standby
Standby storage is storage you might want to leverage if you have files that you only need to access infrequently. For more information about this storage offering, please visit the HMS RC page on Standby storage. A form to request standby storage can also be found on this page, as well as details for how to access this filesystem. Note that /n/standby is NOT accessible via O2 login nodes or compute nodes.
Cold Storage
Cold storage is storage you might leverage for files you never plan to access again, e.g. files or datasets kept for archival purposes only. A file or folder qualifies for cold storage only if you have no plans to access it for the foreseeable future. For more information about Cold Storage, please visit the dedicated HMS RC page. The form to request moving data to Cold Storage is available on that page. Additionally, please note that access to data in Cold Storage is currently limited to HMS IT.
Copying data to O2 and between filesystems
See File Transfer for information on moving data to/from desktops, or between filesystems.
Shared Filesystems
These filesystems are housed on a central file server and are available from any system within O2.
filesystem | use
---|---
/n/groups | shared group data storage (Contact Research Computing if you need a group space)
/n/data1 | shared group data storage
/n/data2 | shared group data storage
/home | individual account data storage
/n/scratch | temporary/intermediate file storage
/n/standby | longer term archival storage
Note: The /n/files filesystem, which provides shared group data storage, is not accessible from O2 compute or login nodes, only from the transfer partition. This partition has restricted access, so you will need to request access to run jobs there. See File Transfer for more details.
Temporary Filesystems
These filesystems tend to allow fast reads and writes, but are not backed up. If you are doing significant I/O on a networked filesystem (like /n/groups or /home), it is often better to copy files from your home or group directory, process them, and copy the output back than to operate directly on files in your home or group directory.
/tmp is the standard UNIX temporary directory, and /tmp is a different hard drive on each machine. If you require the fastest I/O for your job, you can have a program write temporary intermediate files to the /tmp directory. A file you place in /tmp on a login node is not available in /tmp on a compute node or even on a different login node. If a job writes to /tmp, it will write to /tmp on the node the job is running on. There is not a lot of space, and you will be sharing it with anyone else that requires the use of /tmp on that node. Each node has a different /tmp, and if you need to fetch files from there, you will need to ssh directly to that compute node. Also, these files may be deleted any time after your job finishes.
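Because /tmp is shared and files there may be deleted after the job ends, a defensive pattern is to create a private sub-directory and remove it when your script exits. This is a generic shell sketch, not an O2-specific requirement; "myjob" is an arbitrary name:

```shell
# Create a private working directory under /tmp and guarantee
# cleanup when the script exits, even on failure.
tmpdir=$(mktemp -d /tmp/myjob.XXXXXX)   # "myjob" is an arbitrary prefix
trap 'rm -rf "$tmpdir"' EXIT

# write intermediate files into the private directory
printf 'intermediate\n' > "$tmpdir/stage1.out"

# ... run the rest of the pipeline against "$tmpdir" ...
wc -l "$tmpdir/stage1.out"
```

Copy anything you need to keep off /tmp before the script exits, since the trap removes the whole directory.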
Temporary filesystems are never backed up and are periodically automatically purged of unused data. The contents of these filesystems may also be deleted in the event of a system being rebooted or reinstalled.
[ Information below here is not important for most users ]
Synchronized Filesystems
These filesystems are housed on local disks on individual machines. We keep these filesystems synchronized using our deployment management infrastructure.
filesystem | use
---|---
/ | top of UNIX filesystem
 | most installed software
/var | variable data such as logs and databases
Synchronized O2 filesystems are never backed up. The source system images from which compute nodes and application servers are built are backed up daily, and these can be used to reinstall a system.