NOTICE: FULL O2 Cluster Outage, January 3 - January 10

O2 will be completely offline for a planned HMS IT data center relocation from Friday, Jan 3, 6:00 PM, through Friday, Jan 10.

  • On Jan 3 (5:30-6:00 PM): O2 login access will be turned off.
  • On Jan 3 (6:00 PM): O2 systems will start being powered off.

This project will relocate existing services, consolidate servers, reduce power consumption, and decommission outdated hardware to improve efficiency, enhance resiliency, and lower costs.

Specifically:

  • The O2 Cluster will be completely offline, including O2 Portal.
  • All data on O2 will be inaccessible.
  • Any jobs still pending when the outage begins will need to be resubmitted after O2 is back online.
  • Websites on O2 will be completely offline, including all web content.

More details at: https://harvardmed.atlassian.net/l/cp/1BVpyGqm & https://it.hms.harvard.edu/news/upcoming-data-center-relocation

Scratch3 Storage

To provide more robust and reliable storage, HMS IT has deployed a new storage cluster, designated /n/scratch, to replace /n/scratch3.

The /n/scratch3 filesystem was retired on Jan 16, 2024.

The timeline for this update was:  

  1. November 13 – Beta access to new /n/scratch for preliminary testing. 

  2. December 8 – Full access to /n/scratch for all users. The existing /n/scratch3 will temporarily remain available. 

  3. January 8 – /n/scratch3 becomes read-only. 

  4. January 16 – /n/scratch3 is retired. The path /n/scratch3 will no longer exist on O2, and no data will be recoverable from the old /n/scratch3.

Please update your workflows to use /n/scratch; see our documentation about changes in the new Scratch Storage.
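One low-tech way to migrate is to rewrite hard-coded paths in existing job scripts. A minimal sketch, assuming your job scripts live in a hypothetical directory named jobs/ (adjust the name and glob to your setup):

```shell
# Rewrite hard-coded /n/scratch3 paths to the new /n/scratch mount.
# jobs/ is a hypothetical directory of job scripts, not an O2 convention.
for script in jobs/*.sh; do
    # -i.bak edits in place and keeps a .bak backup of each original file
    sed -i.bak 's|/n/scratch3|/n/scratch|g' "$script"
done
```

Keeping the .bak backups makes it easy to diff the result before submitting anything.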

If you have any questions or concerns, contact Research Computing at rchelp@hms.harvard.edu.


Scratch3 Overview

  • The /n/scratch3 filesystem was retired on Jan 16, 2024 (see above).

  • Please use the new storage at /n/scratch. Documentation is available at: Scratch Storage

Scratch3 Configuration

Scratch3 is a storage location on O2 for temporary data. In summary:

  • THE SCRATCH SPACE IS NOT BACKED UP!

  • Files are AUTOMATICALLY DELETED 30 days after they are last accessed.

  • There are NO SNAPSHOTS available for scratch3 folders.

  • O2 users can have up to 10 TiB and 1 million files/directories in their scratch folder.

  • Use this space for intermediate files in a longer workflow, or for extra output that doesn't matter much.

To reiterate, if you delete a file on purpose or by accident, or just leave it sitting for 30 days, it is GONE. You can't get it back, and Research Computing can't get it back for you. Using this model lets us provide cost-effective, high-end storage to many more users than would be possible with permanent and backed-up files. But it is important that you understand the risks of using it.
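Because the purge is driven by last-access time (atime), you can check when a file was last read with stat. A sketch using a throwaway temp file (the 30-day purge itself runs server-side; this only shows how to inspect the timestamp the policy is based on):

```shell
# Inspect a file's last-access time (atime) — the timestamp the
# 30-day purge is based on. Uses a temp file so the example runs anywhere.
tmpfile=$(mktemp)
echo "intermediate data" > "$tmpfile"
# GNU stat %X prints atime as seconds since the epoch; GNU date renders it
date -d "@$(stat -c '%X' "$tmpfile")"
rm -f "$tmpfile"
```

Run this against files in your own scratch directory to see which ones are approaching the 30-day cutoff.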

Important - Artificially modifying file access times is against policy and may result in losing your access to scratch3.

Scratch3 Directories: Users

On scratch3, user directories are found under:

  • /n/scratch3/users/<first_hms_id_char>/<HMSID>

Where "<first_hms_id_char>" is the first character of your HMS ID (formerly eCommons ID) and "<HMSID>" is your HMS ID.

For example, the scratch3 directory for a user with the HMS ID "abc123" would be located at:

  • /n/scratch3/users/a/abc123

For an HMS ID of zz999, the scratch3 directory will be at:

  • /n/scratch3/users/z/zz999
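The layout above is mechanical enough to derive in a shell one-liner. A minimal sketch, hard-coding the example HMS ID from the text (substitute your own, e.g. from $USER):

```shell
# Build the scratch3 path from an HMS ID: first character, then the full ID.
hmsid="abc123"                                  # example HMS ID from above
scratch_dir="/n/scratch3/users/${hmsid:0:1}/${hmsid}"
echo "$scratch_dir"                             # prints /n/scratch3/users/a/abc123
```

${hmsid:0:1} is bash substring expansion, taking the first character of the ID.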

Scratch3 Storage Usage

Storage usage is tracked per user rather than per group. To check your personal and group storage utilization and limits, run the following command:

quota-v2

More information about the O2 quota-v2 command can be found on this wiki page.

HMS Research Computing retired the old tool for reporting storage utilization and limits called quota on August 8, 2023.

The quota-v2 tool retrieves more comprehensive information than the previous quota tool, executes faster, and runs on all O2 nodes (login, compute, and transfer).