NOTICE: FULL O2 Cluster Outage, January 3 - January 10
O2 will be completely offline for a planned HMS IT data center relocation from Friday, Jan 3, 6:00 PM, through Friday, Jan 10.
- On Jan 3, 5:30-6:00 PM: O2 login access will be turned off.
- On Jan 3, 6:00 PM: O2 systems will begin powering down.
This project will relocate existing services, consolidate servers, reduce power consumption, and decommission outdated hardware to improve efficiency, enhance resiliency, and lower costs.
Specifically:
- The O2 Cluster will be completely offline, including O2 Portal.
- All data on O2 will be inaccessible.
- Any jobs still pending when the outage begins will need to be resubmitted after O2 is back online; see the example after this list for one way to record them.
- Websites on O2 will be completely offline, including all web content.
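For planning purposes, here is a minimal sketch of one way to record your queued work before the shutdown, using the standard SLURM squeue client (the output file name is just an example):

    # Save a list of your own pending and running jobs so you know what to resubmit.
    squeue -u $USER --states=PENDING,RUNNING --format="%i %j %T %V" > ~/o2_jobs_before_outage.txt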
More details at: https://harvardmed.atlassian.net/l/cp/1BVpyGqm and https://it.hms.harvard.edu/news/upcoming-data-center-relocation
Text for Grants Referring to the HMS Research Cluster
Researchers sometimes need text about Research Computing's resources and/or support to include in a grant application. If the text below is not sufficient, please contact Research Computing. We can provide you with a multi-page "Research Computing Fact Sheet" that describes our resources and services in more detail.
The Research Computing Group supports a large High Performance Compute cluster and the use of a broad range of software applications across life sciences and biomedical research domains. It also provides a wide variety of training, as well as informatics and data analysis consulting.
O2 is a shared, heterogeneous High Performance Compute facility which includes 390+ compute nodes, 12,000+ compute cores, 147 GPU cards (47 Nvidia RTX8000, 24 Tesla V100s, and some lab-specific or older cards), and more than 100TB of memory. The vast majority of the compute nodes are built on Intel architecture. Most compute nodes have 256GB of memory; a few have up to 1TB. O2 is also connected to several enterprise storage systems (optimized for scratch or more permanent storage) with over 20 petabytes of network and local data storage capacity. O2 is located in a state-of-the-art, off-campus data center with multiple critical systems replicated at a secondary location for disaster recovery.
O2 also has hundreds of applications for computational analysis available on the cluster, including major computational tools (Matlab, Mathematica), computer languages (R, Python, Perl), and modern software for many life science research disciplines. Jobs are managed by the SLURM scheduler, and the nodes run CentOS 7.7.1908 Linux. Hundreds of HMS-affiliated researchers use O2 for projects large and small. In 2022, more than 20 million jobs were submitted to O2.
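As an illustration of SLURM usage on a cluster like O2, here is a minimal batch-script sketch of the kind submitted with sbatch; the partition name, module name, resource requests, and file names are hypothetical placeholders, not O2-specific values:

    #!/bin/bash
    #SBATCH --job-name=example         # job name shown in squeue
    #SBATCH --partition=short          # hypothetical partition name
    #SBATCH --time=0-01:00             # wall-clock limit of one hour
    #SBATCH --cpus-per-task=1          # one CPU core
    #SBATCH --mem=4G                   # 4 GB of memory
    #SBATCH --output=example_%j.out    # output file; %j expands to the job ID

    # Load a software module and run a script (names are illustrative).
    module load R
    Rscript analysis.R

The script would be submitted with sbatch example.sbatch, and its progress checked with squeue -u $USER.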