Researchers sometimes need text about Research Computing's resources and/or support to include in a grant application. If the text below is not sufficient, please contact Research Computing. We can provide a multi-page "Research Computing Fact Sheet" that describes our resources and services in more detail.


...


The Research Computing Group supports a large High Performance Compute cluster and the use of a broad range of software applications across the life sciences and biomedical research domains. It also provides a wide variety of training, as well as informatics and data analysis consulting.


O2 is a shared, heterogeneous High Performance Compute facility that includes 390+ compute nodes, 12,000+ compute cores, 103 GPU cards (47 Nvidia RTX8000s, 24 Tesla V100s, and some lab-specific or older cards), and more than 100TB of memory. The vast majority of the compute nodes are built on Intel architecture. Most compute nodes have 256GB of memory; a few have up to 1TB. O2 is also connected to several enterprise storage systems (optimized for scratch or more permanent storage) with over 20 petabytes of network and local data storage capacity. O2 is located in a state-of-the-art, off-campus data center, with multiple critical systems replicated at a secondary location for disaster recovery.


O2 also has hundreds of applications for computational analysis available on the cluster, including major computational tools (MATLAB, Mathematica), programming languages (R, Python, Perl), and modern software for many life science research disciplines. Jobs are managed by the SLURM scheduler, and the nodes run CentOS 7.7.1908 Linux. Hundreds of HMS-affiliated researchers use O2 for projects both large and small; in 2020, more than 31 million jobs were submitted to O2.
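
For illustration, the minimal batch script below sketches how work is typically submitted to a SLURM scheduler such as the one on O2. The partition, module, and script names are placeholder assumptions for the sketch, not confirmed O2 settings.

    #!/bin/bash
    # Minimal illustrative SLURM batch script. The partition and module names
    # below are assumptions, not confirmed O2 configuration.
    #SBATCH --job-name=example_analysis
    #SBATCH --partition=short
    #SBATCH --cpus-per-task=4
    #SBATCH --mem=8G
    #SBATCH --time=01:00:00
    #SBATCH --output=example_%j.out

    # Load a software module and run a (hypothetical) analysis script.
    module load python
    python analyze.py

A script like this would be submitted with sbatch (for example, sbatch example_job.sh) and monitored with squeue.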