Installing LocalColabFold Locally
If you would like to control when LocalColabFold is updated, you can install it in a local folder that you manage yourself. Instructions for doing so are below.
Installing LocalColabFold
First, begin an interactive session (Using Slurm Basic | Interactive Sessions) and load the conda module. Preferably do this with nothing else loaded; running module purge first will remove all active modules from your current environment.
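For example, an interactive session request followed by a purge might look like the following (the partition, time, and memory values here are only illustrative; adjust them to your needs):
$ srun --pty -p interactive -t 0-2:00 --mem 4G /bin/bash
$ module purge
Then load the conda module: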
$ module load miniconda3/4.10.3
Then cd to the location where you would like to clone the LocalColabFold repository (GitHub - YoshitakaMo/localcolabfold: ColabFold on your local PC) and clone it:
$ git clone https://github.com/YoshitakaMo/localcolabfold.git
This command will create a folder in your working directory called localcolabfold. cd into this directory. If you ls this location, you should see the same files that appear in the repository view on GitHub, now available in the folder that was just created. We are interested in the install_colabbatch_linux.sh file. Note the path to this file (we will refer to it as /path/to/install_colabbatch_linux.sh from now on; replace it with your own path). Now cd to the location where you would like the environment to live and invoke the script:
$ cd /path/to/desired/location
$ sh /path/to/install_colabbatch_linux.sh
This should create a directory at /path/to/desired/location called colabfold_batch. Wait for the installation to finish.
Keeping Your LocalColabFold Installation Updated
You are probably maintaining a personal LocalColabFold installation because you would like to use new ColabFold features as they are implemented, without waiting for the O2 module to be updated (which may happen irregularly, or not at all, depending on stability). To keep your installation updated, return to the location where you cloned the repository (recall that the directory created by git clone was called localcolabfold):
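For example (replace the path with wherever you cloned the repository):
$ cd /path/to/localcolabfold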
Now that you’re here, you’ll want to make sure that the repository’s scripts are up to date:
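A simple way to do this is to pull the latest changes from the GitHub repository:
$ git pull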
Now, you’ll want to run the update_linux.sh script and pass the path to your environment as an argument:
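For example, assuming the environment was installed at the location used above (adjust both paths to match your own):
$ sh /path/to/localcolabfold/update_linux.sh /path/to/desired/location/colabfold_batch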
After this is complete, you should have the newest version of ColabFold baked into your LocalColabFold installation.
Executing LocalColabFold Locally
Now that you have installed LocalColabFold locally, there are a couple of additional setup steps before you can use this installation.
First, make sure the bin subdirectory of your colabfold_batch installation is added to your PATH variable:
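For example, assuming the installation location used above (replace the path with your own):
$ export PATH="/path/to/desired/location/colabfold_batch/bin:$PATH"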
If you prefer, you can paste this line into your ~/.bashrc file instead so that it is automatically set up each time you log in to O2.
In order to leverage GPU resources, you will need to load a CUDA module, which in turn requires you to load a GCC module:
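For example (the module versions shown here are illustrative; check module spider gcc and module spider cuda to see which versions are currently available on O2):
$ module load gcc/9.2.0
$ module load cuda/11.7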
In internal testing, Research Computing found that even though install_colabbatch_linux.sh installs local copies of GCC and CUDA, the installation is unable to leverage them on O2, and we were unable to provide access to these local copies. The present workaround is to use the O2 modules in their place, as above.
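With the PATH and modules set up, a minimal run looks something like the following (the input FASTA file and output directory here are placeholders; see the ColabFold documentation for the full set of command-line options):
$ colabfold_batch /path/to/input.fasta /path/to/output_directory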