Using (Local)ColabFold on O2

ColabFold (https://github.com/sokrypton/ColabFold) is an emerging protein structure prediction tool based on Google DeepMind’s AlphaFold (see https://harvardmed.atlassian.net/wiki/spaces/O2/pages/1995177985). LocalColabFold (https://github.com/YoshitakaMo/localcolabfold) is a packaging of ColabFold for use on local machines; we provide instructions on how to leverage LocalColabFold on O2 below. LocalColabFold uses MMseqs2 (conditionally faster than jackhmmer) for alignment, and runs AlphaFold2 for single protein modeling and AlphaFold-Multimer for protein complex modeling. If you are unsure which to use, feel free to try both tools and compare results.

Note: If you’re new to Slurm or O2, please see our documentation on submitting jobs for more information.

 

Using the Module

LocalColabFold is available via our LMOD module system.

The default behavior of the tool is to send proteins to a remote server for alignment before returning to leverage local GPU resources; this should only be used for small queries. We recommend processing “large” volumes of proteins by creating MSAs locally (see below). See the Caveats section for more information.

To access the module:

$ module load localcolabfold/1.5.2

A snapshot of the help text follows:

$ module help localcolabfold/1.5.2

------------------------------------------------------------------------------
Module Specific Help for "localcolabfold/1.5.2"
------------------------------------------------------------------------------

For detailed usage instructions, go to:

https://github.com/YoshitakaMo/localcolabfold

This module was created loosely based on the process outlined in the
YoshitakaMo/localcolabfold repository (release 1.5.1). However, this module
packages colabfold release 1.5.2.

This module currently requires gcc/9.2.0 to be loaded due to requiring
external cuda libraries. If you are working under a different compiler stack
(e.g. gcc/6.2.0), you may want to install this yourself until we offer an
updated version of the cuda module under a different compiler. Visit the
repository website for more information about how to install this yourself.

This output shows the last time the module was updated. ColabFold has developed quickly at times; if you are looking for a bleeding-edge version of ColabFold, you can install your own copy and manually keep it up to date.

Generating MSAs Using Local MMseqs2

Generating MSAs locally using MMseqs2 reduces the load on the remote servers managed by the ColabFold developers, and allows users to run larger batches without risk of being rate-limited (see the Caveats section below). MMseqs2 can be loaded from within the LocalColabFold module:

$ module load gcc/9.2.0 localcolabfold/1.5.2

Parameters for using MMseqs2 through the command colabfold_search can be shown by loading the modules above in an interactive session and running:
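$ colabfold_search --help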

MMseqs2 accepts .fasta files containing multiple amino acid sequences as input, including complexes where proteins are separated with a colon (:).

These inputs can contain a single sequence, or a "batch" of several proteins. The path to this file should be included in a colabfold_search command. We have public databases available in /n/shared_db/, and we will use these for database paths in the simplified examples below:

If you are using MMseqs2 version 14-7e284, please use /n/shared_db/misc/mmseqs2/14-7e284
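For example, a minimal colabfold_search invocation takes the query FASTA, the database directory, and an output directory as positional arguments. The file and directory names below are placeholders, the --threads value is illustrative, and the database path assumes the MMseqs2 14-7e284 databases noted above:

$ colabfold_search --threads 8 INPUT.fasta /n/shared_db/misc/mmseqs2/14-7e284 msas/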

These commands can be combined with an sbatch script. The resources required to complete a LocalColabFold job may vary by structure and complexity. It is generally best to start with a relatively conservative resource request, then increase as needed based on information from past jobs; this information can be found using commands like O2_jobs_report. Below is a simplified example of an sbatch script that runs the file INPUT.fasta against colabfold_search on the short partition:
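This is a minimal sketch, assuming the database path noted above; the partition, time, core, and memory values are illustrative starting points rather than tuned recommendations:

#!/bin/bash
#SBATCH -p short                    # CPU partition; MSA generation does not need a GPU
#SBATCH -t 12:00:00                 # wall time (illustrative; adjust as needed)
#SBATCH -c 8                        # CPU cores for the MMseqs2 search
#SBATCH --mem=128G                  # the ColabFold databases are large; adjust as needed
#SBATCH -o colabfold_search_%j.out
#SBATCH -e colabfold_search_%j.err

module load gcc/9.2.0 localcolabfold/1.5.2

# Generate MSAs (.a3m files) for the sequences in INPUT.fasta and write them to msas/
colabfold_search --threads 8 INPUT.fasta /n/shared_db/misc/mmseqs2/14-7e284 msas/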

The output should include MSAs in .a3m format. These can be submitted to LocalColabFold as input in the next section, similar to a FASTA file.

Executing LocalColabFold On O2

LocalColabFold can be loaded as a module by running:
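$ module load gcc/9.2.0 localcolabfold/1.5.2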

Once you have loaded these modules, you’ll want to submit your job to the gpu partition (or gpu_quad, if you have access) so that you can leverage GPU resources. Parameters for using LocalColabFold through the command colabfold_batch can be shown by loading the modules above in an interactive session and running:
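$ colabfold_batch --help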

At the moment, we do not recommend invoking the ‘--amber’ or ‘--templates’ flags, since these cause some jobs to fail.

LocalColabFold accepts both .a3m and .fasta files containing amino acid sequences, including complexes where proteins are separated with a colon (:). These inputs can contain a single sequence, or a "batch" of several proteins. The path to this file should be included in a colabfold_batch command. Below is a simplified example of a colabfold_batch command (graciously provided by the Center for Computational Biomedicine):
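In its simplest form, colabfold_batch takes the input file and an output directory as positional arguments (the names below are placeholders; additional options listed by --help can be added as needed):

$ colabfold_batch INPUT.fasta OUTPUT_DIR/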

Similar to the previous scripts mentioned, it is best to start with a modest resource request and slowly increase as needed. Below is a simplified example of an sbatch script that runs the file INPUT.fasta against colabfold_batch on the gpu partition:
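This is a minimal sketch; the time, core, and memory values are illustrative starting points, and the output directory name is a placeholder:

#!/bin/bash
#SBATCH -p gpu                      # or gpu_quad if you have access
#SBATCH --gres=gpu:1                # ColabFold cannot use more than one GPU (see below)
#SBATCH -t 12:00:00                 # wall time (illustrative; adjust as needed)
#SBATCH -c 4
#SBATCH --mem=32G
#SBATCH -o colabfold_batch_%j.out
#SBATCH -e colabfold_batch_%j.err

module load gcc/9.2.0 localcolabfold/1.5.2

# Predict structures for the sequences (or precomputed .a3m MSAs) in INPUT.fasta
# and write the results to the output/ directory.
colabfold_batch INPUT.fasta output/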

ColabFold does NOT support multiple GPUs. Please refrain from requesting more than one GPU per colabfold_batch invocation, as this will not speed up your run time, and will inhibit your ability to have your job dispatched in a timely manner.

The output directory will contain several .pdb, .json, and .png files for the predicted structure. These include pLDDT and PAE metrics that estimate the confidence of each prediction. The 'best' ranked structure will be named {sample_id}_unrelaxed_rank_1_model_{i}.pdb.

Caveats

LocalColabFold is a repackaging of ColabFold for local use. This means that LocalColabFold requires all the same local hardware resources and connections that ColabFold would require (but without the Google Colab soft dependency). This includes, by default, shipping the protein sequence to a remote server maintained by the ColabFold developers for processing during the alignment step. This server is shared by all users of ColabFold, and is not an HPC environment to our knowledge. This means that LARGE BATCHES OF PROTEIN ALIGNMENTS MUST BE GENERATED LOCALLY USING MMSEQS2, regardless of whether you are using the O2 module or your own installation on O2. At this time, the developers define large as “a few thousand” sequences. This could change, and is at the discretion of the system administrators maintaining the remote server. Please be considerate of other ongoing analyses on O2 when submitting large queries.

Large volumes of submissions to the remote server may cause the submitting compute node’s IP address to be rate-limited, or even blacklisted, which will impact all users of LocalColabFold on O2 that land on that compute node. Furthermore, because there are a limited number of compute nodes with GPU resources, if volume is high enough, all of O2’s GPU compute nodes can easily be blacklisted in a short amount of time.

Troubleshooting/FAQ

Errors with using --amber or --templates

As noted above, jobs will occasionally fail if either of the above flags is enabled; this is a known issue and requires action from the ColabFold developers. For now, simply resubmit the job without these flags. If these functions are required for your work, you can also try submitting your sequences to AlphaFold instead (and adjust your resource requirements accordingly).

Please contact rchelp@hms.harvard.edu with any questions regarding the module or troubleshooting the installation process that this section does not address or addresses insufficiently. Depending on the question, we may need to refer you to the developers, but we will do our best to assist.

“I was using localcolabfold/latest, and now it’s gone! What happened to it?”

The module formerly known as localcolabfold/latest is now localcolabfold/1.3.0. At the time this module was first made available, the versioning was not as clear-cut (primarily, the relationship between LocalColabFold versions and ColabFold releases was not standardized). Now that it is, it would be improper for a latest version to not actually be the latest version, so we have renamed the module to match its associated ColabFold release version. If you have workflows or pipelines that load this module, please change them to load 1.3.0 instead. It is worth mentioning that 1.3.0 and 1.5.2 do NOT use the same AlphaFold models, so you should consider them incompatible with each other in terms of research reproducibility. If you were using localcolabfold/latest for an existing project, we recommend either continuing with 1.3.0 or restarting entirely with 1.5.2, rather than mixing and matching.

localcolabfold/1.3.0 (formerly localcolabfold/latest) sometimes hangs or crashes on the alignment step

We have recently discovered that the developers changed the way they handle alignment requests to the remote MMseqs2 server, including how the request is structured and sent/received. Unfortunately, this means that alignment functionality in versions older than 1.5.2 may be impacted. Presently, the workaround is to generate your alignments locally with our MMseqs2 modules (see above). For reference, you can check which MMseqs2 module is used by each LocalColabFold installation by loading the corresponding module and typing module list.