...

Code Block
>> configCluster

	Must set QueueName and WallTime before submitting jobs to O2.  E.g.

	>> c = parcluster;
	>> c.AdditionalProperties.QueueName = 'queue-name';
	>> % 5 hour walltime
	>> c.AdditionalProperties.WallTime = '05:00:00';
	>> c.saveProfile

Complete.  Default cluster profile set to "o2 R2023a".

Now your default cluster profile is set to "o2 R2023a" and you should be able to verify it by running the command parcluster

Code Block
>> parcluster

ans =

 Generic Cluster

    Properties:

                   Profile: o2 R2023a
                  Modified: false
                      Host: compute-a-16-161
                NumWorkers: 100000
                NumThreads: 1

        JobStorageLocation: /home/abc/.matlab/3p_cluster_jobs/o2/R2023a/shared
         ClusterMatlabRoot: /n/app/matlab/2023a-v2
           OperatingSystem: unix

   RequiresOnlineLicensing: false
   PreferredPoolNumWorkers: 32
     PluginScriptsLocation: /n/app/matlab/2023a-v2/toolbox/support-packages/matlab-parallel-server/scripts/IntegrationScripts/o2
      AdditionalProperties: List properties

    Associated Jobs:

            Number Pending: 0
             Number Queued: 0
            Number Running: 0
           Number Finished: 0

>>

...

Note 2: After running the configCluster command, the default cluster profile is set to the O2 cluster; if you want to go back and use the "local" cluster profile, you can change the default profile using the command parallel.defaultClusterProfile('local')
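For example, a minimal sketch of switching the default profile back and forth (the profile name "o2 R2023a" is taken from the output above; yours may differ):

Code Block
>> parallel.defaultClusterProfile('local')       % go back to the local profile
>> parallel.defaultClusterProfile('o2 R2023a')   % switch to the O2 profile again
>> parallel.defaultClusterProfile                % display the current default profile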

Note 3: Running the configCluster command sets the cluster profile only for the currently used MATLAB version. If you later use a different version of MATLAB, you will need to run configCluster again.
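A quick way to check which release you are currently running before deciding whether configCluster needs to be re-run (a minimal sketch using only built-in commands):

Code Block
>> version('-release')   % shows the running release, e.g. '2023a'
>> configCluster         % must be re-run once in each MATLAB release you use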


Setting the submission parameters for the O2 MATLAB cluster profile

...

Code Block
>> c = parcluster;
% Specify the walltime (e.g. 48 hours)
>> c.AdditionalProperties.WallTime = '48:00:00';
% Specify a partition to use for MATLAB jobs	
>> c.AdditionalProperties.QueueName = 'partition-name';


% Optional flags
% Specify memory to use for MATLAB jobs, per node (MB)
>> c.AdditionalProperties.Mem = '4000';

% Specify memory to use for MATLAB jobs, per CPU core (MB)
>> c.AdditionalProperties.MemPerCPU = '2000';

% Specify the GPU card to run on
>> c.AdditionalProperties.GpuCard = 'gpu-card-to-use';

% Request 2 GPUs per node
>> c.AdditionalProperties.GpusPerNode = 2;

% Add any sbatch-supported flag directly (for example, memory per node and number of tasks per node).
% The "AdditionalSubmitArgs" field can be used for any Slurm flag except the walltime and partition.
% This is the method we recommend.
>> c.AdditionalProperties.AdditionalSubmitArgs = '--mem=4000 --tasks-per-node=2';

% Save the profile so that the above changes persist between MATLAB sessions
>> c.saveProfile

Note that set parameters will not be retained by default and must be re-entered if the c object is deleted. To permanently save the submission parameters you must execute the command c.saveProfile
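As an illustration, a minimal sketch of submitting a job once the profile has been saved; myFunction and the pool size are hypothetical placeholders, not part of the O2 integration:

Code Block
>> c = parcluster;                               % saved AdditionalProperties are restored with the profile
>> j = c.batch(@myFunction, 1, {}, 'Pool', 4);   % hypothetical user function, 1 output, 4 workers
>> j.wait;                                       % block until the job finishes
>> out = j.fetchOutputs{:};                      % collect the result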

Important: Use --mem-per-cpu (or the property c.AdditionalProperties.MemPerCPU) instead of --mem to request a custom amount of memory when using the mpi partition. The Slurm flag --mem requests a given amount of memory per node, so, unless you are enforcing a balanced distribution of tasks (i.e. MATLAB workers) per node, you might end up with too much or not enough memory on a given node, depending on how the tasks are allocated.
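For example, a sketch of a per-CPU memory request suitable for the mpi partition (the 2000 MB value is illustrative):

Code Block
>> c = parcluster;
>> c.AdditionalProperties.QueueName = 'mpi';
>> c.AdditionalProperties.MemPerCPU = '2000';   % 2000 MB per worker, independent of node placement
>> c.saveProfile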

...