Table of Contents

...

Age = This value is based on the job's pending time (since it became eligible to run), normalized against the PriorityMaxAge parameter; PriorityMaxAge is currently set to 7-00:00:00 (7 days).

JobSize = The job size factor correlates with the number of nodes or CPUs the job has requested; the larger the job, the closer this factor is to 1. Currently the contribution from this factor is negligible.

Partition = This value is calculated as the ratio between the priority of the partition requested by the job and the maximum partition priority. Currently the maximum partition priority is 14 (for the interactive partition).

QOS = The Quality of Service factor is calculated as the ratio between the job's QOS priority and the maximum QOS priority. By default each job is submitted with the QOS "normal", which has a priority value of zero.

TRES = Not currently active; this should always be zero.

FairShare = This value reflects the ratio between the share of resources available to each user and the amount of resources already consumed by the user submitting the job; see below for details.


Each of these factors is then multiplied by a custom weight, and the results are summed to obtain the overall JobPriority value according to the formula:

JobPriority = Age*PriorityWeightAge +
              FairShare*PriorityWeightFairShare +
              JobSize*PriorityWeightJobSize +
              Partition*PriorityWeightPartition +
              QOS*PriorityWeightQOS +
              TRES*PriorityWeightTRES


where the multipliers are currently set to the values:

PriorityWeightAge = 500000
PriorityWeightFairShare = 1000000
PriorityWeightJobSize = 10000
PriorityWeightPartition = 400000
PriorityWeightQOS = 2000000
PriorityWeightTRES = (null)
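
As a rough illustration of how the weights combine, the sketch below applies the formula to a single hypothetical job. The weights are the values listed above; the normalized factor values are invented purely for the example.

Code Block
# Minimal sketch of the JobPriority formula above. The weights are the
# current values listed on this page; the normalized factor values for the
# example job are made up purely for illustration.
weights = {
    "Age": 500000,
    "FairShare": 1000000,
    "JobSize": 10000,
    "Partition": 400000,
    "QOS": 2000000,
    "TRES": 0,          # PriorityWeightTRES is (null), so it contributes nothing
}

# Hypothetical normalized factors (each between 0 and 1) for one pending job.
factors = {
    "Age": 0.01,        # pending ~1.7 hours out of PriorityMaxAge = 7 days
    "FairShare": 0.004,
    "JobSize": 0.0,
    "Partition": 0.0075,
    "QOS": 0.0,         # default "normal" QOS has zero priority
    "TRES": 0.0,
}

job_priority = sum(factors[name] * weights[name] for name in weights)
print(int(job_priority))    # -> 12000 for these made-up factors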

FairShare Calculation

Each user's fairshare is currently calculated as

Code Block
F = 2**(-U/(S*d))

where: 

S is the normalized number of shares made available to each user. In our current setup all users get the same number of raw shares.

U is the normalized usage, calculated as U = Uh / Rh, where Uh is the user's historical usage subject to the half-life decay and Rh is the total historical usage across the cluster, also subject to the half-life decay.

Uh and Rh are calculated as

Uh = U[0] + 0.5*U[1] + (0.5**2)*U[2] + ...

Rh = R[0] + 0.5*R[1] + (0.5**2)*R[2] + ...

where U[n] and R[n] are the user's and the cluster's usage accumulated n periods ago, and the length of each period is given by the PriorityDecayHalfLife interval, currently set to 6:00:00 (6 hours).

Currently Usage is calculated as:

Allocated_NCPUs*elapsed_seconds + Allocated_Mem_GiB*0.0625*elapsed_seconds + Allocated_NGPUs*5*elapsed_seconds

d is the FairShareDampeningFactor. This is used to reduce the impact of resource consumption on the fairshare value and to account for the ratio of active users to total users. The value is currently set to 10 and is adjusted dynamically as needed.

The initial fairshare value (with zero normalized usage) for each user is equal to 1; if a user is consuming exactly their share of the available resources, their fairshare value will be 0.5.

It takes approximately 48 hours for a fully depleted fairshare to return from 0 to 1, assuming no additional usage is being accumulated by the user during those ~48 hours.
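
Putting the pieces together, below is a minimal sketch of the fairshare calculation for one hypothetical user. The half-life decay, the usage weights (1 per CPU-second, 0.0625 per GiB of memory per second, 5 per GPU-second) and the dampening factor d = 10 follow the description above; the per-period usage numbers and the share split are made up for illustration.

Code Block
# Minimal sketch of the fairshare calculation described above.
# Per-period usage numbers and the share split are made up; the billing
# weights, half-life decay and dampening factor follow the text.

def billed_usage(ncpus, mem_gib, ngpus, elapsed_seconds):
    """Usage charged for allocated resources: CPU + memory + GPU terms."""
    return (ncpus + mem_gib * 0.0625 + ngpus * 5) * elapsed_seconds

def decayed(per_period):
    """Half-life decay: U[0] + 0.5*U[1] + (0.5**2)*U[2] + ..."""
    return sum(u * 0.5 ** n for n, u in enumerate(per_period))

# Hypothetical usage, most recent 6-hour period first (index 0 = current period).
user_periods = [billed_usage(4, 16, 0, 3600), billed_usage(4, 16, 0, 7200), 0]
cluster_periods = [5.0e7, 4.0e7, 6.0e7]   # made-up cluster-wide totals

U = decayed(user_periods) / decayed(cluster_periods)   # normalized usage
S = 1 / 1200   # normalized shares, e.g. equal raw shares over ~1200 users (made up)
d = 10         # FairShareDampeningFactor, per the text

F = 2 ** (-U / (S * d))
print(round(F, 4))    # -> 0.9654 for these made-up numbers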


Two useful commands for inspecting the priority of pending jobs and the fairshare values are sprio and sshare:




Code Block
login02:~ sprio -l
         JOBID     USER   PRIORITY        AGE  FAIRSHARE    JOBSIZE  PARTITION        QOS        NICE                 TRES
        6444966    uid13      12489       5000       4061          0       3429          0           0
        6445056    uid13      13061       5000       4061          0       4000          0           0
        6445068    uid13      10775       5000       4061          0       1714          0           0
        6445078    uid13      10204       5000       4061          0       1143          0           0
        6445083    uid13      10204       5000       4061          0       1143          0           0
        6586939    uid45       6583       4812         57          0       1714          0           0
        6586940    uid45       6583       4812         57          0       1714          0           0
        6586941    uid45       6583       4812         57          0       1714          0           0
        6586942    uid45       6583       4812         57          0       1714          0           0
        6586943    uid45       6583       4812         57          0       1714          0           0
        6586944    uid45       6583       4812         57          0       1714          0           0
        6586945    uid32       6583       4812         57          0       1714          0           0
        6586946    uid32       6583       4812         57          0       1714          0           0
        6586947    uid32       6583       4812         57          0       1714          0           0
        6586948    uid32       6583       4812         57          0       1714          0           0




login02:~ sshare -u $USER -U
             Account       User  RawShares  NormShares    RawUsage  EffectvUsage  FairShare
-------------------- ---------- ---------- ----------- ----------- ------------- ----------
rccg                      rp189          1    0.000787         320      0.000002   0.999832


Partition Priority Tiers

The scheduler tries first to dispatch jobs in the partition interactive, then jobs in the partition priority, and finally jobs submitted to all remaining partitions. As a consequence, interactive and priority jobs will most likely be dispatched first, even if they have a lower overall priority than jobs pending in other partitions (short, medium, long, mpi, etc.).
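
Assuming these tiers are implemented with Slurm's per-partition PriorityTier parameter (an assumption, since the scheduler configuration is not shown here), a quick sketch such as the following can list each partition's tier from scontrol output.

Code Block
import subprocess

# Rough sketch: print the PriorityTier reported by scontrol for each partition.
# "scontrol show partition --oneliner" emits one "key=value ..." line per partition.
out = subprocess.run(
    ["scontrol", "show", "partition", "--oneliner"],
    capture_output=True, text=True, check=True,
).stdout

for line in out.splitlines():
    fields = dict(item.split("=", 1) for item in line.split() if "=" in item)
    print(fields.get("PartitionName"), fields.get("PriorityTier"))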

Backfill scheduling

Low-priority jobs may be dispatched before high-priority jobs only if doing so does not delay the expected start time of the high-priority jobs and if the resources required by the low-priority jobs are free and idle.


How to manage the priority of your own jobs

...

Once jobs have been submitted and are still pending, two commands can be used to modify their relative priority:

scontrol top <jobid>


This command increases the priority of job <jobid> to match the highest priority among the user's pending jobs, and the priority gained is subtracted from the user's other pending jobs in equal decrements.

...


and in this case the total priority of the user's jobs is “conserved”.


scontrol update jobid=<jobid(s)> nice=<+value>


This command can be applied to multiple jobs at the same time and subtracts the desired number of priority points from the given jobs; note that a positive nice value reduces the job priority (and negative values cannot be applied).

...

Code Block
JobID   Priority
1            100
2            100
3            100
4            100
5            100
6             90

the command scontrol update jobid=1,2,3,4,5 nice=11 will change the priority of the pending jobs to:

...

and in this case the total priority of the user's jobs is not conserved. However, job priorities across the cluster usually range from a minimum of ~2K to a maximum of ~12K-15K, differing by thousands of points, so changing them with small “nice” values should have a negligible impact on the jobs' pending times.

...

In order to improve this situation, we are introducing a system where priority points (technically called “quality of service” or QOS) will be assigned, on a weekly basis, to users who have been submitting jobs with reasonably accurate resource requests.


We understand that it is often not possible to predict exactly the memory and/or run time required by each job, but many users request more than 10X the amount of memory and wall time their jobs actually need. Our goal is for O2 users to test the workflows they will be using heavily, check the resources actually consumed by their jobs, and adjust future job submissions accordingly.

...

  • check the report you receive every Monday, which contains information about your overall usage for the previous 7 days.

  • get more detailed info at any time by running the command “O2sacct” directly from the O2 command line. Use “O2sacct -h” to get help, or check our wiki page about getting information about current and past jobs (see also the sacct sketch below).
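
As a generic alternative for spot checks, standard sacct accounting fields can also be queried directly; the sketch below (a rough example, not the O2sacct wrapper itself) prints time limit versus elapsed time and requested memory versus peak memory for today's jobs.

Code Block
import subprocess

# Rough sketch (not the O2sacct wrapper): compare requested vs. consumed
# resources using standard sacct accounting fields. Without a start time,
# sacct reports jobs since 00:00 of the current day.
fields = "JobID,JobName,Partition,Timelimit,Elapsed,ReqMem,MaxRSS,State"
out = subprocess.run(
    ["sacct", "--parsable2", f"--format={fields}"],
    capture_output=True, text=True, check=True,
).stdout

for line in out.splitlines():
    # Compare Timelimit vs. Elapsed and ReqMem vs. MaxRSS column by column.
    print("\t".join(line.split("|")))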

...

Below are a few examples of workflows where the resource requests should or shouldn’t be changed.


Ex 1 (Good):

Your jobs’ runtime varies between 1 and 4 hours and you are requesting a wall-time of 5 hours for all your jobs.

There is no need for further optimization. You are using only a little extra time, and it may be hard to predict which jobs will be the slower ones.


Ex 2 (Bad):

You submit a thousand jobs with a runtime of < 5 min, but 10 of those jobs run for 8 hours. You are requesting 8 hours for all your jobs. 

You should request ~10 minutes wall time for all the jobs, and resubmit the small number of jobs that run for too long. Your pend times will be substantially shorter, so the overall time to run all the jobs will likely be shorter. (Of course, if you can predict which jobs will take a long time, you can submit those ten separately. But it may be difficult to predict that.)


Ex 3 (Good):

Your jobs’ memory consumption varies uniformly between 1 and 10 GiB, and you are requesting 11 GiB.

No need to optimize.


Ex 4 (Bad):

You submit a thousand jobs. The vast majority of those jobs use 1-3 GiB of memory, but 10 jobs use ~10 GiB. You are requesting 12 GiB of memory for all your jobs.

If you reduce the requested memory for all jobs, and resubmit the limited number of jobs that might fail (exceeding memory limits), the pend times for all jobs – and therefore the total time – will likely be less. We can work with you to modify your workflows to easily identify and rerun the failed jobs.


Ex 5 (Good):

You run several single-core jobs without explicitly requesting any memory allocation (i.e., your script has no “#SBATCH --mem” line). The scheduler allocates 1 GiB of memory per job by default, but the jobs use only ~100 MiB (roughly 0.1 GiB) of memory each.

There is no need to optimize further. By default only 1 GiB is allocated per core, so you’re not wasting too much.


Ex 6 (Bad):

You run many 10-core jobs without explicitly requesting any memory allocation. The jobs use only ~100 MiB of memory each in total. Unlike the previous example, 10 GiB of memory will be allocated for each job (1 GiB per core). You should explicitly request a smaller amount of memory, to reduce your pending times and other users’ pending times.


Ex 7 (Bad):

Your jobs run for 10-30 minutes, but you just submit them with the short partition default time limit of 12 hours.

If you are running more than a few jobs, please reduce the requested time to an hour or so. Your jobs will probably pend for a shorter time, and you will help the scheduler run more efficiently.


Note: Reward QOS are not compatible with, and cannot be combined with, other custom priority QOS that might have been granted for special situations.