
In order to keep filesystems from filling up and disrupting work, we use filesystem quotas to limit usage in certain areas by user or group. This also helps us observe growth in disk usage over time so we can plan future expansion.

By default, the filesystem quotas are as follows:

filesystem        quota (maximum total        file limit (maximum number
                  data size allowed)          of files allowed)
--------------    ------------------------    ------------------------------
/home             100 GiB per user            none
/n/groups         varies by lab/group         none
/n/data#          varies by lab/group         none
/n/scratch3       10 TiB per user             1,000,000 files or directories
/n/scratch_gpu    15 TiB per user*            none

* scratch_gpu is only available for labs whose PI has a primary or secondary appointment in a pre-clinical HMS department.

Checking Usage

You can use the quota and du commands to check filesystem usage.

Usage by User and Group

The quota command on O2 shows your usage, and the usage of groups you belong to, for any directories accessible on O2 that have quotas imposed.

Type quota at the command prompt on any O2 system. The output will look something like:

...

For data on /n/scratch3, you need to use the scratch3_quota.sh command:

$ /n/cluster/bin/scratch3_quota.sh
Directory: /n/scratch3/users/m/mfk8
Space used: 0TiB used of 10TiB
Files/directories: 1256 of 1000000

...

For more information on scratch3, please refer to the dedicated scratch3 wiki page.

scratch_gpu quotas

Research Computing is no longer providing the /n/scratch_gpu filesystem. Please use /n/scratch3 instead.

Usage by Directory

Another way to check usage is to total the size of files in a directory using the du command. For example, you might want to see how much space your sub-directory in your group's shared directory is consuming:

  • To check the size of a directory (e.g. /n/groups/smith/mydirectory ):

    • Run the command: du --apparent-size -hs /n/groups/smith/mydirectory

    • The output returned is the total size.

  • Note that du can take quite some time for directories containing large numbers (tens of thousands or more) of files, because it must check the size of every file to compute the total. In general, it is better to use quota to find usage information, when possible, or at least to run du on sub-directories instead of top-level directories.

  • The --apparent-size option is required to find files' actual sizes. Without this option, the reported size will include data protection overhead (redundant copies of data on the O2 file server, which protects against hard drive failures).

  • Please do not run du from a login node. Long-running and computationally intensive processes will be killed on login nodes. To ensure that your command for checking directory usage is not interrupted, please run du from a compute node instead. You can use the srun --pty command to start an interactive job, and then run du once you have been allocated resources. More information on running SLURM jobs can be found on the Using Slurm Basic wiki page.
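The steps above can be sketched as a short session. This is an illustrative example, not O2-specific output: the demo directory below stands in for a real path such as /n/groups/smith/mydirectory, and the interactive-job step is shown only as a comment since partition names and limits vary.

```shell
# Run this from a compute node, e.g. inside an interactive job started with
# `srun --pty` (exact partition and time options depend on your cluster).
demo=/tmp/du_demo                       # stand-in for a real group sub-directory
mkdir -p "$demo"
printf 'example contents' > "$demo/example.txt"
# --apparent-size reports actual file sizes, excluding data-protection
# overhead (GNU du); -h is human-readable, -s prints one summary line.
du --apparent-size -hs "$demo"
rm -rf "$demo"                          # clean up the demo directory
```

Running du on a small sub-directory like this returns quickly; the larger the file count, the longer it takes, which is why quota is preferred when available.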

...

You can verify that you are over quota by running the quota command. If you see an ! at the end of a line of output, then it means you have hit or exceeded a limit.
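You can also spot that marker mechanically. The sample line below is made up for demonstration; the real column layout of quota output may differ, but the trailing ! is the part that matters.

```shell
# Made-up line in the general style of quota output -- not real O2 output.
# The trailing '!' marks a filesystem at or over a limit.
sample='/home  102400*  102400  102400  !'
if printf '%s\n' "$sample" | grep -q '!$'; then
    echo "at or over a quota limit"
fi
```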

...

Use the commands above to confirm that you are above your quota, and delete data as needed so that you can write new files again.

Note that the quota command results are only updated hourly. If you were writing files very rapidly, the quota command might not show a completely full quota. Also, deleting files won't immediately change the results from that command. If you delete 5 GiB of files, you should be able to write 5 GiB of new files in that location immediately, even if quota hasn't caught up yet.

You can delete a whole directory with a command like rm -rf dir. Please be careful when using a command like this: you could delete all of your files!
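A cautious pattern, sketched below with an illustrative throwaway directory, is to review a directory's contents and size before removing it:

```shell
dir=/tmp/cleanup_demo                 # illustrative path; substitute your own
mkdir -p "$dir"
printf 'old results' > "$dir/old.txt"
ls -la "$dir"                         # 1. review what would be deleted
du --apparent-size -hs "$dir"         # 2. confirm how much space this frees
rm -rf "$dir"                         # 3. delete only once you are sure
```

Reviewing first costs a few seconds and greatly reduces the chance of pointing rm -rf at the wrong path.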

...