Scratch is a storage location on O2 for temporary data. In summary:
THE SCRATCH SPACE IS NOT BACKED UP!
Files are AUTOMATICALLY DELETED 30 days after they are last accessed.
There are NO SNAPSHOTS available for scratch3 folders.
O2 users can have up to 10 TiB and 1 million files/directories in their scratch folder.
Use this space for intermediate files in a longer workflow, or for output you can easily regenerate or afford to lose.
To reiterate: if you delete a file, on purpose or by accident, or simply leave it untouched for 30 days, it is GONE. You cannot get it back, and Research Computing cannot get it back for you. This model lets us provide cost-effective, high-performance storage to many more users than permanent, backed-up storage would allow, but it is important that you understand the risks of using it.
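Since deletion is driven by access time, standard `find` can flag files approaching the purge window. A minimal demonstration in a temporary directory (on O2 you would point `find` at your own scratch3 directory instead of the demo path):

```shell
# Demo in /tmp: create a file, backdate its access time, then use
# find -atime to list files not accessed in roughly the last 24 days.
# (Backdating here is only for the demo; touching atimes on scratch3 to
# dodge the purge is against policy, as noted above.)
demo=$(mktemp -d)
touch "$demo/old_result.txt"
touch -a -d "25 days ago" "$demo/old_result.txt"
find "$demo" -type f -atime +23     # prints files nearing the 30-day purge
rm -rf "$demo"
```

Files listed by this check should be copied to permanent storage if you still need them.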
Important - Artificially modifying file access times is against policy and may result in losing your access to scratch3.
A new scratch space is now available on O2, mounted at /n/scratch3.
Users cannot write files directly under /n/scratch3 itself. A large number of user directories have been pre-created on scratch3. If you do not already have a scratch3 directory, you must create one with the scratch3 directory-creation script (see "Create a User Scratch3 Directory" below).
**The old /n/scratch2 was retired on June 26, 2020.**
User scratch3 directories follow the pattern /n/scratch3/<first_eCommons_id_char>/<eCommons>, where "<first_eCommons_id_char>" is the first character of your eCommons ID and "<eCommons>" is your eCommons ID. For example, for an eCommons ID of zz999 the scratch3 directory will be at /n/scratch3/z/zz999.
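The path for any user can be derived in the shell. A short sketch, assuming the two-level <first_char>/<eCommons> layout described above:

```shell
# Build the scratch3 path from an eCommons ID (directory layout assumed
# from the pattern described above).
ecommons="zz999"
first_char="${ecommons:0:1}"        # bash substring expansion
scratch_dir="/n/scratch3/${first_char}/${ecommons}"
echo "$scratch_dir"                 # -> /n/scratch3/z/zz999
```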
Create a User Scratch3 Directory
To create your own scratch3 directory on O2, please run the following command from a login node (not an interactive job):
Note: As mentioned above, a large number of user directories have been pre-created on scratch3. If your scratch3 directory was pre-created and you run the script, it will inform you that the directory already exists.
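What the creation step amounts to can be sketched as follows. This is hypothetical: it uses a temporary stand-in for the /n/scratch3 base, and the real O2 script additionally handles quota accounting and the pre-created directories:

```shell
# Hypothetical sketch of creating a per-user scratch directory.
user="zz999"
base=$(mktemp -d)                   # stand-in for /n/scratch3 in this demo
dir="$base/${user:0:1}/$user"
mkdir -p "$dir"                     # first-character level, then user level
chmod 700 "$dir"                    # private to the owning user
ls -ld "$dir"
rm -rf "$base"
```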
Scratch3 Storage Usage
Storage usage is tracked per user rather than per group. To check your usage, please run the following command from a login node (not an interactive job):
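If the site quota tool is unavailable, `du` and `find` give a rough view of the two per-user limits (space and entry count). A generic sketch using a temporary directory, since the actual O2 quota command is site-specific and not reproduced here (`du` can be slow on large trees):

```shell
# Rough usage check against the two limits described above.
demo=$(mktemp -d)                   # stand-in for your scratch3 directory
dd if=/dev/zero of="$demo/data.bin" bs=1024 count=10 2>/dev/null
du -sh "$demo"                      # space used (10 TiB limit on scratch3)
find "$demo" | wc -l                # entries (1 million limit); here 2
rm -rf "$demo"
```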