CLUSTER DEFAULTS, PARTITION LIMITS, AND STORAGE

Great Lakes Cluster Defaults

Cluster Defaults                        Default Value
Default walltime                        60 minutes
Default memory per CPU                  768 MB
Default number of CPUs                  No memory specified: 1 core; memory specified: memory/768 MB = number of cores (rounded down)
/scratch file deletion policy           60 days without being accessed (see SCRATCH STORAGE POLICIES below)
/scratch quotas per root account        10 TB storage and 1 million inode limit (see SCRATCH STORAGE POLICIES below)
/home quota per user                    80 GB
Max queued jobs per user per account    5,000
Shell timeout if idle                   2 hours
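
The defaults above apply only when a job does not request resources explicitly. As an illustrative sketch (the account name, partition, and resource values below are placeholders, not recommendations), a Slurm batch script can override them like this:

#!/bin/bash
#SBATCH --job-name=defaults-demo
#SBATCH --account=example_account      # placeholder; use one of your own Slurm accounts
#SBATCH --partition=standard
#SBATCH --time=02:00:00                # walltime; otherwise the 60-minute default applies
#SBATCH --cpus-per-task=4              # number of CPUs; otherwise 1 core (or memory/768 MB)
#SBATCH --mem-per-cpu=2g               # memory per CPU; otherwise the 768 MB default applies

srun ./my_program                      # placeholder executable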

Great Lakes Partition Limits

Partition Limit                         standard | gpu | gpu_mig40 | spgpu | largemem | standard-oc | viz* | debug*
Max walltime                            2 weeks (viz: 2 hours; debug: 4 hours)
Max running Mem per root account        3,500 GB | 1.5 TB | 660 GB | 3,500 GB | 40 GB
Max running CPUs per root account       500 cores | 36 cores | 132 cores | 500 cores | 8 cores
Max running GPUs per root account       n/a | n/a | n/a | 5 | n/a

*There is only one GPU in each viz node, and viz nodes are only accessible through Open OnDemand’s Remote Desktop application.

*All debug limits are per user, and only one debug job can run at a time. The largemem and standard-oc limits are per account.

Please see the section on Partition Policies for more details.
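
If you want to confirm a partition's configured walltime limit from the command line, Slurm can report it directly; "standard" below is just an example partition name, and the per-account CPU, memory, and GPU caps are typically enforced through Slurm accounting (associations/QOS) rather than the partition record itself:

scontrol show partition standard    # shows MaxTime and other partition settings
sinfo --summarize                   # one-line overview of each partition and its node states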

Great Lakes Storage

Every user has a /scratch directory for every Slurm account they are a member of.  Additionally, each account has a shared data directory for collaboration among its members.  Group ownership of the account directory is set using the Slurm account-based UNIX group, so all files created in the /scratch directory are accessible by any group member, which facilitates collaboration.

Example:
/scratch/msbritt_root/msbritt1
/scratch/msbritt_root/msbritt1/bob
/scratch/msbritt_root/msbritt1/shared_data
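
As a sketch using the example account above, you can verify the group ownership and place files in the shared directory so that other members of the account can read them (the file name is a placeholder):

id -Gn                                                        # list the UNIX groups you belong to
ls -ld /scratch/msbritt_root/msbritt1                         # shows the account-based group ownership
cp results.csv /scratch/msbritt_root/msbritt1/shared_data/    # placeholder file copied for collaborators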

Please note that the numeric suffixes on your Slurm accounts represent different funding sources: 0 --> UMRCP, 1 --> Research Allocation X, 2 --> Research Allocation Y, etc.  So msbritt0 would represent the UMRCP and msbritt1 is a paid account.
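
If you are unsure which Slurm accounts you belong to (and therefore which funding source a job will charge), one way to check, assuming standard Slurm accounting commands are available, is:

sacctmgr show associations user=$USER format=Account -n    # lists the accounts you can submit under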

Please see the section on Storage Policies for more details.