Slurm environment variables are important for users of HPC clusters to understand: they provide useful information about job execution and allow customization of Slurm behavior.
Output Environment Variables:
These are set by Slurm for each job and describe the job's run-time environment. Some common output environment variables include SLURM_JOB_ID (the job's numeric ID), SLURM_JOB_NAME (the name given with --job-name), SLURM_JOB_NODELIST (the nodes allocated to the job), SLURM_NTASKS (the number of tasks), SLURM_CPUS_PER_TASK (the CPUs allocated per task), and SLURM_SUBMIT_DIR (the directory from which the job was submitted).
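As a quick sketch of how these variables can be used (the echo lines are illustrative only, and some variables are set only when the corresponding option was requested), the following could appear in the body of any batch script:
# Print Slurm-provided job information at the start of the job
echo "Job $SLURM_JOB_ID ($SLURM_JOB_NAME) running on: $SLURM_JOB_NODELIST"
echo "Tasks: $SLURM_NTASKS, CPUs per task: $SLURM_CPUS_PER_TASK"
echo "Submitted from: $SLURM_SUBMIT_DIR"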
Great Lakes High Performance Computing Cluster
To submit non-interactive jobs, you will first need to prepare a batch script, which lets you run your workflows when resources become available and submit multiple jobs at once. The batch script contains directives for resource requirements (such as CPUs, memory, and the runtime length of your job) along with the commands for running your program or workflow, making it easier to manage and organize your computational tasks on an HPC cluster.
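Once the script is written, it is submitted with sbatch; the filename my_job.sh below is just a placeholder:
# Submit the batch script; Slurm prints the assigned job ID
sbatch my_job.sh
# Check the status of your queued and running jobs
squeue --user=$USER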
Researchers are urged to acknowledge ARC in any publication, presentation, report, or proposal on research that involved ARC hardware (Great Lakes or other resources) and/or staff expertise.
“This research was supported in part through computational resources and services provided by Advanced Research Computing at the University of Michigan, Ann Arbor.”
Great Lakes Cluster Defaults
Cluster Default          Default Value
Default walltime         60 minutes
Default memory per CPU   768 MB
Default number of CPUs   1 core if no memory is specified; if memory is specified, memory/768 = number of cores (rounded down)
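For example, under these defaults a job that specifies 4 GB of memory but no CPU count would be assigned 4096 MB / 768 MB = 5.33, rounded down to 5 cores.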
Common Job Submission Options
This is the simplest case; a minimal batch script for it is sketched after the note below. The majority of software cannot use more than one processor. Some examples of software for which this would be the right configuration are SAS, Stata, R, many Python programs, and most Perl programs.
NOTE: If you will be using licensed software, for example, Stata, SAS, Abaqus, Ansys, etc., then you may need to request licenses. See the table of common submission options below for the syntax; in the Software section, we show the command to see which software requires you to request a license.
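A minimal sketch of a batch script for this one-node, one-processor case, following the same conventions as the examples below (JOBNAME is a placeholder):
#!/bin/bash
# Single-process job: one node, one task, one CPU
#SBATCH --job-name JOBNAME
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --mem-per-cpu=1g
#SBATCH --time=00:15:00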
This is similar to what a modern desktop or laptop is likely to have. Software that can use more than one processor may be described as multicore, multiprocessor, or multithreaded. Some examples of software that can benefit from this are MATLAB and Stata/MP. You should read the documentation for your software to see if this is one of its capabilities.
#!/bin/bash
# Multithreaded job: one node, one task, four CPUs
#SBATCH --job-name JOBNAME
#SBATCH --nodes=1
#SBATCH --cpus-per-task=4
#SBATCH --mem-per-cpu=1g
#SBATCH --time=00:15:00
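Many multithreaded programs do not detect their Slurm allocation automatically; a common pattern (an addition here, not part of the original example) is to pass the allocated CPU count to the program at the end of the script. For OpenMP-based software this looks like:
# Tell an OpenMP-based program how many threads it may use
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK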
This is the classic MPI approach: multiple nodes are requested, and MPI starts one process per processor on each node. This is the way most MPI-enabled software is written to work.
#!/bin/bash
# MPI job: 2 nodes x 4 tasks per node = 8 processes
#SBATCH --job-name JOBNAME
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH --mem-per-cpu=1g
#SBATCH --time=00:15:00
#SBATCH --account=test
#SBATCH --partition=standard
#SBATCH --mail-type=NONE

# Launch one copy of the command per task
srun --cpu-bind=none hostname -s
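The hostname -s command here only demonstrates where the tasks run; in a real job you would launch your MPI executable instead (the binary name below is hypothetical):
# Launch one copy of the MPI program per task (8 in this example)
srun ./my_mpi_program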
This is often referred to as the "hybrid mode" MPI approach: multiple nodes are requested, and on each node MPI starts one or more parent processes, each of which can in turn use more than one processor for threaded calculations.
#!/bin/bash
# Hybrid MPI job: 2 nodes x 4 tasks per node, 4 CPUs per task (32 CPUs total)
#SBATCH --job-name JOBNAME
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH --cpus-per-task=4
#SBATCH --mem-per-cpu=1g
#SBATCH --time=00:15:00
#SBATCH --account=test
#SBATCH --partition=standard
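The script above only requests resources; a common closing pattern (assumed here, with a hypothetical binary name) is to export the per-task CPU count for the threading runtime and then launch the MPI processes:
# Let each MPI process know how many threads it may use
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
# Start 8 MPI processes (2 nodes x 4 tasks), each able to run 4 threads
srun ./my_hybrid_program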
1. Get Duo
Duo two-factor authentication is required to access the majority of UM services and all HPC services. If you need to set up Duo, please visit this page.
2. Get a Great Lakes user login
You must establish a user login on Great Lakes by filling out this form.