Partition Policies
Slurm partitions represent collections of nodes for a computational purpose and are the equivalent of Torque queues.
Each PI’s standard compute nodes are identified by the PI’s uniqname and have a maximum job walltime of 14 days (this can be increased to up to 4 weeks at the PI’s request; ARC reserves the right to interrupt jobs for maintenance purposes).
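To see the walltime limit and nodes for your partition, you can query Slurm directly from a login node. This is a minimal sketch; msbritt is a placeholder partition name standing in for your PI’s uniqname:

```bash
# List the partitions visible to you, with their time limits, node counts, and node names
sinfo --format="%P %l %D %N"

# Show the full configuration (MaxTime, node list, defaults) of one PI partition;
# "msbritt" is a placeholder -- substitute your PI's uniqname
scontrol show partition msbritt
```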
Account/Association Limits
Limits can be set on a Slurm association or on a Slurm account, which allows a PI to limit individual users or the collective set of users in an account as the PI sees fit. The following limits are currently available on Lighthouse and can be applied to either an account or a user association, unless noted otherwise:
- MaxJobs
  - Maximum number of jobs allowed to run at one time
  - Account example: testaccount can have 10 simultaneously running jobs (testuser1 has 8 running jobs and testuser2 has 2 running jobs, for a total of 10)
  - Association example: testuser can have 2 simultaneously running jobs
- MaxWall
  - Maximum duration of a single job
  - Account example: all users on testaccount can run jobs for up to 3 days
  - Association example: testuser’s jobs can run for up to 3 days
- MaxTRES (CPU, Memory, or GPU)
  - Maximum number of TRES that running jobs can simultaneously use
  - NOTE: CPU, Memory, and GPU can also be limited on a user’s individual job
  - Account example: testaccount’s running jobs can collectively use up to 5 GPUs (testuser1’s jobs are using 3 GPUs and testuser2’s jobs are using 2 GPUs, for a total of 5)
  - Association example: testuser’s running jobs can collectively use up to 10 cores
  - Job example: testuser can run a single job using up to 10 cores (see the example batch script after this list)
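To illustrate the job-level limits above, the batch script below requests resources that stay within the example limits. This is a minimal sketch; the account, partition, program name, and resource sizes are placeholders, not actual Lighthouse settings:

```bash
#!/bin/bash
# Minimal sketch of a batch script; account, partition, and resource
# requests are placeholders -- substitute your own values.
#SBATCH --job-name=limit-demo
#SBATCH --account=testaccount   # Slurm account the job runs under
#SBATCH --partition=msbritt     # PI partition (named after the PI's uniqname)
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=10      # stays within a 10-core per-job CPU limit
#SBATCH --mem=40g               # memory is another TRES that can be limited
#SBATCH --gres=gpu:1            # request 1 GPU (only if your nodes have GPUs)
#SBATCH --time=3-00:00:00       # 3 days, within a 3-day MaxWall

srun my_program                 # my_program is a placeholder executable
```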
Please contact ARC if you would like to implement any of these limits.
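To check which limits are currently applied to your own association or account, you can query the Slurm accounting database from a login node. This is a sketch of a generic query that works on most Slurm installations; testaccount is a placeholder:

```bash
# Show the limits on your own associations
sacctmgr show association user=$USER \
    format=Account,User,Partition,MaxJobs,MaxWall,MaxTRES

# Show the limits set at the account level (testaccount is a placeholder)
sacctmgr show association account=testaccount \
    format=Account,User,MaxJobs,MaxWall,MaxTRES,GrpTRES
```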
Terms of Usage and User Responsibility
- Data is not backed up. None of the data on Lighthouse is backed up. The data that you keep in your home directory, /tmp, or any other filesystem is exposed to immediate and permanent loss at all times. You are responsible for mitigating your own risk. ARC provides more durable storage on Turbo, Locker, and Data Den; see the ARC storage services documentation for more information, and the copy example after this list.
- Your usage is tracked and may be used for reports. We track a lot of job data and store it for a long time. We use this data to generate usage reports and look at patterns and trends. We may report this data, including your individual data, to your adviser, department head, dean, or other administrator or supervisor.
- Maintaining the overall stability of the system is paramount to us. While we make every effort to ensure that every job completes in the most efficient and accurate way possible, the stability of the cluster is our primary concern. This may affect you, but mostly we hope it benefits you. System availability is based on our best efforts. We are staffed to provide support during normal business hours. We try very hard to provide support as broadly as possible, but cannot guarantee support 24 hours a day. Additionally, we perform system maintenance on a periodic basis, driven by the availability of software updates, staffing availability, and input from the user community. We do our best to schedule around your needs, but there will be times when the system is unavailable. Scheduled outages are announced at least one month in advance on the ARC home page; unscheduled outages are announced there as quickly as we can, with as much detail as we have. You can also track ARC on Twitter (@umichARC).
- Lighthouse is intended only for non-commercial, academic research and instruction. Commercial use of some of the software on Lighthouse is prohibited by software licensing terms. Prohibited uses include product development or validation, any service for which a fee is charged, and, in some cases, research involving proprietary data that will not be made available publicly. Please contact [email protected] if you have any questions about this policy, or about whether your work may violate these terms.
- You are responsible for the security of sensitive codes and data. If you will be storing export-controlled or other sensitive or secure software, libraries, or data on the cluster, it is your responsibility to ensure that it is secured to the standards set by the most restrictive governing rules. We cannot reasonably monitor everything that is installed on the cluster and cannot be responsible for it; the responsibility rests with you, the end user.
- Data subject to HIPAA regulations may not be stored or processed on the cluster.
- For more information on HIPAA, see the ITS Guide
- For questions about Protected Health Information (PHI), contact Michigan Medicine Corporate Compliance at [email protected].
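As mentioned in the first item of this list, anything you cannot afford to lose should be copied off the cluster’s non-backed-up filesystems. A minimal sketch, assuming a Turbo volume is mounted on the cluster; both paths below are placeholders, not real volumes or directories:

```bash
# Copy finished results from scratch to a mounted Turbo volume.
# Both paths are placeholders -- substitute your own scratch directory
# and your Turbo (or Locker) volume.
rsync -av --progress \
    /scratch/msbritt_root/msbritt/$USER/results/ \
    /nfs/turbo/example-volume/results/
```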
User Responsibilities
Users must manage data appropriately in their various locations:
- /home
- /scratch (more information below)
- /tmp
- customer-provided NFS
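Part of managing these locations is knowing how much space you are using in each. A minimal sketch with standard tools; the scratch path is a placeholder following the layout described in the next section:

```bash
# Summarize usage in each location you are responsible for
du -sh ~                                    # home directory usage
du -sh /scratch/msbritt_root/msbritt/$USER  # placeholder scratch path
df -h /tmp                                  # /tmp is local to each node and shared with other jobs
```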
Scratch Storage Policies
Every user has a /scratch directory for each Slurm account they are a member of. Additionally, each account has a shared data directory for collaboration among its members. Group ownership of the account directory is set using the Slurm account-based UNIX group, so all files created under the /scratch directory are accessible by any group member, which facilitates collaboration.
Example:
- /scratch/msbritt_root/msbritt
- /scratch/msbritt_root/msbritt/bob
- /scratch/msbritt_root/msbritt/shared_data
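You can confirm the group-based access described above with standard tools; msbritt is the placeholder account from the example paths:

```bash
# List the UNIX groups you belong to (Slurm account groups appear here)
id -Gn

# Check ownership and permissions of the account and shared_data directories
ls -ld /scratch/msbritt_root/msbritt /scratch/msbritt_root/msbritt/shared_data
```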
Users are able to use /scratch with a size quota of 10 TB and an auto-purge policy: any file that has not been accessed for 60 days is automatically deleted by the system. Scratch file systems are not backed up, so critical files should be backed up to another location.
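To see which of your files are approaching the purge window, you can search for anything not accessed in the last 60 days; the path below is the placeholder from the example above:

```bash
# List files under your scratch directory that have not been accessed in 60+ days
# (substitute your own account and directory for the placeholder path)
find /scratch/msbritt_root/msbritt/$USER -type f -atime +60 -ls
```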
If you need more scratch space for your account, please email us at [email protected]. Note that these requests must come from an administrator on the account and should include an explanation of why the increase is required.
Security on Lighthouse & Use of Sensitive Data
The Lighthouse high-performance computing system at the University of Michigan has the same security stance as the Great Lakes cluster.
Applications and data are protected by secure physical facilities and infrastructure as well as a variety of network and security monitoring systems. These systems provide basic but important security measures including:
- Secure access – All access to Lighthouse is via SSH or Globus. SSH has a long history of secure use (a connection example follows this list).
- Built-in firewalls – All of the Lighthouse computers have firewalls that restrict access to only what is needed.
- Unique users – Lighthouse adheres to the University guideline of one person per login ID and one login ID per person.
- Multi-factor authentication (MFA) – For all interactive sessions, Lighthouse requires both a U-M Kerberos password and Duo authentication. File transfer sessions require a Kerberos password.
- Private Subnets – Other than the login and file transfer computers that are part of Lighthouse, all of the computers are on a network that is private within the University network and are unreachable from the Internet.
- Flexible data storage – Researchers can control the security of their own data storage by securing their storage as they require and having it mounted via NFSv3 or NFSv4 on Lighthouse. Another option is to make use of Lighthouse’s local scratch storage, which is considered secure for many types of data. Note: Lighthouse is not considered secure for data covered by HIPAA.
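As noted in the first item of this list, interactive access is over SSH with Duo. A minimal connection sketch, assuming the login host is lighthouse.arc-ts.umich.edu (check the ARC documentation for the current hostname) and using uniqname as a placeholder:

```bash
# Connect to a Lighthouse login node; you will be prompted for your Kerberos
# password and then for Duo (hostname assumed; uniqname is a placeholder)
ssh [email protected]

# Copy a file to your home directory over SSH (Kerberos password only, per the MFA note above)
scp results.tar.gz [email protected]:~/
```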