Please note: GPUs are not available with RStudio at this time.
After choosing RStudio from the Interactive Apps menu, you'll need to specify your account, hours, cores, partition (standard or largemem), and memory (2 GB minimum). You also need to choose the version of RStudio you want to use from a drop-down list of choices.
Great Lakes High Performance Computing Cluster
Slurm is a combined batch scheduler, resource manager, and billing system. Users with a login to the High Performance Computing clusters run their jobs under a Slurm account, which is billed for the resources the jobs consume.
Spark and PySpark are available via the Jupyter + Spark Basic and Jupyter + Spark Advanced interactive applications on Open OnDemand for Armis2, Great Lakes, and Lighthouse.
The Basic application provides a starter Spark cluster of 16 CPU cores and 90 GB of RAM with a one-day (24-hour) walltime limit, designed for beginner Spark users. This is especially useful for Python novices who have not spent time customizing their environment, as well as for newcomers to the Spark ecosystem.
The sbatch command is used to submit a batch script to Slurm. Slurm will reject the job at submission time if there are requests or constraints that Slurm cannot fulfill as specified. This gives the user the opportunity to examine the job request and resubmit it with the necessary corrections.
To submit a batch script, while logged into the cluster (i.e. ssh to the cluster and be on a login node), simply run: sbatch <jobScriptFile>
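As a sketch, a minimal batch script suitable for sbatch might look like the following. The job name, account, and resource values here are illustrative placeholders, not real defaults; substitute the Slurm account and partition appropriate for your allocation.

```shell
#!/bin/bash
# Hypothetical example batch script; all values below are placeholders.
#SBATCH --job-name=example_job
#SBATCH --account=example_account    # your Slurm account (billed for usage)
#SBATCH --partition=standard         # standard or largemem
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=1
#SBATCH --mem=2g                     # 2 GB minimum
#SBATCH --time=01:00:00              # walltime hh:mm:ss

# Commands the job will run on the compute node:
echo "Running on $(hostname)"
```

If the request can be fulfilled, sbatch prints the assigned job ID; otherwise Slurm rejects the job at submission time as described above.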
Known Issues
Moving items into the trash does not remove the data from your home directory. The data is stored in ./local/Trash and has to be removed manually, either by emptying the trash or through the command line.