Getting Started

Overview
If you are new to high performance computing (HPC), we have resources to help you gain access to and use the HPC resources at the Advanced Research Computing Center (ARCC). If you are experienced with HPC and looking to get started at the University of Wyoming, you can jump directly to the Getting Connected pages.

This portion of the website contains basic information about HPC, high performance storage (HPS), and topics that are common across multiple high performance computing systems. For information specific to a particular UW cluster, please see the documentation in the ARCC Systems pages.

HPC Basics
Computer Cluster
A computer cluster consists of a set of interconnected computers that work together so that they can be viewed as a single system: in effect, a supercomputer.

Supercomputers
Supercomputer is a term that refers to a cluster of tightly coupled computer systems that performs parallel processing at a level above several teraflops (10¹² floating-point operations per second). Supercomputing, in contrast, refers to parallel processing systems that function at or near the current performance peak for computers; currently this equates to a petaflop (10¹⁵ floating-point operations per second) or greater.

High Performance Storage
High Performance Storage (HPS) systems are fast, scalable aggregates of storage media, usually consisting of several types of spinning disk and typically managed by Hierarchical Storage Management (HSM) software. HPS systems are designed to meet high performance demands on total storage capacity (in the petabyte (10¹⁵ bytes) to exabyte (10¹⁸ bytes) range), file sizes, aggregate data rates (greater than 1 gigabyte per second), and number of objects stored.

Usage Basics
Using an HPC system, or “cluster”, is different from running programs on your desktop. When you log in, you will be connected to one of the system’s “login nodes”. These nodes serve as a staging area where you marshal your data and submit jobs to the batch scheduler. Once submitted, your job waits in a queue along with other researchers’ jobs. When appropriate resources become available, the batch scheduler runs your job on a subset of “compute nodes” that meet the requirements specified in your job submission script. This overall structure is shown in the diagram below, with the green line representing a hypothetical path for your job from submission to running:

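To make this concrete, below is a minimal sketch of what a job submission script might look like, assuming a cluster that uses the SLURM batch scheduler; the job, account, and program names are placeholders, and the exact directives for a specific ARCC system are described in that system's documentation.

    #!/bin/bash
    # Minimal batch job sketch (assumes a SLURM scheduler; names are placeholders).
    #SBATCH --job-name=example_job     # name shown in the queue
    #SBATCH --account=myproject        # project/allocation to charge (placeholder)
    #SBATCH --nodes=1                  # number of compute nodes
    #SBATCH --ntasks=8                 # number of cores/tasks requested
    #SBATCH --time=01:00:00            # wall-clock limit (hh:mm:ss)

    # Run the program from the directory the job was submitted from.
    cd "$SLURM_SUBMIT_DIR"
    srun ./my_program input.dat        # placeholder executable and input file

Under SLURM you would submit this script with "sbatch myjob.sh" and check its place in the queue with "squeue -u $USER"; other batch schedulers use different commands, so consult the documentation for your cluster.
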
HPC Etiquette
The diagram above represents a common HPC architecture. When you connect to a login node (via ssh), you are sharing that node with many other researchers. Each login session consumes a portion of the node's resources (CPU cycles, memory, network bandwidth, etc.). Resources are also shared when you access the file servers for your home or project directories.
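
For example, opening a login session typically looks like the following from a terminal on your own machine; the hostname here is a placeholder, and the actual login node address for each cluster is listed in the ARCC Systems pages.

    # Open a login session on a cluster login node with your UW username.
    # The hostname below is a placeholder, not a real ARCC address.
    ssh your_username@<login-node-address>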

While we have taken steps to mitigate the impact other users might have on your experience, there are still some precautions to take in order to be a good citizen in our user community:

  • The login nodes should not be used for computation, because computation ties up resources that every connected user depends on. In addition, login nodes may be configured differently from compute nodes, resulting in code that simply doesn't run or that returns false positives. This is especially true on Mount Moran.
  • Most ARCC clusters have attached storage pools (such as Bighorn), so I/O-intensive jobs should move their files from slower or remote storage to the relevant parallel filesystems. This can be done using the Globus data movement tools (reference?); a hedged command-line sketch appears after this list.
  • When running memory-intensive or potentially unstable jobs, we highly recommend requesting entire nodes so that your job does not impact other users of the same node.
  • If requesting a partial node, please consider the amount of memory available per core. For example, on an 8-core node with 64 GB of RAM, each core has 8 GB available to it. If you need more memory, request more cores; it is acceptable to leave cores idle in this situation, because memory is just as valuable as cores. A resource-request sketch follows this list.
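
As a hedged illustration of moving data with Globus, the sketch below uses the Globus command-line interface (the web interface is another option); the endpoint UUIDs and paths are placeholders for wherever your data actually lives.

    # Sketch of a Globus CLI transfer from slower/remote storage to a
    # parallel filesystem. Endpoint UUIDs and paths are placeholders.
    globus login                       # authenticate the CLI in a browser
    globus transfer --recursive \
        SOURCE_ENDPOINT_UUID:/path/to/input/data \
        DEST_ENDPOINT_UUID:/path/on/parallel/filesystem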
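
To make the memory-per-core arithmetic concrete, the sketch below shows SLURM-style resource requests matching the example above (an 8-core node with 64 GB of RAM, so 8 GB per core); this assumes a SLURM scheduler, and the partitions and limits on a given ARCC cluster may differ.

    # A job needing roughly 32 GB of memory on a node that provides
    # 8 GB per core should request 4 cores, even if it runs one process.
    #SBATCH --nodes=1
    #SBATCH --ntasks=4

    # For a memory-intensive or potentially unstable job, instead request
    # the whole node so that other users' jobs are not affected.
    #SBATCH --exclusive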

Like all HPC centers, we ask that users remember that our HPC systems are shared resources; please be considerate of other users on the systems. New and prospective users are strongly encouraged to read the Services and Policies pages, as failure to understand and comply may, in severe cases, result in denial of access or loss of allocation.

In general, if you think what you are doing, or what you want to try, might negatively impact other users, please contact us at ARCC-Info@uwyo.edu. We are happy to help you.