Compute cluster overview
When you SSH into stampede, your session is assigned to one of a small set of login nodes (also called head nodes). These are not the compute nodes that will run your jobs.
Think of a node as a computer, like your laptop, but probably with more cores and memory. Now multiply that computer by a thousand or more, and you have a cluster.
The login nodes are a shared resource (type the users command to see everyone currently logged in) and are not meant for running interactive programs. Instead, you submit a description of the work you want done to a batch system, which farms it out to one or more compute nodes.
On the other hand, the login nodes are intended for copying files to and from TACC, so they have a lot of network bandwidth, while compute nodes have limited network bandwidth. So follow these guidelines:
- Never run substantial computation on the login nodes. They are closely monitored, and you will get warnings from the TACC admin folks!
- Code is usually developed and tested somewhere other than TACC, and only moved over when pretty solid.
- Do not perform significant network access from your batch jobs. Instead, stage your data onto $SCRATCH from a login node before submitting your job.
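The staging guideline above might look like this in practice. This is only an illustrative sketch: the directory names and input file are made up, and $SCRATCH is an environment variable that TACC systems predefine for you.

```shell
# Hypothetical staging example run from a login node (paths are illustrative).
SCRATCH=${SCRATCH:-/tmp/scratch_demo}   # already set for you on TACC systems
mkdir -p "$SCRATCH/my_run"
echo "sample data" > input.txt          # stand-in for a real dataset
cp input.txt "$SCRATCH/my_run/"         # stage input onto $SCRATCH
```

With the data staged, the batch job can read everything it needs from $SCRATCH without touching the network.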
Stampede and Lonestar overview and comparison
Here is a comparison of the configurations at stampede and lonestar. As you can see, stampede is the larger cluster (and newer, just launched last year).
| | lonestar | stampede |
|---|---|---|
| login nodes | 2 (12 cores each) | 6 (16 cores each) |
| standard compute nodes | 1,088 (12 cores each) | 6,400 (16 cores each) |
| large memory nodes | 5 (24 cores each) | 16 (32 cores each) |
| batch system | SGE | SLURM |
| maximum job run time | 24 hours | 48 hours |
User guides can be found at:
- https://portal.tacc.utexas.edu/user-guides/lonestar
- https://portal.tacc.utexas.edu/user-guides/stampede
Unfortunately, the TACC user guides are aimed at a different user community – the weather modelers and aerodynamic flow simulators who need very fast matrix manipulation and other high performance computing (HPC) features. The usage patterns for bioinformatics – generally running 3rd party tools on many different datasets – are rather a special case for HPC.
Job Execution
Job execution is controlled by the SLURM batch system on stampede.
To run a job you prepare two files:
- a file of commands to run, one command per line (<job_name>.cmds)
- a job control file that describes how to run the job (<job_name>.slurm)
The process of running the job then involves these steps:
- You submit the job control file to the batch system. The job is then said to be queued to run.
- The batch system prioritizes the job based on the number of compute nodes needed and the job run time requested.
- When compute nodes become available, the job's tasks (command lines in the <job_name>.cmds file) are assigned to one or more compute nodes and begin to run in parallel.
- The job completes when either:
- you cancel the job manually
- all tasks in the job complete (successfully or not!)
- the requested job run time has expired
Simple example
Let's go through a simple example using a pre-made simple.cmds commands file. Copy it into your current directory.
What are the tasks we want to do? Each task corresponds to a line in the simple.cmds file, so let's take a look at it using the cat (concatenate) command that simply reads a file and writes each line of content to standard output (the terminal):
cat simple.cmds
The tasks we want to perform look like this:
echo "Command 1 `date`" > cmd1.log 2>&1
echo "Command 2 `date`" > cmd2.log 2>&1
echo "Command 3 `date`" > cmd3.log 2>&1
echo "Command 4 `date`" > cmd4.log 2>&1
echo "Command 5 `date`" > cmd5.log 2>&1
echo "Command 6 `date`" > cmd6.log 2>&1
There are 6 tasks. Each is a simple echo command that just writes a string containing the task number and date to a different file.
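A commands file like this can also be generated with a short bash loop instead of being typed by hand. This is just an illustrative sketch, not part of the exercise:

```shell
# Sketch: regenerate a simple.cmds file, one echo task per line.
# The single-quoted printf format keeps the backticks literal, so `date`
# is written into the file and only runs later, when the task executes.
for i in 1 2 3 4 5 6; do
  printf 'echo "Command %d `date`" > cmd%d.log 2>&1\n' "$i" "$i"
done > simple.cmds
cat simple.cmds
```

Each of the 6 lines in the resulting file is an independent task that the launcher can hand to any core.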
Use the handy launcher_creator.py program to create the job submission script.
launcher_creator.py -n simple -j simple.cmds -t 00:05:00
You should see output something like the following, and you should see a simple.slurm batch submission file in the current directory.
Project simple.
Using job file simple.cmds.
Using development queue.
For 00:05:00 time.
Using genomeAnalysis allocation.
Not sending start/stop email.
Using 1 nodes.
Writing to simple.slurm.
Launcher successfully created. Type "sbatch simple.slurm" to queue your job.
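The generated simple.slurm file is essentially a shell script whose #SBATCH comment lines tell SLURM how to schedule the job. The following is only a hedged sketch of what such a file might contain, based on the output above; the actual file written by launcher_creator.py may use different directives and will also include the launcher setup that runs your commands file.

```shell
#!/bin/bash
# Illustrative sketch only; the real simple.slurm may differ.
#SBATCH -J simple             # job name
#SBATCH -p development        # queue (partition) to run in
#SBATCH -N 1                  # number of nodes
#SBATCH -n 16                 # total tasks (assumes 16 cores per stampede node)
#SBATCH -t 00:05:00           # maximum run time
#SBATCH -A genomeAnalysis     # allocation to charge
```

Because these directives live in comments, the file is still a valid shell script; SLURM reads the #SBATCH lines when you sbatch it.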
SLURM vs SGE
The batch system on lonestar has slightly different commands and filenames, but the functionality is equivalent.
| | lonestar | stampede |
|---|---|---|
| batch system | SGE | SLURM |
| batch control file name | <job_name>.sge | <job_name>.slurm |
| job submission command | qsub <job_name>.sge | sbatch <job_name>.slurm |
| job monitoring command | qstat | showq -u |
| job stop command | qdel <job name or id> | scancel -n <job name> |
A closer look at the simple commands
Let's take a closer look at a typical task in the simple.cmds file.
echo "Command 3 `date`" > cmd3.log 2>&1
echo is like a print statement in the bash shell.
Job parameters
queues
wayness
launcher_maker.py details
wayness example
Software at TACC
The module system
Adding to your $PATH
$PATH caveat