
Compute cluster overview

When you SSH into stampede, your session is assigned to one of a small set of login nodes (also called head nodes). These are not the compute nodes that will run your jobs.

Think of a node as a computer, like your laptop, but probably with more cores and memory. Now multiply that computer by a thousand or more, and you have a cluster.

The login nodes are a shared resource (type the users command to see everyone currently logged in) and are not meant for running interactive programs. Instead, you submit a description of the work you want done to a batch system, which farms the work out to one or more compute nodes.

On the other hand, the login nodes are intended for copying files to and from TACC, so they have a lot of network bandwidth, while compute nodes have limited network bandwidth. Follow these guidelines:

  • Never run substantial computation on the login nodes. They are closely monitored, and you will get warnings from the TACC admin folks!
    • Code is usually developed and tested somewhere other than TACC, and only moved over when pretty solid.
  • Do not perform significant network access from your batch jobs. Instead, stage your data onto $SCRATCH from a login node before submitting your job (see the example just below).
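
For example, you might stage an input file onto $SCRATCH from a login node like this (the remote host, user, and file names here are just hypothetical placeholders):

mkdir -p $SCRATCH/my_project
scp my_user@remote.host.org:/data/sample.fastq.gz $SCRATCH/my_project/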

Stampede and Lonestar overview and comparison

Here is a comparison of the configurations at stampede and lonestar. As you can see, stampede is the larger cluster (and newer, just launched last year).

                         lonestar                        stampede
login nodes              2 (12 cores, 24 GB each)        6 (16 cores, 32 GB each)
standard compute nodes   1,088 (12 cores, 24 GB each)    6,400 (16 cores, 32 GB each)
large memory nodes       5 (24 cores, 1 TB each)         16 (32 cores, 1 TB each)
batch system             SGE                             SLURM
maximum job run time     24 hours                        48 hours

User guides for lonestar and stampede can be found on the TACC web site.

Unfortunately, the TACC user guides are aimed towards a different user community – the weather modelers and aerodynamic flow simulators who need very fast matrix manipulation and other high performance computing (HPC) features. The usage patterns for bioinformatics – generally running 3rd party tools on many different datasets – are rather a special case for HPC.

Job Execution

Job execution is controlled by the SLURM batch system on stampede.

To run a job you prepare 2 files:

  1. a file of commands to run, one command per line (<job_name>.cmds)
  2. a job control file that describes how to run the job (<job_name>.slurm) – a sketch of this file is shown just below
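
You normally won't write the job control file by hand – the launcher_creator.py utility described below generates it for you – but the SLURM directives at the top of a <job_name>.slurm file look something like this (a sketch only; the generated file also contains the lines that start up the TACC launcher):

#!/bin/bash
#SBATCH -J simple           # job name
#SBATCH -o simple.o%j       # output/error file name (%j expands to the job ID)
#SBATCH -p development      # queue (partition) to submit to
#SBATCH -N 1                # number of nodes requested
#SBATCH -n 16               # total number of tasks
#SBATCH -t 00:05:00         # maximum run time (hh:mm:ss)
#SBATCH -A genomeAnalysis   # allocation (project) to charge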

The process of running the job then involves these steps:

  1. You submit the job control file to the batch system. The job is then said to be queued to run.
  2. The batch system prioritizes the job based on the number of compute nodes needed and the job run time requested.
  3. When compute nodes become available, the job tasks (command lines in the <job_name>.cmds file) are assigned to one or more compute nodes and begin to run in parallel.
  4. The job completes when one of the following occurs:
    1. you cancel the job manually
    2. all tasks in the job complete (successfully or not!)
    3. the requested job run time has expired

Simple example

Let's go through a simple example. First copy the pre-made simple.cmds commands file into your current directory (the path below is a placeholder for wherever the course files are kept):

cp <course_files_directory>/simple.cmds .

What are the tasks we want to do? Each task corresponds to a line in the simple.cmds file, so let's take a look at it using the cat (concatenate) command that simply reads a file and writes each line of content to standard output (the terminal):

cat simple.cmds

The tasks we want to perform look like this:

echo "Command 1 `date`" > cmd1.log 2>&1
echo "Command 2 `date`" > cmd2.log 2>&1
echo "Command 3 `date`" > cmd3.log 2>&1
echo "Command 4 `date`" > cmd4.log 2>&1
echo "Command 5 `date`" > cmd5.log 2>&1
echo "Command 6 `date`" > cmd5.log 2>&1

There are 6 tasks. Each is a simple echo command that just writes a string containing the task number and date to a different file.

Use the handy launcher_creator.py program to create the job submission script.

launcher_creator.py -n simple -j simple.cmds -t 00:05:00

You should see output something like the following, and a simple.slurm batch submission file should appear in the current directory.

Project simple.
Using job file simple.cmds.
Using development queue.
For 00:05:00 time.
Using genomeAnalysis allocation.
Not sending start/stop email.
Using 1 nodes.
Writing to simple.slurm.
Launcher successfully created. Type "sbatch simple.slurm" to queue your job.
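
Once you have the batch file, submitting the job, checking on it, and cancelling it (if need be) uses the SLURM commands summarized in the table below:

sbatch simple.slurm    # submit the batch file; the job is now queued
showq -u               # list your queued and running jobs
scancel -n simple      # cancel the job by name, if you need to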

SLURM vs SGE

The batch system on lonestar has slightly different commands and filenames, but the functionality is equivalent.

                          lonestar                 stampede
batch system              SGE                      SLURM
batch control file name   <job_name>.sge           <job_name>.slurm
job submission command    qsub <job_name>.sge      sbatch <job_name>.slurm
job monitoring command    qstat                    showq -u
job stop command          qdel <job name or id>    scancel -n <job name>

A closer look at the simple commands

Let's take a closer look at a typical task in the simple.cmds file.

echo "Command 3 `date`" > cmd3.log 2>&1

echo is like a print statement in the bash shell – it just writes its arguments to standard output.
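
The rest of the task line is doing some work too. Here is the same command again with each piece annotated (these are standard bash features, nothing specific to TACC):

# `date` is command substitution – the date command runs and its output
#   is inserted into the string that echo prints
# > cmd3.log redirects echo's standard output into the file cmd3.log
# 2>&1 sends standard error (stream 2) to the same place as standard
#   output (stream 1), so any error messages also end up in cmd3.log
echo "Command 3 `date`" > cmd3.log 2>&1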

Job parameters

queues

wayness

launcher_creator.py details

wayness example

Software at TACC

The module system

Adding to your $PATH

$PATH caveat

 

 
