
Compute cluster overview

When you SSH into stampede, your session is assigned to one of a small set of login nodes (also called head nodes). These are not the compute nodes that will run your jobs.

Think of a node as a computer, like your laptop, but probably with more cores and memory. Now multiply that computer by a thousand or more, and you have a cluster.

The login nodes are a shared resource (type the users command to see everyone currently logged in) and are not meant for running interactive programs. For that you submit a description of what you want done to a batch system, which farms the work out to one or more compute nodes.

The login nodes, on the other hand, are intended for copying files to and from TACC, so they have plenty of network bandwidth, while compute nodes have limited network bandwidth. So follow these guidelines:

  • Never run substantial computation on the login nodes. They are closely monitored, and you will get warnings from the TACC admin folks!
    • Code is usually developed and tested somewhere other than TACC, and only moved over when pretty solid.
  • Do not perform significant network access from your batch jobs. Instead, stage your data onto $SCRATCH from a login node before submitting your job (see the example below).
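
For example, here is a minimal sketch of staging input data onto $SCRATCH while on a login node. The directory names, file name, and URL here are made up for illustration:

# run these on a login node, before submitting your job
mkdir -p $SCRATCH/my_run/data
# copy data already at TACC (e.g. in your $WORK area) onto $SCRATCH
cp $WORK/archive/sample1.fastq.gz $SCRATCH/my_run/data/
# or download it here -- network access is fine on a login node, but not in a batch job
wget -P $SCRATCH/my_run/data/ http://example.com/sample1.fastq.gz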

Stampede and Lonestar overview and comparison

Here is a comparison of the configurations at stampede and lonestar. As you can see, stampede is the larger cluster (and newer, just launched last year).

                          lonestar                        stampede
login nodes               2 (12 cores, 24 GB each)        6 (16 cores, 32 GB each)
standard compute nodes    1,088 (12 cores, 24 GB each)    6,400 (16 cores, 32 GB each)
large memory nodes        5 (24 cores, 1 TB each)         16 (32 cores, 1 TB each)
batch system              SGE                             SLURM
maximum job run time      24 hours                        48 hours

User guides for both systems can be found on the TACC web site.

Unfortunately, the TACC user guides are aimed at a different user community – the weather modelers and aerodynamic flow simulators who need very fast matrix manipulation and other high performance computing (HPC) features. The usage patterns for bioinformatics – generally running 3rd party tools on many different datasets – are rather a special case for HPC.

Job Execution

Job execution is controlled by the SLURM batch system on stampede.

To run a job you prepare 2 files:

  1. a file of commands to run, one command per line (<job_name>.cmds)
  2. a job control file that describes how to run the job (<job_name>.slurm) – see the sketch below
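
To give a feel for what a job control file contains, here is a generic sketch of the SLURM directives such a file might start with. This is just an illustration – the actual simple.slurm file we create later with launcher_creator.py will contain additional launcher-specific settings, and the queue and time values below are placeholders:

#!/bin/bash
#SBATCH -J simple          # job name
#SBATCH -n 6               # total number of tasks
#SBATCH -N 1               # number of nodes
#SBATCH -p development     # queue (partition) to submit to
#SBATCH -t 00:05:00        # maximum run time (hh:mm:ss)
#SBATCH -o simple.o%j      # standard output file (%j is replaced by the job ID)
# ...commands (or the launcher invocation) that run the tasks go here...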

The process of running the job then involves these steps:

  1. You submit the job control file to the batch system. The job is then said to be queued to run.
  2. The batch system prioritizes the job based on the number of compute nodes needed and the job run time requested
  3. When compute nodes become available, the job tasks (command lines in the <job_name>.cmds file) are assigned to one or more compute nodes and begin to run in parallel.
  4. The job completes when either:
    1. you cancel the job manually
    2. all tasks in the job complete (successfully or not!)
    3. the requested job run time has expired

SLURM vs SGE

The batch system on lonestar has slightly different commands and filenames, but the functionality is equivalent.

                          lonestar                   stampede
batch system              SGE                        SLURM
batch control file name   <job_name>.sge             <job_name>.slurm
job submission command    qsub <job_name>.sge        sbatch <job_name>.slurm
job monitoring command    qstat                      showq -u
job stop command          qdel <job name or id>      scancel -n <job name>

Simple example

Let's go through a simple example. Execute the following commands to copy a pre-made simple.cmds commands file:

mkdir -p $SCRATCH/slurm/simple
cd $SCRATCH/slurm/simple
cp $CLASSDIR/common/simple.cmds .

What are the tasks we want to do? Each task corresponds to one line in the simple.cmds file, so let's take a look at it using the cat (concatenate) command that simply reads a file and writes each line of content to standard output (the terminal):

cat simple.cmds

The tasks we want to perform look like this:

echo "Command 1 `date`" > cmd1.log 2>&1
echo "Command 2 `date`" > cmd2.log 2>&1
echo "Command 3 `date`" > cmd3.log 2>&1
echo "Command 4 `date`" > cmd4.log 2>&1
echo "Command 5 `date`" > cmd5.log 2>&1
echo "Command 6 `date`" > cmd5.log 2>&1

There are 6 tasks. Each is a simple echo command that just outputs a string containing the task number and date to a different file.

Use the handy launcher_creator.py program to create the job submission script.

launcher_creator.py -n simple -j simple.cmds -t 00:05:00

You should see output something like the following, and you should see a simple.slurm batch submission file in the current directory.

Project simple.
Using job file simple.cmds.
Using development queue.
For 00:05:00 time.
Using G-815525 allocation.
Not sending start/stop email.
Using 1 nodes.
Writing to simple.slurm.
Launcher successfully created. Type "sbatch simple.slurm" to queue your job.

Submit your batch job like this, then check the batch queue to see the job's status:

sbatch simple.slurm
showq -u

If you're quick, you'll see a queue status something like this:

SUMMARY OF JOBS FOR USER: <abattenh>
ACTIVE JOBS--------------------
JOBID     JOBNAME    USERNAME      STATE   CORE   REMAINING  STARTTIME
================================================================================
3349328   simple     abattenh      Running 16       0:04:54  Wed May 14 15:58:11
WAITING JOBS------------------------
JOBID     JOBNAME    USERNAME      STATE   CORE     WCLIMIT  QUEUETIME
================================================================================
Total Jobs: 1     Active Jobs: 1     Idle Jobs: 0     Blocked Jobs: 0

Notice in my queue status that where the STATE is Running, 16 cores are assigned (the CORE column). Why is this, since there were only 6 tasks?

The answer is that jobs cannot share compute nodes – every job, no matter how few tasks it requests, is assigned at least one whole node. And stampede nodes have 16 cores each, so the number of cores used will always be a multiple of 16.
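
For example, a hypothetical job with 17 tasks, run at one task per core, would not fit on a single 16-core node; it would be assigned 2 whole nodes and show 32 cores, even though only 17 of them do any work.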

If you don't see your simple job in either the ACTIVE or WAITING sections of your queue, it probably already finished – it should only run for a second or two!

ls should show you something like this:

cmd1.log  cmd3.log  cmd5.log  simple.cmds      simple.o3349441
cmd2.log  cmd4.log  cmd6.log  simple.e3349441  simple.slurm

The newly created files are the .log files, as well as simple.e3349441 and simple.o3349441.

filename wildcarding

Here's a cute trick for viewing the contents of all your output files at once, using the cat command and filename wildcarding.

cat cmd*.log

The cat command actually takes a list of one or more files (if you're giving it files rather than standard input – more on this shortly) and outputs the concatenation of them to standard output. The asterisk in cmd*.log is a multi-character wildcard that matches any filename starting with cmd and ending with .log. So it would also match cmd_hello_world.log. You can also specify single-character matches in either of these ways, this time using the ls command so you can better see what is matching:

ls cmd[123456].log
ls cmd[1-6].log

This technique is sometimes called filename globbing, and the pattern a glob. Don't ask me why – it's a Unix thing. Globbing – translating a glob pattern into a list of files – is one of the handy things the bash shell does for you.

Here's another example, listing all the files from this run whose names start with simple:

ls simple*
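
If you're ever unsure what a glob will match, you can ask the shell to show you with echo, which prints the expanded list of names instead of reading any files:

echo cmd*.log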

Here's what my cat output looks like. Notice the times are all the same, because all the tasks ran in parallel. That's the power of cluster computing!

Command 1 Wed May 14 16:15:40 CDT 2014
Command 2 Wed May 14 16:15:40 CDT 2014
Command 3 Wed May 14 16:15:40 CDT 2014
Command 4 Wed May 14 16:15:40 CDT 2014
Command 5 Wed May 14 16:15:40 CDT 2014
Command 6 Wed May 14 16:15:40 CDT 2014

echo

Let's take a closer look at a typical task in the simple.cmds file.

echo "Command 3 `date`" > cmd3.log 2>&1

The echo command is like a print statement in the bash shell. Echo takes its arguments and writes them to one line of standard output. While not absolutely required, it is a good idea to put the output string in double quotes.

So what does this funny-looking `date` do? Well, date is just another Linux command (try just typing it in). Here we don't want the shell to put the literal string "date" in the output – we want it to execute the date command and put its result into the output. The backquotes around the date command tell the shell we want that command executed and its output substituted into the string.

# These are equivalent:
date
echo `date`

# But different from this:
echo date
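
As a side note, bash also supports an equivalent $( ) form of command substitution, which is easier to read when nested:

# same result as echo `date`
echo $(date)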

output redirection

There's still more to learn from one of our simple tasks.

echo "Command 3 `date`" > cmd3.log 2>&1

 

Job parameters

queues

wayness

launcher_creator.py details

wayness example

Software at TACC

The module system

Adding to your $PATH

$PATH caveat

 

 
