
Compute cluster overview

When you SSH into stampede, your session is assigned to one of a small set of login nodes (also called head nodes). These are not the compute nodes that will run your jobs.

Think of a node as a computer, like your laptop, but probably with more cores and memory. Now multiply that computer by a thousand or more, and you have a cluster.

The login nodes are a shared resource (type the users command to see everyone currently logged in) and are not meant for running interactive programs. For that, you submit a description of the work you want done to a batch system, which farms the work out to one or more compute nodes.

On the other hand, the login nodes are intended for copying files to and from TACC, so they have a lot of network bandwidth while compute nodes have limited network bandwidth.

So follow these guidelines:

  • Never run substantial computation on the login nodes.
    • They are closely monitored, and you will get warnings from the TACC admin folks!
    • Code is usually developed and tested somewhere other than TACC, and only moved over when pretty solid.
  • Do not perform significant network access from your batch jobs.
    • Instead, stage your data onto $SCRATCH from a login node before submitting your job, as shown in the example below.
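
For example, you might stage input data like this from a login node before submitting the job (the paths here are hypothetical):

Stage data onto $SCRATCH
mkdir -p $SCRATCH/my_project/data
cp $WORK/archive/my_reads.fastq.gz $SCRATCH/my_project/data/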

Stampede and Lonestar overview and comparison

Here is a comparison of the configurations at stampede and lonestar. As you can see, stampede is the larger cluster (and newer, launched in 2013).

                         lonestar           stampede
login nodes              2                  6
                         12 cores each      16 cores each
                         24 GB memory       32 GB memory
standard compute nodes   1,088              6,400
                         12 cores each      16 cores each
                         24 GB memory       32 GB memory
large memory nodes       5                  16
                         24 cores each      32 cores each
                         1 TB memory        1 TB memory
batch system             SGE                SLURM
maximum job run time     24 hours           48 hours

User guides for stampede and lonestar can be found on the TACC web site.

Unfortunately, the TACC user guides are aimed at a different user community – the weather modelers and aerodynamic flow simulators who need very fast matrix manipulation and other high performance computing (HPC) features. The usage patterns for bioinformatics – generally running 3rd party tools on many different datasets – are rather a special case for HPC.

Job Execution

Job execution is controlled by the SLURM batch system on stampede.

To run a job you prepare 2 files:

  1. a file of commands to run, one command per line (<job_name>.cmds)
  2. a job control file that describes how to run the job (<job_name>.slurm)

The process of running the job involves these steps:

  1. Create a commands file containing one command per line.
  2. Prepare a job control file for the commands file that describes how the job should be run.
  3. Submit the job control file to the batch system. The job is then said to be queued to run.
  4. The batch system prioritizes the job based on the number of compute nodes needed and the job run time requested.
  5. When compute nodes become available, the job tasks (command lines in the <job_name>.cmds file) are assigned to one or more compute nodes and begin to run in parallel.
  6. The job completes when any of the following occurs:
    1. you cancel the job manually
    2. all tasks in the job complete (successfully or not!)
    3. the requested job run time has expired
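
For orientation, here is a minimal sketch of what a <job_name>.slurm job control file might contain. The #SBATCH directive names are standard SLURM, but the values shown are purely illustrative – in this course the file is generated for you by the launcher_creator.py program described below.

Sketch of a SLURM job control file
#!/bin/bash
#SBATCH -J simple          # job name shown in the queue
#SBATCH -o simple.o%j      # standard output file (%j expands to the job id)
#SBATCH -e simple.e%j      # standard error file
#SBATCH -p development     # queue (partition) to submit to
#SBATCH -N 1               # number of nodes requested
#SBATCH -n 16              # total number of tasks
#SBATCH -t 00:05:00        # maximum requested run time (hh:mm:ss)

module load launcher
# ... launcher setup that runs each line of simple.cmds as a separate task ...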

SLURM vs SGE

The batch system on lonestar has slightly different commands and filenames, but the functionality is equivalent.

                           lonestar                 stampede
batch system               SGE                      SLURM
batch control file name    <job_name>.sge           <job_name>.slurm
job submission command     qsub <job_name>.sge      sbatch <job_name>.slurm
job monitoring command     qstat                    showq -u
job stop command           qdel <job name or id>    scancel -n <job name>

Simple example

Let's go through a simple example. Execute the following commands to copy a pre-made simple.cmds commands file:

Copy simple commands
mkdir -p $SCRATCH/slurm/simple
cd $SCRATCH/slurm/simple
cp $CLASSDIR/common/simple.cmds .

What are the tasks we want to do? Each task corresponds to one line in the simple.cmds file, so let's take a look at it using the cat (concatenate) command that simply reads a file and writes each line of content to standard output (here, your Terminal):

View simple commands
cat simple.cmds

The tasks we want to perform look like this:

echo "Command 1 `date`" > cmd1.log 2>&1
echo "Command 2 `date`" > cmd2.log 2>&1
echo "Command 3 `date`" > cmd3.log 2>&1
echo "Command 4 `date`" > cmd4.log 2>&1
echo "Command 5 `date`" > cmd5.log 2>&1
echo "Command 6 `date`" > cmd5.log 2>&1

There are 6 tasks. Each is a simple echo command that just outputs a string containing the task number and date to a different log file.

Use the handy launcher_creator.py program to create the job submission script.

Create batch submission script for simple commands
launcher_creator.py -n simple -j simple.cmds -t 00:05:00

You should see output something like the following, and you should see a simple.slurm batch submission file in the current directory.

Project simple.
Using job file simple.cmds.
Using development queue.
For 00:05:00 time.
Using G-815525 allocation.
Not sending start/stop email.
Using 1 nodes.
Writing to simple.slurm.
Launcher successfully created. Type "sbatch simple.slurm" to queue your job.

Submit your batch job like this, then check the batch queue to see the job's status:

Submit simple job to batch queue
sbatch simple.slurm
showq -u

If you're quick, you'll see a queue status something like this:

SUMMARY OF JOBS FOR USER: <abattenh>
ACTIVE JOBS--------------------
JOBID     JOBNAME    USERNAME      STATE   CORE   REMAINING  STARTTIME
================================================================================
3349328   simple     abattenh      Running 16       0:04:54  Wed May 14 15:58:11
WAITING JOBS------------------------
JOBID     JOBNAME    USERNAME      STATE   CORE     WCLIMIT  QUEUETIME
================================================================================
Total Jobs: 1     Active Jobs: 1     Idle Jobs: 0     Blocked Jobs: 0

Notice in my queue status that where the STATE is Running, 16 COREs are assigned. Why is this, when there were only 6 tasks?

The answer is that jobs cannot share nodes – every job, no matter how few tasks it contains, is assigned at least one entire node, and stampede nodes have 16 cores each. So the number of cores used will always be a multiple of 16; our 6 tasks still occupy one full 16-core node.

If you don't see your simple job in either the ACTIVE or WAITING sections of your queue, it probably already finished – it should only run for a second or two!

Exercise: What files were created by your job?

ls should show you something like this:

cmd1.log  cmd3.log  cmd5.log  simple.cmds      simple.o3349441
cmd2.log  cmd4.log  cmd6.log  simple.e3349441  simple.slurm

The newly created files are the .log files, as well as simple.e3349441 and simple.o3349441.

filename wildcarding

Here's a cute trick for viewing the contents of all your output files at once, using the cat command and filename wildcarding.

Multi-character filename wildcarding
cat cmd*.log

The cat command actually takes a list of one or more files (if you're giving it files rather than standard input – more on this shortly) and outputs the concatenation of them to standard output. The asterisk ( * ) in cmd*.log is a multi-character wildcard that matches any filename starting with cmd then ending with .log. So it would match cmd_hello_world.log. You can also specify single-character matches inside brackets ( [ ] ) in either of these ways, this time using the ls command so you can better see what is matching:

Single character filename wildcarding
ls cmd[123456].log
ls cmd[1-6].log
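# the question mark ( ? ) matches exactly one character of any kind:
ls cmd?.log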

This technique is sometimes called filename globbing, and the pattern a glob. Don't ask me why – it's a Unix thing. Globbing – translating a glob pattern into a list of files – is one of the handy things the bash shell does for you.

Exercise: How would you list all files starting with "simple"?

ls simple*

Here's what my cat output looks like. Notice the times are all the same, because all the tasks ran in parallel. That's the power of cluster computing!

Command 1 Wed May 14 16:15:40 CDT 2014
Command 2 Wed May 14 16:15:40 CDT 2014
Command 3 Wed May 14 16:15:40 CDT 2014
Command 4 Wed May 14 16:15:40 CDT 2014
Command 5 Wed May 14 16:15:40 CDT 2014
Command 6 Wed May 14 16:15:40 CDT 2014

echo

Let's take a closer look at a typical task in the simple.cmds file.

An echo command
echo "Command 3 `date`" > cmd3.log 2>&1

The echo command is like a print statement in the bash shell. echo takes its arguments and writes them to one line of standard output. While not absolutely required, it is a good idea to put the output string in double quotes.

backtick evaluation

So what is this funny-looking `date` bit doing? Well, date is just another Linux command (try typing it by itself). Here we don't want the shell to put the literal string "date" into the output; we want it to run the date command and include its result instead. The backquotes ( ` `, also called backticks) around the date command tell the shell to execute that command and substitute its output into the string.

Backtick evaluation
# These are equivalent:
date
echo `date`
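echo $(date)   # $( ) is the modern, equivalent syntax for backticks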

# But different from this:
echo date

output redirection

There's still more to learn from one of our simple tasks, something called output redirection:

echo "Command 3 `date`" > cmd3.log 2>&1

Normally echo writes its string to standard output. If you invoke echo in an interactive shell like Terminal, standard output is displayed to the Terminal window.

So what happens when output is generated by tasks in a batch job? Well, you may have noticed two files like simple.o3349441 and simple.e3349441 were created by your job. These contain all standard output and standard error, respectively, generated by your tasks that was not redirected elsewhere (o = output, e = error).

Usually we want to separate the outputs of all our commands. Why is this important? Suppose we run a job with 100 commands, each one a whole pipeline (alignment, for example). 88 finish fine but 12 do not. Just try figuring out which ones had errors, and where the errors occurred, if all the output is in one intermingled file and all the errors are in another intermingled file!

So in the above example the first '>' says to redirect the standard output of the echo command to the cmd3.log file. The '2>&1' part says to redirect standard error to the same place. Technically, it says to redirect standard error (built-in Linux stream 2) to the same place as standard output (built-in Linux stream 1); and since standard output is going to cmd3.log, any standard error will go there also.
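
Here's a small illustration you can try in any shell. The ls command writes the names of files that exist to standard output, and complaints about files that don't exist to standard error:

Redirecting the two streams
# send both streams to the same log file
ls /etc/passwd /no_such_file > both.log 2>&1
# or capture them separately
ls /etc/passwd /no_such_file > out.log 2> err.log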

For more on streams and redirection, see Standard streams in our Linux fundamentals page.

Job parameters

Now that we've executed a really simple job, let's take a look at some important job submission parameters. These correspond to arguments to the launcher_creator.py script.

A bit of background. Historically, TACC was set up to cater to researchers writing their own C or Fortran codes highly optimized to exploit parallelism (the HPC crowd). Much of TACC's documentation is aimed at this audience, which makes it difficult to pick out the important parts for us.

The kinds of programs we biologists generally run are relatively new to TACC. They even have special names for them: "parametric serial jobs" or "parametric sweeps", by which they mean the same program running on different data sets.

In fact there is a special software module required to run our jobs, called the launcher module. You don't need to worry about activating the launcher module; that's done by the <job_name>.slurm script created by launcher_creator.py like this:

module load launcher

The launcher module knows how to interpret various job parameters in the <job_name>.slurm batch SLURM submission script and use them to create your job and assign its tasks to compute nodes. Our launcher_creator.py program is a simple Python script that lets you specify job parameters and writes out a valid <job_name>.slurm submission script.

launcher_creator.py

If you call launcher_creator.py with no arguments, it gives you a brief usage description:

usage: launcher_creator.py [-h] -n NAME -t TIME [-j JOB] [-b BASH_COMMANDS]
                           [-q QUEUE] [-a [ALLOCATION]] [-m MODULES]
                           [-w WAYNESS] [-N NUM_NODES] [-e [EMAIL]]
                           [-l LAUNCHER] [-s] [-H HOLD]
launcher_creator.py: error: argument -n is required

And if you invoke launcher_creator.py with the -h option it gives you more extensive help.

Because it is a long help message, we pipe the output to more, a "pager" that displays one screen of text at a time. Press the spacebar to advance to the next page, and Ctrl-c (or q) to exit from more.

Getting long help information for launcher_creator.py
# Use spacebar to page forward; Ctrl-c to exit
launcher_creator.py -h | more

The launcher_creator.py script does not handle every job control parameter you might ever want to set. For that, make a copy of the default script, found at $TACC_LAUNCHER_DIR/launcher.slurm, and edit it appropriately.
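
For example (the destination file name is arbitrary):

Copy the default launcher script for editing
cp $TACC_LAUNCHER_DIR/launcher.slurm my_job.slurm
# then edit my_job.slurm and adjust its parameters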

The official description of the job control parameters ships with the launcher module itself. To read more about the launcher module:

module load launcher
module help launcher
more $TACC_LAUNCHER_DIR/README.launcher

job name and commands file

Recall how the simple.slurm batch file was created:

Create batch submission script for simple commands
launcher_creator.py -n simple -j simple.cmds -t 00:05:00
  • The name of your commands file is given with the -j <commands_file> argument.
  • The -n <job_name> required argument specifies the job name.
    • This is the name you will see in your queue.
    • By default a corresponding <job_name>.slurm batch file is created for you.
    • It contains the name of the commands file that the batch system will execute.

queues and runtime

TACC resources are partitioned into queues: named sets of compute nodes with different characteristics. The major ones on stampede are listed below (lonestar is similar). Generally you use development while you are writing and testing your code, then normal once you're sure your commands will execute properly.

queue name    maximum runtime   SU charge rate per core   purpose
development   1 hr              1                         development (short queue wait times)
normal        48 hrs            1                         normal priority (queue waits can sometimes be long)
largemem      48 hrs            4                         large memory jobs
  • In launcher_creator.py, the queue is specified by the -q argument.
    • The default queue is development. Specify -q normal for longer jobs.
  • The maximum runtime you are requesting for your job is specified by the -t argument.
    • Format is hh:mm:ss
    • Note that your job will be terminated (without warning!) at the end of its time limit!
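
For example, a longer-running job might be created for the normal queue like this (the job and commands file names are hypothetical):

Requesting the normal queue and a longer runtime
launcher_creator.py -n my_aln -j my_aln.cmds -q normal -t 12:00:00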

allocation and SUs

You may be a member of a number of different projects, and hence have a choice of which allocation to run your job under.

  • You specify that allocation name with the -a argument of launcher_creator.py.
  • If you have set an $ALLOCATION environment variable to an allocation name, it will be used.

The .bashrc login script you've installed for this course specifies the class's allocation as shown below. Note that this allocation will expire after the course, so you should change that setting appropriately at some point.

ALLOCATION setting in .bashrc
# This sets the default project allocation for launcher_creator.py
export ALLOCATION=UT-2015-05-18
  • When you run a batch job, your project allocation gets "charged" for the time your job runs, in the currency of SUs (System Units).
  • For most queues, 1 SU = 1 core hour of compute time.

Job tasks should have similar expected runtimes

Jobs should consist of tasks that will run for approximately the same length of time. This is because the total core hours charged for your job are calculated as the run time of your longest-running task (the one that finishes last) times the number of cores assigned.

For example, if your job has 64 tasks and 63 of them finish in 2 seconds but one runs for 24 hours, you'll be charged for 64 x 24 = 1,536 core hours, even though the total amount of work performed was only about 24 core hours.

wayness (tasks per node)

One of the most confusing things in job submission is the parameter called wayness, which controls how many tasks are run on each compute node. Remember that each stampede compute node has 16 cores and 32 GB of memory, so you can run up to 16 tasks on a node, each with ~2 GB of available memory. But you can run fewer tasks per node, and if you do, each task gets more resources, as shown below:

tasks per node (wayness)   cores available to each task   memory available to each task
16                         1                              2 GB
8                          2                              4 GB
4                          4                              8 GB
2                          8                              16 GB
1                          16                             32 GB

In launcher_creator.py, wayness is specified by the -w argument. The default is 16 (one task per core) for stampede.

A special case is when you have only 1 command in your job. In that case, it doesn't matter what wayness you request. Your job will run on one compute node, and have all 16 cores available.

Your choice of the wayness parameter will depend on the nature of the work you are performing: its computational intensity, its memory requirements and its ability to take advantage of threading (e.g. bwa -t option or tophat -p option).
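
For example (an illustrative pairing, not a rule): if each line of your commands file runs bwa with 4 threads (its -t 4 option), a wayness of 4 lets the 4 tasks on each node together use all 16 cores:

Matching wayness to per-task threading
launcher_creator.py -n bwa_jobs -j bwa_jobs.cmds -q normal -t 04:00:00 -w 4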

wayness example

Let's use launcher_creator.py to explore wayness options. First copy over the wayness.cmds commands file:

Copy wayness commands
mkdir -p $SCRATCH/slurm/wayness
cd $SCRATCH/slurm/wayness
cp $CLASSDIR/common/wayness.cmds .

The wayness.cmds commands file consists of 16 identical lines that look like this:

echo "Task $TACC_LAUNCHER_JID of $TACC_LAUNCHER_NPROCS ran `date` on node `hostname` core $TACC_LAUNCHER_TSK_ID." > cmd.$TACC_LAUNCHER_JID.log 2>&1

The wayness commands take advantage of a number of environment variables the batch system sets automatically for each task:

  • $TACC_LAUNCHER_JID – the task number of the running task (from 1 to total number of tasks)
  • $TACC_LAUNCHER_NPROCS – total number of tasks specified by the job
  • `hostname` – not an environment variable, but a command that returns the name of the task's compute node
  • $TACC_LAUNCHER_TSK_ID – number of the core running the task (0 to number of tasks - 1)

For more information

more $TACC_LAUNCHER_DIR/README.launcher

Create the batch submission script specifying a wayness of 8 (8 tasks per node), then submit the job and monitor the queue:

Create batch submission script for wayness example
launcher_creator.py -n wayness -j wayness.cmds -t 00:05:00 -w 8
sbatch wayness.slurm
showq -u

Exercise: With 16 tasks requested and wayness of 8, how many nodes will this job require? How much memory will be allocated to each task?

2 nodes (16 tasks x 1 node/8 tasks)
4 GB (32 GB/node * 1 node/8 tasks)

Exercise: If you specified a wayness of 2, how many nodes would this job require? How much memory can each task use?

8 nodes (16 tasks x 1 node/2 tasks)
16 GB (32 GB/node * 1 node/2 tasks)

Look at the output file contents once the job is done.

cat cmd.*.log

Software at TACC

Programs and your $PATH

When you type in the name of an arbitrary program (ls for example), how does the shell know where to find that program? The answer is your $PATH. $PATH is a pre-defined environment variable whose value is a list of directories. The shell looks for program names in that list, in the order the directories appear.

To determine where the shell will find a particular program, use the which command:

Using which to search $PATH
which rsync
which cat
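
To see each directory on your $PATH on its own line:

View $PATH one directory per line
echo $PATH | tr ':' '\n'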

The module system

The module system is an incredibly powerful way to make literally thousands of software packages available, some of which are incompatible with each other, without causing complete havoc. The TACC staff builds each desired package from source code in well-known locations that are NOT on your $PATH. Then, when a module is loaded, its binaries are added to your $PATH.

For example, the following module load command makes the bwa aligner available to you:

How module load affects $PATH
# first type "bwa" to show that it is not present in your environment:
bwa
# it's not on your $PATH either:
which bwa

# now add bwa to your environment and try again:
module load bwa
bwa
# and see how it's now on your $PATH:
which bwa
# you can see the new directory at the front of $PATH
echo $PATH

# to remove it, use "unload"
module unload bwa
bwa
# gone from $PATH again...
which bwa

module spider

These days the TACC module system includes hundreds of useful bioinformatics programs. To see if your favorite software package has been installed at TACC, use module spider:

module spider samtools
module spider tophat
module spider bedtools
module spider GATK

installing custom software

Even with all the tools available at TACC, inevitably you'll need something they don't have. In this case you can build the tool yourself and install it in a local TACC directory. While building 3rd party tools is beyond the scope of this course, it's really not that hard. The trick is keeping it all organized.

For one thing, remember that your $HOME directory quota is fairly small (5 GB on stampede), and that can fill up quickly if you install many programs. We recommend creating an installation area in your $WORK directory and installing programs there. You can then make symbolic links to the binaries you need in your $HOME/local/bin directory (which was added to your $PATH in your .profile_user).
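
Here is a minimal sketch of that pattern, assuming a hypothetical tool called mytool:

Install under $WORK, link into $HOME/local/bin
mkdir -p $WORK/install/bin
# ... build or copy mytool into $WORK/install/bin ...
ln -s $WORK/install/bin/mytool $HOME/local/bin/mytool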

See how we used a similar trick to make the launcher_creator.py program available to you. Using the ls -l option shows you where symbolic links point:

Real location of launcher_creator.py
ls -l $HOME/local/bin

launcher_creator.py -> /corral-repl/utexas/BioITeam/bin/launcher_creator.py

$PATH caveat

Remember that the order of locations in the $PATH environment variable is the order in which they will be searched. In particular, the module load command adds to the front of your path, which can mask similarly-named programs elsewhere – for example, in your $HOME/local/bin directory.
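
To see every match for a program name in search order (the first one listed is the one that will run), use the bash built-in type with its -a option:

List all $PATH matches in search order
type -a samtools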

Some best practices

Redirect task output and error streams

We've already touched on the need to redirect standard output and standard error for each task. Just remember that funny redirection syntax:

my_program input_file1 output_file1 > file1.log 2>&1

Combine serial workflows into scripts

Another really good way to work is to "bundle" a complex set of steps into a shell script that sets up its own environment, loads its own modules, then executes a series of program steps. You can then just call that script, probably with data-specific arguments, in your commands file. This multi-program script is sometimes termed a pipeline, although complex pipelines may involve several such scripts.

For example, you might have a script called align_bwa.sh (a bash script) or align_bowtie2.py (written in Python) that performs multiple steps needed during the alignment process (a minimal sketch appears after this list):

  • quality checking the input fastq file
  • trimming or removing adapters from the sequences
  • performing the alignment step(s) to create a bam file
  • sorting the bam file
  • indexing the bam file
  • gathering alignment statistics from the bam file
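
Here is a minimal sketch of what such a script might look like. It is purely illustrative – the script name, arguments, and module names are assumptions, and a real pipeline would include the QC and adapter-trimming steps as well:

Sketch of a bundled alignment script
#!/bin/bash
# align_bwa.sh <reads.fastq.gz> <reference.fa> <output_prefix>   (hypothetical)
set -e                       # stop at the first failed step
fastq=$1; ref=$2; out=$3

module load bwa              # the script loads its own modules
module load samtools

# align, then sort and index the resulting bam
# (assumes the reference has already been bwa-indexed)
bwa mem "$ref" "$fastq" | samtools sort -o "$out.sorted.bam" -
samtools index "$out.sorted.bam"
# gather alignment statistics
samtools flagstat "$out.sorted.bam" > "$out.flagstat.txt"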

There are some example scripts in the /corral-repl/utexas/BioITeam/bin directory. Take a look at some of them after you feel more comfortable with initial NGS processing steps.

Use one directory per job

You may have noticed that all the files involved in our job were in one directory – the batch submission file, the commands file, the log files our tasks wrote, and the launcher job output and error files. Of course you'll usually have input and output files as well. Because a single job can create a lot of files, it is a good idea to use a different directory for each job or set of closely related jobs, with a name that reflects the work being performed. This will help you stay organized.

Here's an example directory structure:

$SCRATCH/my_project
             /original_fq   # contains or links to original fastq files
             /process_fq    # run fastq QC and trimming jobs here
             /alignment     # run alignment jobs here
             /gene_counts   # analyze gene overlap here
             /test1         # play around with stuff here
             /test2         # play around with other stuff here

Command files in each directory can refer to files in other directories using relative path syntax, e.g.:

Relative path syntax
cd $SCRATCH/my_project/process_fq
ls ../original_fq/my_raw_sequences.fastq.gz

Or create a symbolic link to the directory and refer to it as a sub-directory:

Symbolic link to relative path
cd $SCRATCH/my_project/process_fq
ln -s ../original_fq fq
ls ./fq/my_raw_sequences.fastq.gz

relative path syntax

As we have seen, there are several special "directory names" the bash shell understands:

  • "dot directory" ( . ) refers to "here" or "the current directory"
  • "dot dot directory" ( .. ) refers to "one directory up"
  • "tilde directory" ( ~ ) refers to your home directory

Try these relative path examples:

Relative path exercise
cd $SCRATCH/slurm/simple
ls ../wayness
ls ../..
ls -l ~/.profile_user
