Overview

The main point of using Stampede2 is that it is a massive computer cluster. If we run a command while logged into Stampede2, we are running it on one of the four low-memory, low-power "head" (or "login") nodes at TACC. When we do serious computations that will take more than a few minutes or use a lot of RAM, we need to submit them to the other 4,200 KNL nodes (computers), which together provide 285,600 cores (processors).

In this section we are going to learn how to submit a job to the Stampede2 cluster.

Diagram of how a job gets run on Stampede2

Explanation

Start at the bottom - that's what you want: one of Stampede2's 4,200 compute nodes running your specific program (bowtie mapping, in this case).

To get there, you must go through a "Queue Manager" program running on a different computer - the login (or "head") node. This program keeps track of what's running on those 4,200 nodes and what's in line to run next. It's very good at doing this.

You tell the Queue Manager what you want done via bowtie.slurm - your job submission script. It specifies how many nodes you need, which allocation to charge, the maximum run time of the job, etc. The Queue Manager doesn't really care what you're running, just how you want it run. It does need to pass on what you're running to the compute node - you provide that with the export LAUNCHER_JOB_FILE=commands line.

The Queue Manager sends the commands in the file commands off to the compute nodes, so commands is really the first thing to create.

The launcher_creator.py script just helps you by creating bowtie.slurm easily - it saves you some time editing a file (and potentially messing it up).

Launcher

In the examples we tend to say that a job can be "interactive" or should be "submitted to the TACC queue". The first means that you can type it and run it directly. It should be short enough that it does not tie up the TACC head node. The second means that you should go through the launcher submission process described here.

If you do try to run a long job in interactive mode, it will be killed after 10-15 minutes, and you may see a message like this:

Message from root@login1.stampede2.tacc.utexas.edu on pts/127 at 09:16 ...
Please do not run scripts or programs that require more than a few minutes of
CPU time on the login nodes.  Your current running process below has been
killed and must be submitted to the queues, for usage policy see
http://www.tacc.utexas.edu/user-services/usage-policies/
If you have any questions regarding this, please submit a consulting ticket.

Commands file

A launcher file tells Stampede2 which executables to run, with your desired options, and for how long. It requests a certain amount of resources (cores and time) so that Stampede2's scheduling program can figure out where to fit your job in.

All we need to do is create a text file. Each line in this text file, which we will call simply commands, is a command exactly as you would type it into the terminal yourself to have it run.

commands file
nano commands
    date > date.out
    ls > ls.out
  • The minimum number of cores that you can request on Stampede2 is 68 (one full node), so you might as well add up to 66 more lines to this file, each a different shell command that produces some output; see the sketch below. Each will be run on a different core in parallel.
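For instance, the file could be extended like this (the added commands are purely illustrative - any independent shell commands that write their own output files will do):

    date > date.out
    ls > ls.out
    hostname > hostname.out
    whoami > whoami.out
    uptime > uptime.out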

Launcher script

Two ways to skin this cat

1. TACC has supplied a sample launcher script, which we can modify to queue and execute our job. Here's how:

 module load launcher
 cp $TACC_LAUNCHER_DIR/extras/batch-scripts/launcher.slurm  .
 nano launcher.slurm

Typically, we would change:

The -J line specifies the name of the job. 

The -o line specifies the name of the output file that Stampede2 creates. We would change that to match the name of this job.

The -t line specifies the time limit for the job. The more time we request, the longer our job may wait in the queue before running. When the time is up, Stampede2 will terminate our job whether or not it has finished, so it's best to request slightly more time than the job will need.

We can also, optionally, add a few lines to have Stampede2 send an email to our address when the job starts and finishes.

To do that, we would add two new lines below the other #SBATCH lines, like so:

 #SBATCH --mail-user=my_email@something.com
 #SBATCH --mail-type=BEGIN,END

Also, if we are part of multiple allocations, we'll need to specify which allocation to use (the name is case-sensitive):

 #SBATCH -A UT-2015-05-18

Lastly, we need to specify the job file.

 export LAUNCHER_JOB_FILE=commands
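Putting those edits together, the relevant parts of a launcher.slurm might look like the sketch below. The job name, resource values, allocation, and email address are placeholders, and TACC's template contains additional boilerplate, including the line that actually invokes the launcher (its exact form may differ between launcher versions):

 #!/bin/bash
 #SBATCH -J testjob                # -J: job name
 #SBATCH -o testjob.o%j           # -o: output file; %j expands to the job-ID
 #SBATCH -p normal                # -p: queue to submit to
 #SBATCH -N 1                     # -N: number of nodes
 #SBATCH -n 68                    # -n: total number of tasks
 #SBATCH -t 01:00:00              # -t: max run time
 #SBATCH -A UT-2015-05-18         # -A: allocation to charge
 #SBATCH --mail-user=my_email@something.com
 #SBATCH --mail-type=BEGIN,END

 module load launcher
 export LAUNCHER_JOB_FILE=commands

 # Hand control to the launcher; TACC's template ends with a line like this
 ${LAUNCHER_DIR}/paramrun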


2. We can use launcher_creator.py to generate the launcher file without even opening an editor.

We have created a Python script called launcher_creator.py that makes creating a slurm file a breeze. You will probably want to use this for the rest of the course.

Now run the script with the -h option to show the help message:

 module load python
 launcher_creator.py -h

-n  name
    The name of the job.

-a  allocation
    The allocation you want to charge the run to.

-q  queue
    The queue to submit to, like 'normal' or 'development', etc.

-w  wayness
    Optional. The number of jobs in a job list you want to give to each node. (Default is the number of cores per node.)

-N  number of nodes
    Optional. Specifies a certain number of nodes to use. You probably don't need this option, as the launcher calculates how many nodes you need based on the job list (or Bash command string) you submit. It sometimes comes in handy when writing pipelines.

-t  time
    Time allotment for the job; format must be hh:mm:ss.

-e  email
    Optional. Your email address if you want to receive an email from Stampede2 when your job starts and ends.

-l  launcher
    Optional. Filename of the launcher. (Default is <name>.slurm)

-m  modules
    Optional. String of module management commands. module load launcher is always included in the launcher, so there's no need to add it.

-b  Bash commands
    Optional. String of Bash commands to execute.

-j  Command list
    Optional. Filename of the list of commands to be distributed to the nodes.

-s  stdout
    Optional. Setting this flag outputs the name of the launcher file to stdout.

We should mention that launcher_creator.py does some under-the-hood work for you: it automatically calculates how many cores to request on Stampede2, assuming you want one core per process. For example, a commands file with 100 lines at the default of 68 tasks per node needs 2 nodes. That saves you from ever having to think about the calculation yourself.

launcher_creator command example
 launcher_creator.py -n testjob -t 01:00:00 -j commands -q normal -a UT-2015-05-18 -l launcher.slurm 

Wayness (commands/tasks per node)

Wayness sets how many commands/tasks are run on each compute node. By default, wayness is 68, equal to the number of physical cores per node on Stampede2. Each task then gets 1/68th of the node's 96 GB of memory, about 1.4 GB per task. Often that is not enough memory per task, or you may not even have 68 tasks in your commands file. Setting wayness to a number smaller than the default allocates more memory (and more cores) per task.

tasks per node (wayness)   cores available to each task   memory available to each task
 1                         68                             96 GB
 2                         34                             48 GB
 4                         17                             24 GB
17                          4                             5.6 GB
34                          2                             2.8 GB
68                          1                             1.4 GB
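For example, to give each task 17 cores and roughly 24 GB of memory, you could pass -w 4 to launcher_creator.py (the other values repeat the earlier example and are illustrative):

 launcher_creator.py -n testjob -t 01:00:00 -j commands -q normal -a UT-2015-05-18 -w 4 -l launcher.slurm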

Stampede2 Queue

Some of Stampede2's queues are listed below. For most typical use cases, you will use the normal queue.

Queue Name    Node Type            Max Nodes per Job (assoc'd cores)*   Max Duration   Max Jobs in Queue*   Charge Rate (per node-hour)
development   KNL cache-quadrant   16 nodes (1,088 cores)*              2 hrs          1*                   0.8 Service Unit (SU)
normal        KNL cache-quadrant   256 nodes (17,408 cores)*            48 hrs         50*                  0.8 SU
large**       KNL cache-quadrant   2,048 nodes (139,264 cores)**        48 hrs         5**                  0.8 SU
long          KNL cache-quadrant   32 nodes (2,176 cores)*              120 hrs        2*                   0.8 SU

The next step is to submit the job to the queue using the launcher file:

 sbatch launcher.slurm

Stampede2 will make sure that everything specified in the launcher file is correct and, if it is, the job will be queued.

To check the status of the job, the command is:

 showq -u <username>

This will tell you its job priority and what state it is in.

If the job is in the list of "waiting jobs", this means the job has been queued and is waiting to start.

If the job is in the list of "active jobs", this means the job is running.

Field   Description
JOBID   job id assigned to the job
USER    user that owns the job
STATE   current job status, including, but not limited to:
          CD (completed)
          CA (cancelled)
          F  (failed)
          PD (pending)
          R  (running)

In case we notice something wrong with the job, we can cancel it like so:

 scancel <job-ID>

To obtain the job-ID, look at the showq output.


You can create a job that depends on another job, so that it only starts after the first job has completed, using this command:

 sbatch --dependency=afterok:<job-ID> launcher.slurm
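In a pipeline script, you can capture the first job's ID from sbatch's "Submitted batch job <job-ID>" message and feed it to the second submission. A minimal sketch, where map.slurm is a hypothetical first job:

 # Submit the first job and extract the job-ID (4th word of sbatch's message)
 JOBID=$(sbatch map.slurm | awk '{print $4}')
 # Submit the second job; it starts only if the first completes successfully
 sbatch --dependency=afterok:$JOBID launcher.slurm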


If you are part of a reservation (like for this class), your jobs will have higher priority in the queue. You will need to specify the reservation in this manner:

 sbatch --reservation=<reservation_name> launcher.slurm


TACC Output Files

While your job is running, TACC creates three different files with names based on the -o field in the launcher. These files are named like so:

 (job_name).e(job-ID)
 (job_name).pe(job-ID)
 (job_name).o(job-ID)

These files contain the output of your job that would have been sent to standard output or standard error, along with messages from TACC about your job. They are especially useful if your job fails.
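For example, for a job named testjob that ran as job-ID 123456 (both hypothetical), you could inspect:

 cat testjob.o123456     # standard output, plus TACC's messages about the job
 cat testjob.e123456     # standard error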

An exception to submitting jobs: IDEV 

Idev sessions are interactive sessions that you can start from a login node. In this case, you are asking the queue to give you a certain amount of time on a compute node. Once the request goes through, you are logged into a compute node, on which you can run commands (like bwa, bowtie, etc.) directly. You do not need to submit a job, since you are already logged into a compute node.

Idev sessions are useful for short, quick analyses or for development and testing - processes that benefit from being run interactively rather than packaged into a big job. Idev sessions can be requested using the following command:

 idev -m <timeinminutes> -q <queuename> -A <yourallocation> -r <reservationifany>
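For example, to request 120 minutes on a development node (the allocation and reservation names are illustrative; drop -r if you have no reservation):

 idev -m 120 -q development -A UT-2015-05-18 -r my_class_reservation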

Now let's go try all new skills out with a simple exercise...

