
Available PODs

The table below describes the available BRCF PODs, their compute and storage servers, and the currently available Unix groups.

Anyone with access to a POD may use any of the available compute servers, regardless of the server names. For example, both Georgiou and WCAAR users can access wcarcomp01 and wcarcomp02, and both Lambowitz and CCBB users can access lambcomp01, ccbbcomp01 and ccbbcomp02.

POD name | Description | BRCF delegates | Compute servers | Storage server | Unix Groups
CBRS POD | Shared POD for CBRS core facilities | Anna Battenhouse
    • Dell PowerEdge R640
    • dual 26-core/52-thread CPUs
    • 768 GB RAM
    • 960 GB SATA SSD for ultra-high-speed local I/O (mounted as /ssd1)

  • 24 16-TB disks
  • 384 TB raw, 220 TB usable
BCG, CBRS_BIC, CBRS_CryoEM, CBRS_microscopy, CBRS_org, CBRS_proteomics
Chen/Wallingford POD | Shared POD for members of the Jeffrey Chen and John Wallingford labs
  • Qingxin Song (Chen lab)
  • Jaime Hibbard (Wallingford lab)
    • Dell PowerEdge R410
    • dual 4-core/8-thread CPUs
    • 64 GB RAM

  • 24 8-TB disks
  • 192 TB raw, 106 TB usable

Chen, Wallingford
Dickinson/Cambronne POD | Shared POD for members of the Dan Dickinson and Lulu Cambronne labs
  • Dan Dickinson
  • Lulu Cambronne
    • Dell PowerEdge R410
    • dual 4-core/8-thread CPUs
    • 64 GB RAM

  • 24 8-TB disks
  • 192 TB raw, 106 TB usable

Dickinson, Cambronne
Educational (EDU) POD | Dedicated instructional POD

Course instructors.

See The Educational POD

    • virtual host for pool of 3 physical servers listed below
    • Dell PowerEdge R640
    • dual 28-core/52-thread CPUs
    • 1 TB RAM

  • 24 4-TB disks
  • 96 TB raw, 53 TB usable

Per course. See The Educational POD
Georgiou/WCAAR POD | Shared POD for members of the Georgiou lab and the Waggoner Center for Alcoholism & Addiction Research (WCAAR)
  • Russ Durrett (Georgiou lab)
  • Dayne Mayfield (WCAAR)
    • Dell PowerEdge R430
    • dual 16-core/32-thread CPUs
    • 256 GB RAM
    • Dell PowerEdge R430
    • dual 18-core/36-thread CPUs
    • 384 GB RAM
    • Dell PowerEdge R640
    • dual 26-core/52-thread CPUs
    • 1 TB RAM
    • 1.8 TB SATA SSD for ultra-high-speed local I/O (mounted as /ssd1)

  • 12 8-TB disks + 12 14-TB disks
  • 264 TB raw, 158 TB usable

Georgiou, WCAAR


GSAF POD | Shared POD for use by GSAF customers. A 2 TB Work area allocation is available for participating groups.

Contact Anna Battenhouse for more information.

  • Anna Battenhouse
  • Dhivya Arasappan
    • Dell PowerEdge R410
    • dual 4-core/8-thread CPUs
    • 64 GB RAM
    • Dell PowerEdge R720
    • dual 6-core/12-thread CPUs
    • 192 GB RAM

  • 24 6-TB disks
  • 144 TB raw, 90 TB usable

GSAF customer groups:
Alper, Atkinson, Baker, Barrick, Bolnick, Bray,  Browning, Cannatella, Contrearas, Crews, Drew, Dudley, Eberhart, Ellington, GSAFGuest, Hawkes, HoWinson, HyunJunKim, Kirisits, Leahy, Leibold, LiuHw, Lloyd, Manning, Matz, Mueller, Paull, Press, SSung, ZhangYJ

GSAF internal & instructional groups:
BioComputing2017, CCBB_Workshops_1,   FRI-BigDataBio

Hopefog (Ellington) POD | Shared POD for Ellington & Marcotte lab special projects
  • Anna Battenhouse
  • Danny Diaz
    • Dell PowerEdge R730xd
    • dual 10-core/20-thread CPUs
    • 250 GB RAM
    • 37 TB local RAID storage (mounted as /raid)
    • AMD GPU servers
    • 48-core/96-hyperthread EPYC CPU
    • 512 GB RAM
    • 8 AMD Radeon Instinct MI50 GPUs

  • 24 6-TB disks
  • 144 TB raw, 90 TB usable
Ellington, Marcotte
Iyer/Kim POD | Shared POD for members of the Vishy Iyer and Jonghwan Kim labs
  • Anna Battenhouse
    • Dell PowerEdge R410
    • dual 4-core/8-thread CPUs
    • 64 GB RAM
    • Dell PowerEdge R720
    • dual 6-core/12-thread CPUs
    • 192 GB RAM

  • 24 6-TB disks
  • 144 TB raw, 90 TB usable

Iyer, JKim
Lambowitz/CCBB POD

Shared POD for use by CCBB affiliates and the Alan Lambowitz lab.

  • Hans Hofmann, Rebecca Young Brim (Hofmann lab & CCBB affiliates)
  • Jun Yao (Lambowitz lab)
    • Dell PowerEdge R410
    • dual 4-core/8-thread CPUs
    • 64 GB RAM
    • Dell PowerEdge R420
    • dual 4-core CPUs
    • 96 GB RAM
    • Dell PowerEdge R720
    • dual 6-core/12-thread CPUs
    • 192 GB RAM

  • 18 16-TB disks
  • 288 TB raw, 170 TB usable

Lambowitz groups:
Lambowitz, LambGuest

CCBB groups:
Cannatella, Hawkes, Hillis, Hofmann, Jansen


Instructional groups:

LiveStrong DT POD | POD for members of Dell Medical School's LiveStrong Diagnostic Therapeutics group
  • Jeanne Kowalski
  • Song (Stephen) Yi
    • Dell PowerEdge R440
    • dual 14-core/28-thread CPUs
    • 192 GB RAM
    • 480 GB SATA SSD for ultra-high-speed local I/O (mounted as /ssd1)
    • AMD GPU server
    • 48-core/96-hyperthread EPYC CPU
    • 512 GB RAM
    • 8 AMD Radeon Instinct MI50 GPUs

  • 24 10-TB disks
  • 240 TB raw, 132 TB usable

Jeanne Kowalski groups:
CancerClinicalGenomics, ColoradoData, MultipleMyeloma

Stephen Yi groups:

Lauren Ehrlich groups:
Ehrlich_COVID19, Ehrlich

Instructional groups:

Marcotte POD | Single-lab POD for members of the Edward Marcotte lab
  • Anna Battenhouse
    • Dell PowerEdge R730
    • dual 18-core/36-thread CPUs
    • 768 GB RAM
    • Dell PowerEdge R610
    • dual 4-core/8-thread CPUs
    • 96 GB RAM
    • Dell PowerEdge R610
    • dual 4-core/8-thread CPUs
    • 96 GB RAM

  • 24 12-TB disks
  • 288 TB raw, 160 TB usable

Ochman/Moran POD | Shared POD for members of the Howard Ochman and Nancy Moran labs
  • Howard Ochman
    • Dell PowerEdge R430
    • dual 18-core/36-thread CPUs
    • 384 GB RAM
    • Dell PowerEdge R640
    • dual 26-core/52-hyperthread CPUs
    • 1024 GB RAM
    • 1.9 TB SSD for high-speed local I/O (mounted as /ssd1)

  • 24 8-TB disks
  • 192 TB raw, 106 TB usable

Ochman, Moran
Rental POD | Shared POD for POD rental customers
  • Anna Battenhouse (overall)
  • Daylin Morgan (Brock)
    • Dell PowerEdge R640
    • dual 18-core/36-thread CPUs
    • 768 GB RAM
    • 900 GB SATA SSD for ultra-high-speed local I/O (mounted as /ssd1)
    • Dell PowerEdge R640
    • dual 18-core/36-thread CPUs
    • 256 GB RAM
    • 450 GB SATA SSD for ultra-high-speed local I/O (mounted as /ssd1)

  • 12 12-TB disks
  • 144 TB raw, 90 TB usable
Brock, Calder, Curley, Champagne, Gaydosh, Manning, Sullivan
Wilke POD | For use by members of the Claus Wilke lab and the AG3C collaboration
  • Adam Hockenberry
  • Alexis Hill
    • Dell PowerEdge R930
    • quad 14-core/28-thread CPUs
    • 1 TB RAM

  • 18 16-TB disks
  • 288 TB raw, 170 TB usable


Multiple POD group membership

Depending on your affiliations, you may have access to more than one POD. For example, you may have active accounts on both the Lambowitz/CCBB POD and the GSAF POD.

You may also belong to more than one group on a given POD. For example, you may belong to both the Hofmann and GSAF groups on the GSAF POD. These POD groups control which shared Work and Scratch areas you can access.

To see what groups you are a member of, use the groups shell command from any POD compute server. For example:

$ groups
Hofmann GSAF

The first group in this list is your current group, which determines the group ownership for files you create. To change your current group so that new files are marked with a different group, use the newgrp shell command. For example:

$ groups
Hofmann GSAF

$ newgrp - GSAF
$ groups
GSAF Hofmann

Your primary POD group

Note that your primary (default) POD group is the group that is active (first in the groups list) when you first log on to a POD server.

If you would like to change your default POD group, please Contact Us.

You can also change the group assigned to existing files/directories using the chgrp command. For example, to make sure all files and directories under a particular directory are associated with a specific group, you would execute this command:

chgrp -R Hofmann <path_to_directory>

POD delegates

POD delegates act as local liaison to the BRCF for member organizations. Their responsibilities include:

  • Communicate and help enforce BRCF policies among their colleagues
  • Approve requests for user accounts
  • Recommend user and group quotas
  • Implement and monitor sub-directory organization in shared Work and Scratch areas
  • Evaluate requests for additional software installations and communicate such to BRCF
    • may also perform test installations to evaluate new software functionality
  • May have administrative rights on compute servers to assist their POD users with common issues (e.g. permissions)

POD access

Resource | Description | Network availability | For details
SSH | Remote access to the bash shell's command line, and remote file transfer commands such as scp and rsync.
  • Standard ssh command unrestricted from the UT campus network (excluding Dell Medical School)
  • Off-campus ssh access:
    • UT VPN service active, or
    • Public key installed in ~/.ssh/authorized_keys
      • then ssh -p 222
  • Notes:
    • Direct storage server access for file transfers is available only from the UT campus network or with the UT VPN service active.
    • Do not specify the non-standard port 222 if the UT VPN service is active.
Samba | Allows mounting of shared POD storage as a remote file system that can be browsed from your Windows or Mac desktop/laptop computer
  • Unrestricted from the UT campus network (excluding Dell Medical School)
  • Off-campus access requires the UT VPN service to be active
HTTPS | Access to web-based R Studio Server and JupyterHub applications
  • Unrestricted

POD compute server shell access

Compute servers can be accessed via ssh using either their formal BRCF name or their alias. For Mac and Linux users, ssh is available from any Terminal window. For Windows users, any SSH client program can be used, such as PuTTY.

ssh access is available from UT campus network addresses, with public key encryption (see below) or by using the UT VPN service.

Networks at Dell Medical School are not part of the UT campus network, so they require either public keys or the UT VPN service.

POD storage server file transfer access

BRCF storage servers do not offer interactive shell (ssh) access. However, they provide direct file transfer capability via scp or rsync. Using the storage server as a file transfer target is useful when you have many files and/or large files, as it provides direct access to the shared storage.

See this FAQ for usage tips: Transferring data to/from PODs

Direct storage server file transfers are available from UT campus network addresses or using the UT VPN service.

Passwordless Access via SSH/SFTP

You can set up password-less access to the pod nodes via ssh from a specific, trusted machine (your office machine, laptop, another POD, TACC, etc). 

To set up password-less ssh access from any Linux-like environment (e.g. Mac Terminal, cygwin on Windows, Windows 10 Linux subsystem), follow the steps below. If you are using PuTTY on Windows, see the PuTTY documentation.

  1. On the machine and account you want to ssh/sftp from (e.g. your laptop), generate a SSH key pair if you don't already have one:

    mkdir -p ~/.ssh
    chmod 700 ~/.ssh
    cd ~/.ssh
    if [ ! -f id_rsa ]; then ssh-keygen -b 4096 -t rsa; fi

    Use the default answers for ssh-keygen, and do not specify a password. This creates a public/private key pair in your local ~/.ssh directory (id_rsa.pub and id_rsa, respectively).

  2. Install your public key on the server you want to login to (e.g. any one of your POD compute nodes)
    1. If you are off campus and do not have access to the UT VPN service, Contact Us via email, so we can install it for you.
      • include your BRCF account name and attach your public key (~/.ssh/id_rsa.pub)
    2. If you are on campus or have access to the UT VPN service, you can use the ssh-copy-id command:

      ssh-copy-id user@hostname
      • If you are setting this up from off campus, you need to have the UT VPN service active for this command to work remotely.
      • If you are prompted to accept the SSH Host Key for the node you are connecting to, type "yes" to do so.

      • You will be asked for your password for the user@hostname, so enter it when prompted.

  3. Login to the machine to make sure it is working properly:

    ssh user@hostname
    # or from off campus
    ssh -p 222 user@hostname
    • If you are prompted for a password, then something went wrong with the setup. 

      • Most likely, it is file permissions on your local or remote home directory

        • You must not have group or world write access to your home directory or your ~/.ssh directory.

      • If you have multiple SSH keys (RSA, DSA, etc) on the machine you are connecting from then it could also be using the wrong key to connect.

Samba remote file system access

The Samba remote file system protocol allows you to mount POD storage from your desktop or laptop as if it were a local file system. Samba access is available from UT network addresses or using the UT VPN service. In addition, this remote_computing_software_download_instructions.pdf PDF provides detailed information about how to configure the UT VPN service, set up Duo 2-factor authentication, and install software for remote SSH access in Windows.

Networks at Dell Medical School are not part of the UT campus network, so require use of the UT VPN service.

Samba access to your Home directory and to shared Work areas is available on most PODs for most POD groups.

  • The Samba server name is the POD's storage server name as listed above.
  • The Samba share name for your Home directory is always users.
  • The Samba share name for shared Work areas is the group name.
  • Be sure to provide your BRCF account credentials to authenticate (by default your laptop or desktop account is used)

For Mac users, Samba resource access syntax is of the form smb://<server_name>/<share_name>. For example:

  • smb:// – Samba share for an individual Home directory on the GSAF POD.
  • smb:// – Samba share for the GSAF group's shared Work directory on the GSAF POD.
 Detailed instructions for Macs

To connect to your Group's Work area as a network volume on a Mac:

  • Go to Finder (click on the desktop or on the Finder icon in the Dock)
  • In Finder:
    • Select Go menu item, then Connect to Server
    • Enter URL: smb://
    • Connect
    • You'll see a dialog asking "You are attempting to connect to the server "xxx"
      • Select Connect
      • You'll see an "Enter your name and password" dialog
      • In "Enter your name and password" dialog, enter your BRCF account name and password
      • Select Connect
  • To return to this folder later, go back to Finder
    • In the sidebar, scroll past Favorites to the Locations section
    • Select your server name from there

For Windows users, Samba resource access syntax is of the form \\<server_name>\<share_name>. For example:

  • \\\users – Samba share for an individual Home directory on the GSAF POD.
  • \\\GSAF – Samba share for the GSAF group's shared Work directory on the GSAF POD.
 Detailed instructions for Windows

To connect to your Group's Work area as a network drive in Windows:

  • Bring up Windows Explorer (Windows key-E)
  • On Windows 10:
    • Select the "This PC" icon
    • Select the "Computer" menu item
    • You'll see "Map network drive" in the sub-menu
  • On Windows 7:
    • Select the "Computer" icon
    • You'll see "Map network drive" in the menu
  • Click "Map network drive". 
  • In the "Map Network Drive" dialog
    • Select a drive letter
    • In the "Folder" text box, enter your Group area URL
      • e.g. for the Sullivan group on the GSAF pod:
    • Check the "Connect using different credentials" checkbox
    • Click "Finish". This will bring up the "Enter Network Password" dialog
  • In the "Enter Network Password" dialog 
    • Select "Use another account"
    • Enter your BRCF user name and password
    • Check the "Remember my credentials" checkbox if desired
    • Click "OK"
    • A new Windows Explorer will appear with your Work area in focus

POD file systems

All of the POD compute servers have access to their own shared storage, where each user has an individual Home directory and shared Work and Scratch areas.

Home directory

Your Home directory on a POD is located under /stor/home. Home directories are meant for storing small files and all home directories have a 100 GB quota.

By default you are only allowed access to your own Home directory, although you may be able to view Home directory contents for other members of your group depending on the group's permissions policy.

Home directories are backed up weekly, and have snapshots enabled.

Home directory snapshots

Read-only snapshots are periodically taken of your home directory contents. Like Windows backups or Macintosh Time Machine, these backups only consume disk space for files that change (are updated or deleted), in which case the previous file state is "saved" in a snapshot.

NFS access to the snapshots (as documented in the following paragraphs) is currently not working correctly. You can, however, access them via Samba, although you may get an error when you first try to access them, and then have success on the second access attempt. You can also Contact Us to request that we recover files from either your snapshots or backups if needed.

Snapshots are stored in a .zfs/snapshot directory under your home directory. To see a list of the snapshots you currently have:

ls ~/.zfs/snapshot

To recover a changed or deleted file, first identify the snapshot it is in, then just copy the file from that snapshot directory to its desired location.
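For example, assuming the ls listing showed a snapshot named autosnap_2024-01-15 (snapshot names vary; use one that actually appears on your POD) and the file lived at project/notes.txt under your home directory, the recovery is a single copy:

```shell
# Hypothetical snapshot name and file path -- substitute values from
# your own "ls ~/.zfs/snapshot" listing:
cp ~/.zfs/snapshot/autosnap_2024-01-15/project/notes.txt ~/project/notes.txt
```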

Home directory quotas

Your 100 GB Home directory quota includes snapshot data. Snapshot backups only consume disk space for files that change (are updated or deleted), in which case the previous file state is "saved" in the snapshot. Snapshots are taken frequently and retained for several months, so their data can persist even after the associated Home directory file has been deleted.

The main consequence of this behavior is that snapshots can cause your 100 GB Home directory quota to be exceeded, even after non-snapshot files have been removed.

At the moment the only way to remove snapshots is to Contact Us, although we hope to implement a way for you to see their sizes and remove them directly in the future.

Two cases merit special mention. First, if large files are first copied to Home (e.g. when transferred from TACC), then moved to Work or Scratch, they may still take Home directory snapshot space. To avoid this issue, always transfer files directly to Work or Scratch. You can create a symbolic link to these areas in your Home directory to help with this. For example, if you are in the Hofmann group on the Lambowitz/CCBB POD, you can create Work and Scratch area symlinks like this:

cd # change to your home directory where the symlinks will be created
ln -s -f /stor/work/Hofmann work
ln -s -f /stor/scratch/Hofmann scratch

A second case involves using R Studio Server or JupyterHub Server on a POD. Both of these web-based applications use the Home directory as their default working directory. This is not an issue as long as files created there are relatively small, but data directories for larger projects should be located in Work or Scratch. Again, navigation to these areas can be simplified using symbolic links as shown above.

Shared Work and Scratch areas

Shared Work and Scratch areas are available for each POD group under /stor/work/<GroupName> and /stor/scratch/<GroupName> (for example, /stor/work/Hofmann, /stor/scratch/Hofmann). These areas are accessible only by members of the named group. Users can find out which group or groups they belong to by typing the groups command on the command line.

These Work and Scratch areas are designed for storage of shared project artifacts, so they have no predefined structure (i.e. user directories are not automatically created). Group members may create any directory structure that is meaningful to the group's work.

Shared Work areas are backed up weekly. Scratch areas are not backed up. Both Work and Scratch areas may have quotas, depending on the POD, generally in the multi-terabyte range.

Because it has a large quota and is regularly backed up and archived, your group's Work area is where large research artifacts that need to be preserved should be located.

Scratch, on the other hand, can be used for artifacts that are transient or can easily be re-created (such as downloads from public databases).

Weekly backups

All Home and Work directories are backed up weekly to a separate backup storage server (spinning disk). The backups take place sometime between Friday and Monday mornings according to the schedule below. This is currently not an incremental backup.

  • Friday 1 AM start
    • Chen/Wallingford POD
  • Saturday 1 AM start
    • Georgiou/WCAAR POD
    • Lambowitz/CCBB POD
    • Marcotte POD
  • Sunday 1 AM start
    • Iyer/Kim POD
    • Ochman/Moran POD
    • Wilke POD
  • Sunday 4 AM start
    • Dickinson/Cambronne POD
  • Monday 1 AM start
    • GSAF POD

Note that any directory in any file system tree named tmp, temp, or backups is not backed up. Directories with these names are intended for temporary files, especially large numbers of small temporary files. See "Cannot create tempfile" error and Avoid having too many small files.

Periodic archiving

Data on the backup server are archived to TACC's Ranch tape archive roughly once every 9 months. Current archives are as of:

  • 2018 - 4/11
  • 2017 - 7/20
  • 2016 - 12/06
  • legacy data from mid-2016 and earlier

Please Contact Us if you need something retrieved from archives.

Using POD resources wisely

Remember that PODs are shared resources, and it is important to be aware of how your work can affect others trying to use POD resources. Here are some tips for using POD resources wisely.

Computational considerations

Running multiple processes

While POD compute servers do not have a batch system, you can still run multiple tasks simultaneously in several different ways. 

For example, you can use terminal multiplexer tools like screen or tmux to create virtual terminal sessions that won't go away when you log off. Then, inside a screen or tmux session you can create multiple sub-shells where you can run different commands.

You can also use the command line utility nohup to start processes in the background, again allowing you to log off and still have the process running.
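A minimal nohup sketch (here "sleep 2" stands in for a real long-running command of your own):

```shell
# Launch a long-running command in the background; nohup detaches it
# from the terminal so it keeps running after you log off, and all
# output is captured in a log file. "sleep 2" is just a stand-in.
nohup sleep 2 > job.log 2>&1 &
echo "background job PID: $!"
```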


Having said all this, you should not run too many jobs at a time, because you are just using one compute server, and you're not the only one using the machine!
How many is "too many"? That really depends on what kind of job it is, what compute/input-output mix it has, and how much RAM it needs. As a general rule, don't run more simultaneous jobs on a POD compute server than you would run on a single TACC compute node.
Before running multiple jobs, you should check RAM usage (free -g will show usage in GB) and see what is already running using the top program (press the 1 key to see per-hyperthread load), or with a command like this:

ps -ef | grep -v root | grep -v bash | grep -v sshd | grep -v screen | grep -v tmux | grep -v 'www-data'
Finally, be sure to lower the priority of your processes using renice as described below (e.g. renice -n 15 -u `whoami`).

Lower priority for large, long-running jobs

If you have one or more jobs that use multiple threads, or do significant I/O, their execution can affect system responsiveness for other users.

To help avoid this, please use the renice tool to lower the priority of your tasks (a priority of 15 is a good choice). It's easy to do; here's a quick tutorial.

For example, before you start any tasks, you can set the default priority to nice 15 as shown here. Anything you start from then on (from this shell) should inherit the nice 15 value.

renice +15 $$

Once you have tasks running, their priority can be changed for all of them by specifying your user name:

renice +15 -u `whoami`

or for a particular process id (PID):

renice +15 -p <some PID number>

Multi-processing: cores vs hyperthreads

Many programs offer an option to divide their work among multiple processes, which can reduce the total clock time the program will run. The option may refer to "processes", "cores" or "threads", but all of these actually target the available computing units on a server. Examples include: samtools sort's --threads option; bowtie2's -p/--threads option; and, in R, library(doParallel); registerDoParallel(cores = NN).

One thing to keep in mind here is the difference between cores and hyperthreads. Cores are physical computing units, while hyperthreads are virtual computing units -- kernel objects that "split" each core into two hyperthreads so that the single compute unit can be used by two processes.

The Available PODs table above describes the compute servers associated with each BRCF pod, along with their available cores and (hyper)threads. (Note that most servers are dual-CPU, meaning that total core count is double the per-CPU core count, so a dual 4-core CPU machine would have 8 cores.) You can also see the hyperthread and core counts on any server via:

cat /proc/cpuinfo | grep -c 'core id'           # counts logical processors (hyperthreads)
cat /proc/cpuinfo | grep 'cpu cores' | head -1  # physical cores per CPU socket

(Yes, the fact that you count 'core id' lines to get hyperthreads, while 'cpu cores' reports physical cores, is confusing. But what do you expect -- this is Unix (smile))

Since hyperthreads look like available computing units, parallel processing options that detect "cores" usually really detect hyperthreads. Why does this matter? 

The bottom line:

  • Virtual hyperthreads are useful if the work a process is doing periodically "yields", typically to perform input/output operations, since waiting for I/O allows the core to be used by other work. Many NGS tools fall into this category since they read/write sequencing files.
  • Physical cores are best used when a program's work is compute-bound. When processing is compute bound -- as is typical of matrix-intensive machine learning algorithms -- hyperthreads actually degrade performance, because two compute-bound hyperthreads are competing for the same physical core, and there is OS-level overhead involved in process switching between the two.

So before you select a process/core/thread count for your program, consider whether it will perform significant I/O. If so, you can specify a higher count. If it is compute bound (e.g. machine learning), be sure to specify a count low enough to leave free hyperthreads for others to use.
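One way to check both counts before choosing is the sketch below, using standard Linux tools (nproc and lscpu are generally present on Linux servers):

```shell
# nproc reports logical CPUs (hyperthreads). Counting unique
# (core, socket) pairs in lscpu's parseable output gives the total
# number of physical cores across all sockets.
THREADS=$(nproc)
CORES=$(lscpu -p=CORE,SOCKET | grep -v '^#' | sort -u | wc -l)
echo "hyperthreads: $THREADS, physical cores: $CORES"
```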

Note that this issue with machine learning (ML) workflows being heavily compute bound is the main reason ML processing is best run on GPU-enabled servers. While most of our current PODs do not have GPUs, GPU-enabled servers are available at TACC. Additionally, Austin's Advanced Micro Devices, who are trying to compete with NVIDIA in the GPU market, will soon be offering a "GPU cloud" that will be available to UT researchers. We're working with them on this initiative and will provide access information when it is available.

Input/Output considerations

Avoid heavy I/O load

Please be aware of the potential effects of the input/output (I/O) operations in your workflows.

Many common bioinformatics workflows are quasi I/O bound; in other words, they do enough input/output such that it is essentially the gating factor in execution time. This is in contrast to simulation or modeling type applications, which are essentially compute bound.

It is underappreciated that I/O is much more difficult to parallelize than compute. To add more compute power, one can generally just increase the number of processors, their speed, and optimize their CPU-to-memory architecture, which greatly affects compute-bound tasks.

I/O, on the other hand, is harder to parallelize. Large compute clusters such as TACC expose large single file system namespaces to users (e.g. Work, Scratch), but these are implemented using multiple redundant storage systems managed by a sophisticated parallel file system (Lustre, at TACC) to appear as one. Even so, file system outages at TACC caused by heavy I/O are not uncommon.

In the POD architecture, all compute servers share a common storage server, whose file system is accessed over a high-bandwidth local network (NFS over 10 Gbit ethernet). This means that heavy I/O initiated from any compute server can negatively affect users on all compute servers.

For example, as few as three simultaneous invocations of gzip or samtools sort on large files can degrade system responsiveness for other users. If you notice that doing an ls or command completion on the command line seems to be taking forever, this can be a sign of an excessive I/O load (although very high compute loads can occasionally cause similar issues).

Transfer large files directly to the storage server

BRCF storage servers are just Linux servers, but ones you access from compute servers over a high-speed internal network. While they are not available for interactive shell (ssh) access, they provide direct file transfer capability via scp or rsync.

Using the storage server as a file transfer target is useful when you have many files and/or large files, as it provides direct access to the shared storage. Going through a compute server is also possible, but involves an extra step in the path – from the compute server to its network-attached storage server.

The solution is to target your POD's storage server directly using scp or rsync. When you do this, you are going directly to where the data is physically located, so you avoid extra network hops and do not burden heavily-used compute servers.

Note that direct storage server file transfer access is only available from UT network addresses, from TACC, or using the UT VPN service.

Please see this FAQ for more information: I'm having trouble transferring files to/from TACC.

File System considerations

Avoid having too many small files

While the ZFS file system we use is quite robust, we can experience issues in the weekly backup and periodic archiving process when there are too many small files in a directory tree.

What is too many? Ten million or more.

If the files are small, they don't take up much storage space. But the fact that there are so many causes the backup or archiving to run for a really long time. For weekly backups, this can mean that the previous week's backup is not done by the time the next one starts. For archiving, it means it can take weeks on end to archive a single directory that has many millions of small files.

Backing up gets even worse when a directory with many files is just moved or renamed. In this case the files need to be deleted from the old location and added to the new one – and both of these operations can be extremely long-running.

To see how many files (termed "inodes" in Unix) there are under a directory tree, use the df -i command. For example:

df -i /stor/work/MyGroup/my_dir

The results might look something like this:

Filesystem               Inodes     IUsed        IFree IUse% Mounted on
stor/work/MyGroup  103335902213  28864562 103307037651    1% /stor/work/MyGroup

The IUsed column (here 28864562) is the number of inodes (files plus directories) in the directory tree listed under Filesystem (here /stor/work/MyGroup). Note that the reported Filesystem may be different from the one you queried, depending on the structure of the ZFS file systems.

There are several workarounds for this issue.

1) Move the files to a temporary directory.
The backup process excludes any sub-directory anywhere in the file system directory tree named tmp, temp, or backups. So if there are files you don't care about, just rename the directory to, for example, tmp. There will be a one-time deletion of the directory under its previous name, but that would be it. 

2) Move the directories to a Scratch area.
Scratch areas are not backed up, so will not cause an issue. The directory can be accessed from your Work area via a symbolic link. Please Contact Us if you would like us to help move large directories of yours to Scratch (we can do it more efficiently with our direct access to the storage server).

3) Zip or Tar the directory
If these are important files you need to have backed up, zipping or tarring the directory is the way to go. This converts a directory and all its contents into a single, larger file that can be backed up or archived efficiently. Please Contact Us if you would like us to help with this, since with our direct access to the storage server we can perform zip and tar operations much more efficiently than you can from a compute server.

If your analysis pipeline creates many small files as a matter of course, you should consider modifying the processing to create them in a tmp directory, then zipping or tarring them as a final step.
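A sketch of the tar approach ("small_files" is a stand-in for your own directory; here a demo copy is created so the commands can be run end-to-end):

```shell
# Bundle a directory of many small files into one compressed archive
# that backs up and archives efficiently.
mkdir -p small_files && touch small_files/a.txt small_files/b.txt
tar -czf small_files.tar.gz small_files/
# Always verify the archive lists its contents BEFORE deleting originals:
tar -tzf small_files.tar.gz
rm -rf small_files/
```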

POD-wide load monitoring with Ganglia

Command line tools such as top, free, and ps -ef can give you a view of the processes and memory usage on a specific compute server.

However, POD responsiveness depends not only on what you are doing on one compute server, but also on work being performed on other compute servers and on direct or indirect I/O to the shared storage server. So it requires special tools to provide a view of POD resource loads as a whole.

As a result, we have implemented the Ganglia load monitoring tool on all POD servers. Its web-accessible user interface shows activity across POD resources (available only from UT subnets).

Note that this site uses a self-signed certificate, so you will need to add a security exception in your browser the first time you access it.
