
Welcome to the Biomedical Research Computing Facility (BRCF) Users wiki! Formerly known as the Research Computing Task Force (RCTF), the BRCF now has an official organizational home in the Center for Biomedical Research Support (CBRS).

Our mission: Biomedical Research Computing Facility (BRCF) "PODs" (small compute clusters) provide a standard hardware, software and storage architecture, suitable for local, interactive biomedical research computing, that can be efficiently managed.

Upcoming Maintenance

Note: The BRCF will be upgrading the operating system (OS) software on all our POD servers over the next several months. See this page for important information about how this change affects you: Winter 22-23 OS Upgrades. That page also provides a tentative schedule showing when each POD server will be upgraded.

To perform the OS upgrades, the following PODs will have an extended two-day maintenance window, from 8am Tuesday January 31 through 6pm Wednesday February 1, 2023:

  • Hopefog/Ellington, Livestrong

Regular maintenance on all other PODs will take place Tuesday January 31, 8am - 6pm.

Change to SSH key access

After the Tuesday June 28, 2022 maintenance, remote SSH access to POD compute servers no longer uses non-standard port 222. Instead, it uses standard port 22, so no port (-p) option is required in the ssh command.
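For example (the account and server names below are placeholders; substitute your own POD compute server):

```shell
# Before the change, the non-standard port had to be given explicitly:
#   ssh -p 222 my_account@my_pod_server

# After the change, standard port 22 is the default, so no -p option is needed:
ssh my_account@my_pod_server
```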

Architecture Overview

BRCF provides local, centralized storage and compute systems called PODs. A POD consists of one or more compute servers along with a shared storage server. Files on POD storage can be accessed from any server within that POD.

A graphic illustration of the BRCF POD Compute/Storage model is shown below:

Features of this architecture include:

  • A large set of Bioinformatics software available on all compute servers
    • interactive (non-batch) environment supports exploratory analyses and long-running jobs
    • web-based R Studio and JupyterHub servers implemented upon request
  • Storage managed by the high-performance ZFS file system, with
    • built-in data integrity, superior to standard Unix file systems
    • large, contiguous address space
    • automatic file compression
    • RAID-like redundancy capabilities
    • works well with inexpensive, commodity disks
  • Appropriate organization and management of research data
    • weekly backup of Home and Work to spinning disk at the UT Data Center (UDC)
    • periodic archiving of backup data to TACC's ranch tape archive system (~once/year)
  • Centralized deployment, administration and monitoring
    • OS configuration and 3rd-party software installed via the Puppet deployment tool
    • global RCTF user and group IDs, deployable to any POD
    • self-service Account Management web interface
    • OS monitoring via Nagios tool
    • hardware-level monitoring via Out-Of-Band Management (OOBM) interfaces
      • IPMI, iDRAC
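As a sketch of how the ZFS features above are typically exercised from the command line (the pool and dataset names here are hypothetical, and these commands require administrator privileges on the storage server):

```shell
# Check pool health and RAID-like (raidz) redundancy status
zpool status stor

# Enable automatic lz4 compression on a dataset (hypothetical dataset name)
zfs set compression=lz4 stor/work

# See how well existing data is compressing
zfs get compressratio stor/work

# List datasets with space used in the large, shared address space
zfs list -o name,used,avail,mountpoint
```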

BRCF Architecture goals

The Biomedical Research Computing Facility (BRCF) is a core facility in UT's Center for Biomedical Research Support (CBRS), under the Office of the Vice President for Research. Our small compute clusters, known as "PODs", provide a standard hardware, software and storage architecture, suitable for local research computing, that can be efficiently managed.

The BRCF grew out of the earlier Research Computing Task Force (RCTF), a working group of IT-knowledgeable UT staff and students from CSSB, GSAF and CCBB. Today our team consists of staff from Molecular Biosciences, CBRS, and the College of Natural Sciences Office of Information Technology (CNS-OIT).

Broadly, our goals are to supplement TACC's offerings by providing extensive local storage, including backups and archiving, along with easy-access non-batch local compute.

Before the BRCF initiative, labs had their own legacy computational equipment and storage, as well as a hodgepodge of backup solutions (a common solution being "none"). This diversity combined with dwindling systems administration resources led to an untenable situation.

The Texas Advanced Computing Center (TACC) provides excellent computational resources for performing large-scale computations in parallel. However, its batch-system orientation is not well suited to running smaller, one-off computations, to developing scripts and pipelines, or to executing very long-running (> 2 day) computations. While TACC offers a no-cost tape archive facility (ranch), its persistent storage offerings (corral, the global work file system) can be cumbersome to use for collaboration.

The BRCF POD architecture has been designed to address these issues and needs.

  • Provide adequate local storage (spinning disk) in a large, non-partitioned address space.
    • Implement some common file system structures to assist data organization and automation
  • Provide flexible local compute capability with both common and lab-specific bioinformatics tools installed.
    • Augment TACC offerings with non-batch computing environment
    • Be robust and "highly available" (but 24x7 uptime not required)
  • Provide automated backups to spinning disk and periodic data archiving to TACC's ranch tape system.
  • Target the "sweet spot" of cost-versus-function commodity hardware offerings
    • Aim for rolling hardware upgrades as technology evolves
  • Provide centralized management of POD equipment
    • Automate software and configuration deployment and system monitoring
    • Make it easy to deploy new equipment
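As a hypothetical illustration of the common file system structure mentioned above (the exact mount points vary by POD; the paths below are assumptions for illustration only):

```shell
# Shared storage is mounted at the same paths on every compute server
# in a POD, so files are visible POD-wide (all paths hypothetical):
ls /stor/home/$USER       # per-user Home area (backed up weekly)
ls /stor/work/MyLab       # shared group Work area (backed up weekly)
ls /stor/scratch/MyLab    # Scratch area for large intermediate files
```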

