
Below are example grant "Facilities and Resources" sections for BRCF POD resources, TACC, and the Bioinformatics Consulting Group. Wording that should be changed on a per-lab basis is marked in bold italics.

Local computational and storage infrastructure

Local research computing resources consist of <your number of compute servers> computational servers connected to a high-capacity shared storage server. The compute servers are <for example: a Dell PowerEdge R430 with dual 18-core/36-hyperthread CPUs and 384 GB RAM, and a Dell PowerEdge R410 with dual 4-core/8-hyperthread CPUs and 64 GB RAM; see this page for a list of servers on your POD and their configurations: POD Resources and Access#AvailablePODs>. The shared storage server is <for example, a 24-bay SuperMicro enclosure with 64 GB RAM, or a 36-bay ThinkMate NearLine server with 128 GB RAM>, populated with <for example, 24 6-TB disks for 144 TB of raw storage and 80 TB usable>. All <name of your group> data, including <for example, raw and processed data from NGS experiments> as well as user and administrative data, are stored on the shared storage server. The ZFS file system provides a large, contiguous address space, a high level of redundancy (2 of every 6 disks used for redundancy), automatic in-place data compression, and file system data integrity validation services. Compute servers communicate with the shared storage server over 10-gigabit Ethernet, providing fast local storage for I/O-intensive operations. Both compute and storage servers run Ubuntu Linux 18.04 server edition.
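The raw-versus-usable storage figures in the example configuration can be sanity-checked with simple arithmetic. This is a hedged sketch assuming the 24-bay example above with disks grouped in 6-disk RAIDZ2 vdevs ("2 of every 6 disks" used for redundancy); the exact vdev layout on a given POD may differ.

```python
# Back-of-the-envelope check of the example storage figures above.
# Assumption: 24 x 6-TB disks arranged in 6-disk RAIDZ2 groups
# (2 parity disks per 6-disk group).
disks, disk_tb = 24, 6
raw_tb = disks * disk_tb            # 24 * 6 = 144 TB raw
data_tb = raw_tb * (6 - 2) / 6      # 144 * 4/6 = 96 TB after parity

print(raw_tb)   # 144
print(data_tb)  # 96.0
# The quoted ~80 TB usable reflects additional ZFS overhead
# (metadata, reservations, and a recommended free-space margin).
```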

The compute and storage resources are housed at the University of Texas data center (UDC) and are maintained and administered by our local Biomedical Research Computing Facility (BRCF) group (see https://wikis.utexas.edu/display/RCTFusers). The BRCF provisions all compute servers with a wide variety of bioinformatics tools and utilities, along with web-accessible versions of RStudio and Python JupyterHub. Other services the BRCF provides include automated weekly backups, periodic archiving of backup system data to the Ranch tape archive system at the Texas Advanced Computing Center (TACC), automated software deployment, operating system and hardware monitoring, and ongoing user support services.

Data security is provided using Unix user and group permission settings. All user home directories are accessible only by the user. Shared Work areas (where most research project data and downstream analysis artifacts are stored) are each associated with a specific Unix group and are accessible only by members of that group, as are shared per-group Scratch areas. Assignment of users to groups is controlled by BRCF administrators in conjunction with <name of your group> personnel. Compute servers are accessible via SSH from inside the UT campus network, and from outside the campus network using either per-user public-key authentication or the UT VPN service. Storage servers are accessible via encrypted file transfer services such as SCP, or via group-only-accessible Samba mounts inside the UT campus network. Physical security is provided by the UT data center, which has highly controlled access. No printers or removable media are accessible from the compute or storage servers.
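The permission scheme above can be illustrated with a minimal sketch. The directory names and group layout here are hypothetical, not actual BRCF paths; on a real POD, group assignment and directory ownership are managed by BRCF administrators.

```python
import os
import stat
import tempfile

# Minimal sketch of the Unix permission model described above.
# All paths are illustrative placeholders created in a temp directory.
base = tempfile.mkdtemp()

# User home directory: accessible only by the user (rwx------).
home = os.path.join(base, "home_user1")
os.mkdir(home)
os.chmod(home, 0o700)  # mkdir's mode is masked by umask, so set it explicitly

# Shared Work area: group-only access; the setgid bit (the leading 2)
# makes new files inherit the directory's group.
work = os.path.join(base, "work_mylab")
os.mkdir(work)
os.chmod(work, 0o2770)

print(oct(stat.S_IMODE(os.stat(home).st_mode)))  # 0o700
print(oct(stat.S_IMODE(os.stat(work).st_mode)))  # 0o2770
```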

Texas Advanced Computing Center

The Texas Advanced Computing Center (TACC; https://www.tacc.utexas.edu/) at The University of Texas at Austin is one of the leading centers of computational excellence in the United States. The center provides access to high-performance computing, data visualization, and storage resources that are critical for bandwidth-intensive computational research. Available resources include Lonestar5, with 1,252 Cray XC40 dual-CPU, 24-core compute nodes with 64 GB RAM, and Stampede2, a hybrid cluster with 4,200 Knights Landing nodes (68 4-hyperthread cores, 96 GB RAM) and 1,736 Skylake nodes (24 2-hyperthread cores, 192 GB RAM). All TACC compute clusters have access to Stockyard, a global 20-petabyte high-performance Lustre parallel file system, along with cluster-local multi-petabyte scratch file systems for computation I/O. These compute resources provide an extensive suite of software for bioinformatics and computational biology, including R/Bioconductor, Python, a variety of NGS aligners (Bowtie, BWA, STAR, HISAT2, kallisto, etc.), and ancillary software including bedtools, Cufflinks, GATK, and more. TACC also provides storage for research data collections on Corral, a 6-petabyte online disk storage system, and on Ranch, a 100-petabyte magnetic tape archive. Also available for visualization of biological data are the resources of the 2,900 sq. ft. Advanced Scientific Visualization Laboratory, which includes a 360-degree wrap-around projection system for 3D stereo viewing, an editing suite, and the help of the visualization center staff. More information can be found at www.tacc.utexas.edu.

The <name of your group> currently has a <your number of Stampede2 SUs> node-hour allocation on Stampede2, a <your number of Lonestar5 SUs> node-hour allocation on Lonestar5, and a <probably 5 TB> storage allocation on Corral.

Bioinformatics Consulting Group

The Bioinformatics Consulting Group (BCG) is part of the Center for Biomedical Research Support (CBRS) at UT Austin and provides support for students, postdoctoral fellows, and faculty interested in applying computational approaches to biological problems. The BCG provides computational biology and bioinformatics services ranging from consultations and standard computational pipelines run for a set service fee, to customized pipeline development and analysis in collaboration with researchers. BCG researchers are experienced in many areas of bioinformatics and NGS analysis, including data from multiple NGS platforms, de novo genome assembly, transcriptome assembly, various alignment algorithms, annotation, RNA-seq, ChIP-seq, scRNA-seq, exome analysis, R, Bioconductor, Python, databases, statistical and machine learning algorithms, classification and regression modeling, pathway analysis, motif analysis, and more.
