
Getting to a remote computer

The Terminal window

SSH

ssh is an executable program that runs on your local computer and allows you to connect securely to a remote computer.

On Macs and Linux, and on Windows under Git Bash or Cygwin, you run it from a Terminal window. Answer yes to the SSH security question when prompted.

SSH to access Stampede at TACC
ssh your_TACC_userID@stampede.tacc.utexas.edu

If you're using PuTTY as your Terminal on Windows:

  • Double-click the putty.exe icon
  • In the PuTTY Configuration window
    • make sure the Connection type is SSH
    • enter stampede.tacc.utexas.edu for Host Name
    • click the Open button
    • answer Yes to the SSH security question
  • In the PuTTY terminal
    • enter your TACC user id at the login as: prompt, then press Enter

The bash shell

You're now at a command line! It looks as if you're running directly on the remote computer, but really there are two programs communicating: your local Terminal and the remote Shell. There are many shell programs available in Linux, but the default is bash (Bourne-again shell). The Terminal is pretty "dumb" – just sending your typing over the secure SSH connection to TACC, then displaying the text sent back by the shell. The real work is being done on the remote computer, by programs called by the bash shell.
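
A quick way to convince yourself that your commands really run on the remote computer is to ask for the machine's name and the shell you're running (the exact host name you see will differ):

hostname       # prints the name of the remote (TACC) machine you're logged into
echo $SHELL    # shows the shell program in use – typically /bin/bash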

Setting up your environment

First create a few directories and links we will use (more on these later).

You can copy and paste these lines from the code block below into your Terminal window. Just make sure you hit "Enter" after the last line.

cd 
ln -s -f $SCRATCH scratch
ln -s -f $WORK work
ln -s -f /corral-repl/utexas/BioITeam

mkdir -p $HOME/local/bin
cd $HOME/local/bin
ln -s -f /corral-repl/utexas/BioITeam/bin/launcher_creator.py
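
If you'd like to verify that the links were created (optional – the exact listing will vary by account), list them with ls -l; symbolic links show up with an arrow (->) pointing to their target:

cd
ls -l scratch work BioITeam
ls -l $HOME/local/bin/launcher_creator.py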

 

Now execute the lines below to set up a login script, called .profile_user. This script will be executed whenever you log in to stampede.

cd
cp /work/01063/abattenh/seq/code/script/tacc/stampede_dircolors .dircolors
cp /work/01063/abattenh/seq/code/script/tacc/stampede_corengs_profile .profile_user
chmod 600 .profile_user
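
If you want to preview the new settings before logging out (the next step, logging out and back in, is still the most reliable way to pick them up), you can source the file in your current session – this assumes, as here, that .profile_user is a plain bash script:

source ~/.profile_user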

 

Finally, log off and log back in to stampede.tacc.utexas.edu. You should see a new command prompt:

stamp:~$

And nice directory colors when you list your home directory:

ls

So why don't you see the .profile_user file you copied to your home directory? Because all files whose names start with a period ("dot files") are hidden by default. To see them, add the -a (all) option to ls:

ls -la

File systems at TACC

Local file systems

There are 3 local file systems available on any TACC cluster (stampede, lonestar, etc.), each with different characteristics. All these local file systems are very fast and set up for parallel I/O (Lustre file system).

On stampede these local file systems have the following characteristics:

                      Home          Work                         Scratch
quota                 5 GB          400 GB                       12+ PB (basically infinite)
policy                backed up     not backed up, not purged    not backed up, purged if not
                                                                 accessed recently (~10 days)
access command        cd            cdw                          cds
environment variable  $HOME         $WORK                        $SCRATCH
root file system      /home         /work                        /scratch

Use each area for:

  • Home – Small files such as scripts that you don't want to lose.
  • Work – Medium-sized artifacts you don't want to copy over all the time. For example, custom programs you install (these can get large), or annotation files used for analysis.
  • Scratch – Large files accessed from batch jobs. Your starting files will be copied here from somewhere else, and your results files will be copied back to your home system.
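
You can always check where these areas live for your account by echoing the corresponding environment variables:

echo $HOME
echo $WORK
echo $SCRATCH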

When you log in, the system gives you information about your disk quotas and your compute allocation (SU) balances:

--------------------- Project balances for user abattenh ----------------------
| Name           Avail SUs     Expires | Name           Avail SUs     Expires |
| CancerGenetics     10627  2014-09-30 | genomeAnalysis     94284  2015-03-31 |
------------------------ Disk quotas for user abattenh ------------------------
| Disk         Usage (GB)     Limit    %Used   File Usage       Limit   %Used |
| /home1              0.0       1.1     0.29          463     1001000    0.05 |
| /work              42.1     250.0    16.85        16281      500000    3.26 |
-------------------------------------------------------------------------------

change directory exercise

When you first log in, you start in your home directory. Use these commands to change to your other file systems, and watch how your command prompt changes to show your location.

cdw
cds
cd

The cd command with no arguments takes you to your home directory on any Linux/Unix system. The cdw and cds commands are specific to the TACC environment.
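
If you're ever unsure where a command has left you, pwd prints the full path of your current (working) directory. For example (the exact path will differ for your account):

cdw
pwd     # something like /work/01234/your_userid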

Corral

Corral is a gigantic (multiple PB) storage system (spinning disk) where researchers can store data. UT researchers may request up to 5 TB of corral storage through the normal TACC allocation request process. Additional space on corral can be rented for ~$210/TB/year.

The UT/Austin BioInformatics Team, a loose group of researchers, maintains a common directory area on corral.

ls /corral-repl/utexas/BioITeam

Files we will use in this course are in a subdirectory there:

ls /corral-repl/utexas/BioITeam/core_ngs_tools

A couple of things to keep in mind regarding corral:

  • corral is a great place to store data in between analyses (see the sketch after this list):
    • Copy your data from corral to $SCRATCH
    • Run your analysis batch job
    • Copy your results back to corral
  • On stampede you can access corral directories from login nodes (like the one you're on now), but your batch jobs cannot access it.
    • This is because corral is a network file system, like Samba or NFS.
    • Since stampede has so many compute nodes, it doesn't have the network bandwidth that would allow them all to access corral simultaneously.
  • Occasionally corral can become unavailable. This can cause any command that tries to access corral data to hang.
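
Put together, a typical corral workflow looks something like the sketch below. The project directory and file names here are made up for illustration – substitute your own paths:

# 1) copy input data from your corral area to $SCRATCH (hypothetical paths)
mkdir -p $SCRATCH/my_project
cp /corral-repl/utexas/MyLab/my_project/sample1.fastq.gz $SCRATCH/my_project/

# 2) run your analysis as a batch job against the $SCRATCH copy
#    (batch job submission is covered later in the course)

# 3) copy the results back to corral for safekeeping
cp $SCRATCH/my_project/results.bam /corral-repl/utexas/MyLab/my_project/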

Ranch

Ranch is a gigantic (multiple PB) tape archive system where researchers can archive data. UT researchers may request large (multi-TB) ranch storage allocations through the normal TACC allocation request process.

There is currently no charge for ranch storage. However, since the data is stored on tape it is not immediately available – robots find and mount the appropriate tapes when the data is requested, and it can take minutes to hours for the data to appear on disk. (The metadata about your data – the directory structures and file names – is always accessible, but the actual data in the files is not on disk until it is "staged".) See the ranch user guide for more information: https://www.tacc.utexas.edu/user-services/user-guides/ranch-user-guide.

Once that data is staged to the ranch disk it can be copied to other places. However, the ranch file system is not mounted as a local file system from the stampede or lonestar clusters. So remote copy commands are needed to copy data to and from ranch (e.g. scp, sftp, rsync).
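
For example, TACC clusters typically define the environment variables $ARCHIVER (the ranch host name) and $ARCHIVE (your ranch directory); assuming those are set for your account, archiving and retrieving a tar file might look like this (the tar file name is just a placeholder):

# bundle up a directory and copy it to ranch
tar cvf my_project.tar $WORK/archive/original/2014_05.core_ngs
scp my_project.tar ${ARCHIVER}:${ARCHIVE}/

# later, copy it back (remember: the file must be staged from tape first)
scp ${ARCHIVER}:${ARCHIVE}/my_project.tar .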

Staging your data

So, your sequencing center has some data for you. They may send you a list of web links to use to download the data, or if you're a GSAF customer with an account on fourierseq.icmb.utexas.edu, the name of a directory to access.

The first task is to get this data to a permanent storage area. This is not one of the TACC local file systems! Corral is a great place for it, as is a server maintained by your lab or company.

We're going to pretend – just for the sake of this class – that your permanent storage area is in your TACC work area. Execute this command to make your "archive" directory and its subdirectories.

mkdir -p $WORK/archive/original/2014_05.core_ngs

Here's an example of a "best practice". Wherever your permanent storage area is, it should have a rational sub-directory structure that reflects its contents. It's easy to process a few NGS datasets, but when they start multiplying like tribbles, good organization and naming conventions will be the only thing standing between you and utter chaos!

For example:

  • original – for original sequencing data (compressed fastq files)
    • subdirectories named by year_month.<project or purpose>
  • aligned – for alignment artifacts (bam files, etc)
    • subdirectories named by year_month.<project or purpose>
  • analysis – further downstream analysis
    • reasonably named subdirectories, often by project
  • genome – reference genomes and other annotation files used in alignment and analysis
    • subdirectories for different reference genomes
    • e.g. ucsc/hg19, ucsc/sacCer3, mirbase/v20
  • code – for scripts and programs you and others in your organization write
    • ideally maintained in a version control system such as git, subversion or cvs.
    • it's often easiest to name sub-directories after the people who write the code.
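
As a concrete sketch, a layout like the one above could be created with mkdir -p – the year_month and project names below are just placeholders:

cd $WORK/archive
mkdir -p original/2014_05.core_ngs
mkdir -p aligned/2014_05.core_ngs
mkdir -p analysis/my_project
mkdir -p genome/ucsc/hg19 genome/ucsc/sacCer3 genome/mirbase/v20
mkdir -p code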

Download from a link – wget

Well, you don't have a desktop at TACC to "Save as" to, so what to do with a link? The wget program knows how to download files from web URLs using protocols such as http, https and ftp.

wget exercise

Get ready to run wget from the directory where you want to put the data. Don't press Enter after the wget command – just put a space.

cd $WORK/archive/original/2014_05.core_ngs
wget 

Here are two web links:

Right-click on the 1st link in your browser, then select "Copy link location" from the menu. Now go back to your Terminal. Put your cursor after the space following the wget command, then either right-click or Paste. The command line to be executed should look like this:

wget http://web.corral.tacc.utexas.edu/BioITeam/yeast_stuff/Sample_Yeast_L005_R1.cat.fastq.gz

Now press Enter to get the command going. Repeat for the 2nd link.
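
You can confirm the files arrived, and check their sizes, with ls:

ls -lh $WORK/archive/original/2014_05.core_ngs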

Copy from a corral location - cp or rsync

Suppose you have a corral allocation where your organization keeps its data, and that the sequencing data has been downloaded there. You can use various Linux commands to copy the data locally from there to your $SCRATCH area.

cp exercises

The cp command copies one or more files from a local source to a local destination. Its most common form is:

cp [options] <source file 1> <source file 2> ... <destination directory>

Make a directory in your scratch area and copy a single file to it. The trailing slash ("/") on the destination says it is a directory.

cp - single file copy
mkdir -p $SCRATCH/data/test1
cp  /corral-repl/utexas/BioITeam/web/tacc.genomics.modules  $SCRATCH/data/test1/
ls $SCRATCH/data/test1

Copy a directory to your scratch area. The -r argument says "recursive".

cp - directory copy
cds
cd data
cp -r /corral-repl/utexas/BioITeam/web/general/ general/

What files were copied over?

ls general
BEDTools-User-Manual.v4.pdf  SAM1.pdf  SAM1.v1.4.pdf

local rsync exercise

The rsync command is typically used to copy whole directories. What's great about rsync is that it only copies what has changed in the source directory. So if you regularly rsync a large directory to TACC, it may take a long time the 1st time, but the 2nd time (say after downloading more sequencing data to the source), only the new files will be copied.

rsync is a very complicated program, with many options (http://rsync.samba.org/ftp/rsync/rsync.html). However, if you use it like this for directories, it's hard to go wrong:

rsync -avrP local/path/to/source_directory/ local/path/to/destination_directory/

The -avrP options say "archive mode" (preserve file modification date/time), verbose, recursive and show Progress.

The trailing slashes ("/") on the source and destination directories are very important! rsync will create the last directory level for you, but earlier levels must already exist.

rsync – local directory
cds
rsync -avrP /corral-repl/utexas/BioITeam/web/ucsc_custom_tracks/ data/custom_tracks/

What files were copied over?

ls $SCRATCH/data/custom_tracks
# or
cds; cd data/custom_tracks; ls

Now repeat the rsync and see the difference:

rsync -avrP /corral-repl/utexas/BioITeam/web/ucsc_custom_tracks/ $SCRATCH/data/custom_tracks/

Copy from a remote computer - scp or rsync

Provided that the remote computer is running Linux and you have SSH access to it, you can use various Linux commands to copy data over a secure connection.
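
For example, you could pull data from a remote server into your $SCRATCH area as shown below. The remote host name, user name and paths are placeholders – substitute the ones your data provider gives you:

# copy a single file from a remote server with scp
scp your_remote_user@some.remote.server.edu:/path/to/data/sample1.fastq.gz $SCRATCH/data/

# copy (or later re-sync) a whole directory with rsync over ssh
rsync -avrP your_remote_user@some.remote.server.edu:/path/to/data/ $SCRATCH/data/remote_data/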
