Getting to a remote computer
The Terminal window
- Macs and Linux have Terminal programs built in – find it now on your computer
- Windows needs help:
  - Putty – http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html
    - simple Terminal and file copy programs
    - download putty.exe (terminal) and pscp.exe (secure copy client)
  - Git-bash – http://msysgit.github.io/
    - terminal plus minimal Linux environment
  - Cygwin – http://www.cygwin.com/
    - a full Linux environment, including X-windows for running GUI programs remotely
    - complicated to install
SSH
ssh is an executable program that runs on your local computer and allows you to connect securely to a remote computer.
On Macs, Linux, and Windows with Git-bash or Cygwin, you run it from a Terminal window. The first time you connect to a new host, answer yes to the SSH security prompt.
ssh your_TACC_userID@stampede.tacc.utexas.edu
If you're using PuTTY as your Terminal on Windows:
- Double-click the putty.exe icon
- In the PuTTY Configuration window:
  - make sure the Connection type is SSH
  - enter stampede.tacc.utexas.edu for Host Name
  - click the Open button
  - answer Yes to the SSH security question
- In the PuTTY terminal:
  - enter your TACC user ID after the "login as:" prompt, then press Enter
The bash shell
You're now at a command line! It looks as if you're running directly on the remote computer, but really there are two programs communicating: your local Terminal and the remote Shell. There are many shell programs available in Linux, but the default is bash (Bourne-again shell). The Terminal is pretty "dumb" – it just sends your typing over the encrypted SSH connection to TACC, then displays the text sent back by the shell. The real work is done on the remote computer, by the programs the bash shell invokes.
Setting up your environment
First create a few directories and links we will use (more on these later).
You can copy and paste these lines from the code block below into your Terminal window. Just make sure you hit "Enter" after the last line.
cd
ln -s -f $SCRATCH scratch
ln -s -f $WORK work
ln -s -f /corral-repl/utexas/BioITeam
mkdir -p $HOME/local/bin
cd $HOME/local/bin
ln -s -f /corral-repl/utexas/BioITeam/bin/launcher_creator.py
Now execute the lines below to set up a login script, called .profile_user. This script will be executed whenever you login to stampede.
cd
cp /work/01063/abattenh/seq/code/script/tacc/stampede_dircolors .dircolors
cp /work/01063/abattenh/seq/code/script/tacc/stampede_corengs_profile .profile_user
chmod 600 .profile_user
Finally, log off and log back in to stampede.tacc.utexas.edu. You should see a new command prompt:
stamp:~$
And nice directory colors when you list your home directory:
ls
So why don't you see the .profile_user file you copied to your home directory? Because files whose names start with a period ("dot files") are hidden by default. To see them, add the -a (all) option to ls:
ls -la
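You can see the same hiding behavior anywhere, not just at TACC. Here is a quick sketch using a throwaway directory (the file names are made up for illustration):

```shell
# Create a scratch directory with one normal file and one dot file
DEMO=$(mktemp -d)
cd "$DEMO"
touch visible.txt .hidden_profile

ls        # shows only visible.txt
ls -a     # also shows ".", "..", and .hidden_profile
```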
File systems at TACC
Local file systems
There are three local file systems available on any TACC cluster (stampede, lonestar, etc.), each with different characteristics. All are Lustre file systems: very fast and designed for parallel I/O.
On stampede these local file systems have the following characteristics:
| | Home | Work | Scratch |
|---|---|---|---|
| quota | 5 GB | 400 GB | 12+ PB (effectively unlimited) |
| policy | backed up | not backed up, not purged | not backed up; purged if not accessed recently (~10 days) |
| access command | cd | cdw | cds |
| environment variable | $HOME | $WORK | $SCRATCH |
| root file system | /home | /work | /scratch |
| use for | Small files such as scripts that you don't want to lose. | Medium-sized artifacts you don't want to copy over all the time, such as custom programs you install (these can get large) or annotation files used for analysis. | Large files accessed from batch jobs. Your starting files will be copied here from somewhere else, and your results files will be copied back to your home system. |
When you login, the system gives you information about disk quota and your compute allocation quota:
---------------------- Project balances for user abattenh ----------------------
| Name            Avail SUs    Expires | Name            Avail SUs    Expires |
| CancerGenetics      10627 2014-09-30 | genomeAnalysis      94284 2015-03-31 |
------------------------ Disk quotas for user abattenh -------------------------
| Disk     Usage (GB)    Limit  %Used   File Usage      Limit  %Used |
| /home1          0.0      1.1   0.29          463    1001000   0.05 |
| /work          42.1    250.0  16.85        16281     500000   3.26 |
--------------------------------------------------------------------------------
Exercise
When you first login, you start in your home directory. Use these commands to change to your other file systems, and see how your command prompt changes to show your location.
cdw
cds
cd
The cd command with no arguments takes you to your home directory on any Linux/Unix system. The cdw and cds commands are specific to the TACC environment.
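You can convince yourself of the cd behavior on any Linux/Unix system (nothing TACC-specific here):

```shell
# `cd` with no arguments always returns you to your home directory
cd /tmp     # go somewhere else
cd          # no arguments: back home
pwd         # prints your home directory (same path as: echo $HOME)
```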
Corral
Corral is a gigantic (multiple PB) storage system (spinning disk) where researchers can store data. UT researchers may request up to 5 TB of corral storage through the normal TACC allocation request process. Additional space on corral can be rented for ~$210/TB/year.
The UT/Austin BioInformatics Team, a loose group of researchers, maintains a common directory area on corral.
ls /corral-repl/utexas/BioITeam
Files we will use in this course are in a subdirectory there:
ls /corral-repl/utexas/BioITeam/core_ngs_tools
A couple of things to keep in mind regarding corral:
- corral is a great place to store data in between analyses.
- Copy your data from corral to $SCRATCH
- Run your analysis batch job
- Copy your results back to corral
- On stampede you can access corral directories from login nodes (like the one you're on now), but your batch jobs cannot access it.
- This is because corral is a network file system, like Samba or NFS.
- Since stampede has so many compute nodes, there isn't enough network bandwidth to allow them all simultaneous access to corral.
- Occasionally corral can become unavailable. This can cause any command to hang that tries to access corral data.
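The copy-in, analyze, copy-out pattern above can be sketched as a few plain copies. The two mktemp directories below are stand-ins created just for illustration; on stampede you would use your real $SCRATCH area and a real corral path:

```shell
# Sketch of the corral staging pattern (stand-in directories, assumed names)
CORRAL_DIR=$(mktemp -d)    # stands in for a corral project directory
SCRATCH_DIR=$(mktemp -d)   # stands in for $SCRATCH
echo "@read1..." > "$CORRAL_DIR/sample.fastq"

# 1) copy your input data from corral to scratch
cp "$CORRAL_DIR/sample.fastq" "$SCRATCH_DIR/"

# 2) run your analysis batch job against the scratch copy (not shown)
echo "counts" > "$SCRATCH_DIR/sample.results"

# 3) copy your results back to corral for safekeeping
cp "$SCRATCH_DIR/sample.results" "$CORRAL_DIR/"
```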
Ranch
Ranch is a gigantic (multiple PB) tape archive system where researchers can archive data. UT researchers may request large (multi-TB) ranch storage allocations through the normal TACC allocation request process.
There is currently no charge for ranch storage. However, since the data is stored on tape it is not immediately available – robots find and mount the appropriate tapes when the data is requested, and it can take minutes to hours for the data to appear on disk. (The metadata about your data – the directory structures and file names – is always accessible, but the actual data in the files is not on disk until "staged".) See the ranch user guide for more information: https://www.tacc.utexas.edu/user-services/user-guides/ranch-user-guide
Once that data is staged to the ranch disk it can be copied to other places. However, the ranch file system is not mounted as a local file system from the stampede or lonestar clusters. So remote copy commands are needed to copy data to and from ranch (e.g. scp, sftp, rsync).
Staging your data
So, your sequencing center has some data for you. They may send you a list of web links to use to download the data, or if you're a GSAF customer with an account on fourierseq.icmb.utexas.edu, the name of a directory to access.
The first task is to get this data to a permanent storage area. This is not one of the TACC local file systems! Corral is a great place for it, or on a server maintained by your Lab or company.
We're going to pretend – just for the sake of this class – that your permanent storage area is in your TACC work area. Execute these commands to make your "archive" directory and some subdirectories.
mkdir -p $WORK/archive/original/2014_05.core_ngs
Here's an example of a "best practice". Wherever your permanent storage area is, it should have a rational sub-directory structure that reflects its contents. It's easy to process a few NGS datasets, but when they start multiplying like tribbles, good organization and naming conventions will be the only thing standing between you and utter chaos!
For example:
- original – for original sequencing data (compressed fastq files)
  - subdirectories named by year_month.<project or purpose>
- aligned – for alignment artifacts (bam files, etc.)
  - subdirectories named by year_month.<project or purpose>
- analysis – for further downstream analysis
  - reasonably named subdirectories, often by project
- genome – for reference genomes and other annotation files used in alignment and analysis
  - subdirectories for different reference genomes
    - e.g. ucsc/hg19, ucsc/sacCer3, mirbase/v20
- code – for scripts and programs you and others in your organization write
  - ideally maintained in a version control system such as git, subversion or cvs
  - easiest to name sub-directories for people
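One way to create this suggested skeleton is a single mkdir -p call. The $STORE root below is a temp directory used just for illustration; the subdirectory names come from the examples above:

```shell
# Create the suggested permanent-storage layout under a stand-in root
STORE=$(mktemp -d)    # stands in for your permanent storage area
mkdir -p "$STORE/original/2014_05.core_ngs" \
         "$STORE/aligned/2014_05.core_ngs" \
         "$STORE/analysis" \
         "$STORE/genome/ucsc/hg19" \
         "$STORE/genome/ucsc/sacCer3" \
         "$STORE/genome/mirbase/v20" \
         "$STORE/code"
ls "$STORE"    # aligned analysis code genome original
```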
Download from a link – wget
Well, you don't have a desktop at TACC to "Save as" to, so what to do with a link?