
Before you start the alignment and analysis processes, it is useful to perform some initial quality checks on your raw data. You may also need to pre-process the sequences to trim them or remove adapters. Here we will assume you have paired-end data from GSAF's Illumina HiSeq sequencer.

Setup

Special stampede login

Before we start, log into stampede like you did yesterday, but use this special hostname:

login8.stampede.tacc.utexas.edu

Remember how we emphasized yesterday that you should not perform significant computation on login nodes? Well, there are a few exceptions, and login8.stampede.tacc.utexas.edu is one of them. It is a dedicated login node owned by an organization our lab belongs to, so we have given you access to it for the duration of this course. This will let us do a few things at the command line that would normally set off alarm bells from the TACC folks if we all did them on a standard login node.

Data staging

Recall that yesterday we copied some fastq data to our "permanent archive area".

mkdir -p $WORK/archive/original/2015_05.core_ngs
cd $WORK/archive/original/2015_05.core_ngs
wget http://web.corral.tacc.utexas.edu/BioITeam/yeast_stuff/Sample_Yeast_L005_R1.cat.fastq.gz
wget http://web.corral.tacc.utexas.edu/BioITeam/yeast_stuff/Sample_Yeast_L005_R2.cat.fastq.gz

Let's now set ourselves up to process this data in $SCRATCH, using some best practices for organizing our workflow.

Set up directory for working with FASTQs
# Create a $SCRATCH area to work on data for this course,
# with a sub-directory for pre-processing raw fastq files
mkdir -p $SCRATCH/core_ngs/fastq_prep

# Make symbolic links to the original yeast data in $WORK.
cd $SCRATCH/core_ngs/fastq_prep
ln -s -f $WORK/archive/original/2015_05.core_ngs/Sample_Yeast_L005_R1.cat.fastq.gz
ln -s -f $WORK/archive/original/2015_05.core_ngs/Sample_Yeast_L005_R2.cat.fastq.gz

Illumina sequence data format (FASTQ)

GSAF gives you paired-end sequencing data in two matching fastq format files, one containing the reads for each end sequenced. Let's see where your data really is and how big it is.

ls options to see the size of linked files
# the -l option says "long listing" which shows where the link goes,
# but doesn't show details of the real file
ls -l

# the -L option says to follow the link to the real file, -l = long listing (includes size) 
# and -h says "human readable" (e.g. MB, GB)
ls -lLh

4-line FASTQ format

Each read end sequenced is represented by a 4-line entry in the fastq file that looks like this:

A four-line FASTQ file entry representing one sequence
@HWI-ST1097:127:C0W5VACXX:5:1101:4820:2124 1:N:0:CTCAGA
TCTCTAGTTTCGATAGATTGCTGATTTGTTTCCTGGTCATTAGTCTCCGTATTTATATTATTATCCTGAGCATCATTGATGGCTGCAGGAGGAGCATTCTC
+
CCCFFFFDHHHHHGGGHIJIJJIHHJJJHHIGHHIJFHGICA91CGIGB?9EF9DDBFCGIGGIIIID>DCHGCEDH@C@DC?3AB?@B;AB??;=A>3;;  

Line 1 is the unique read name. The format is as follows, using the read name above:

machine_id:lane:flowcell_grid_coordinates  end_number:failed_qc:0:barcode

@HWI-ST1097:127:C0W5VACXX:5:1101:4820:2124 1:N:0:CTCAGA

  • The line as a whole will be unique for this read fragment. However, the corresponding R1 and R2 reads will have identical machine_id:lane:flowcell_grid_coordinates information. This common part of the name ties the two read ends together.
  • Most sequencing facilities will not give you qc-failed reads (failed_qc = Y) unless you ask for them.

Line 2 is the sequence reported by the machine, starting with the first base of the insert (the 5' adapter has been removed by the sequencing facility). These are ACGT or N uppercase characters.

Line 3 is always '+' from GSAF (it can optionally include a sequence description)

Line 4 is a string of Ascii-encoded base quality scores, one character per base in the sequence.

  • For each base, an integer Phred-type quality score is calculated as score = -10 log10(probability the base is wrong), then added to 33 to make a number in the Ascii printable character range. As a rule of thumb: alphabetical letters – good; numbers – ok; most special characters – bad (except :;<=>?@). You can decode any character yourself, as the sketch below shows.
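Since the encoding is just Phred score + 33, you can decode any quality character right at the command line. Here's a minimal bash sketch using 'C', the first character of the quality line above:

Decoding a base quality character
# printf with a leading single quote prints a character's Ascii code
q="C"
ascii=$(printf '%d' "'$q")            # 'C' is Ascii 67
echo "Phred score: $((ascii - 33))"   # 67 - 33 = 34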

See the Wikipedia FASTQ format page for more information.

Exercise: What character in the quality score string in the fastq entry above represents the best base quality?

J

About compressed files

Sequencing data files can be very large - from a few megabytes to gigabytes. And with NGS giving us longer reads and deeper sequencing at decreasing price points, it's not hard to run out of storage space. As a result, most sequencing facilities will give you compressed sequencing data files. The most common compression program used for individual files is gzip and its counterpart gunzip whose compressed files have the .gz extension. The tar and zip programs are most commonly used for compressing directories.

Let's take a look at the size difference between uncompressed and compressed files. We use the -l option of ls to get a long listing that includes the file size, and -h to have that size displayed in "human readable" form rather than in raw byte sizes.

Compare compressed and uncompressed files
ls -lh $BI/web/yeast_stuff/*.fastq
ls -lh $BI/web/yeast_stuff/*.fastq.gz

Exercise: About how big are the compressed files? The uncompressed files? About what is the compression factor?

fastq's are ~ 150 MB
compressed they are ~ 50 MB
this is about 3x compression
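Incidentally, gzip can report these numbers itself: its -l option lists the compressed size, uncompressed size, and compression ratio of a .gz file.

Asking gzip for the compression ratio
gzip -l $BI/web/yeast_stuff/Sample_Yeast_L005_R1.cat.fastq.gz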

You may be tempted to want to uncompress your sequencing files in order to manipulate them more directly – but resist that temptation. Nearly all modern bioinformatics tools are able to work on .gz files, and there are tools and techniques for working with the contents of compressed files without ever uncompressing them.

gzip and gunzip

With no options, gzip compresses the file you give it in-place. Once all the content has been compressed, the original uncompressed file is removed, leaving only the compressed version (the original file name plus a .gz extension). The gunzip program works in a similar manner, except that its input is a compressed file with a .gz extension, and it produces an uncompressed file without the .gz extension.

gzip, gunzip exercise
# make sure you're in your $SCRATCH/core_ngs/fastq_prep directory
cp $CLASSDIR/misc/small.fq .

# check the size, then compress it in-place
ls -lh 
gzip small.fq

# check the compressed file size, then uncompress it
ls -lh 
gunzip small.fq.gz

Both gzip and gunzip are extremely I/O intensive when run on large files. While TACC has tremendous compute resources and the Lustre parallel file system is great, it has its limitations. It is not difficult to overwhelm the Lustre file system if you gzip or gunzip more than a few files at a time (~4) in a batch job. The intensity of compression/decompression operations is another reason you should compress your sequencing files once (if they aren't already) and then leave them that way.

head and tail, more or less

One of the challenges of dealing with large data files, whether compressed or not, is finding your way around the data – finding and looking at relevant pieces of it. Except for the smallest of files, you can't open them in a text editor, because editors read the whole file into memory and will choke on sequencing data files! Instead we use various techniques to look at pieces of the files at a time.

The first technique is the use of "pagers" – we've already seen this with the more command. Review its use now on our small uncompressed file:

Using the more pager
# Use spacebar to advance a page; Ctrl-c to exit
more small.fq

Another pager, with additional features, is less. The most useful feature of less is the ability to search – but it still doesn't load the whole file into memory, so searching a really big file can be slow.

Here's a summary of the most common less navigation commands, once the less pager is active. It has tons of other options (try less --help).

  • q – quit
  • Ctrl-f or space – page forward
  • Ctrl-b – page backward
  • /<pattern> – search for <pattern> in forward direction
    • n – next match
    • N – previous match
  • ?<pattern> – search for <pattern> in backward direction
    • n – previous match going back
    • N – next match going forward

If you start less with the -N option, it will display line numbers.

Exercise: What line of small.fq contains the read name with grid coordinates 2316:10009:100563?

less -N small.fq
/2316:10009:100563

line number 905, which looks like this:

905 @HWI-ST1097:127:C0W5VACXX:5:2316:10009:100563 1:N:0:CTCAGA

 

For a really quick peek at the first few lines of your data, there's nothing like the head command. By default it displays the first 10 lines of data from the file you give it or from its standard input. With an argument -NNN (that is, a dash followed by a number), it will show that many lines of data.

Using the head command
# shows 1st 10 lines
head small.fq

# shows 1st 100 lines -- might want to pipe this to more to see a bit at a time
head -100 small.fq | more

The yang to head's yin is tail, which by default displays the last 10 lines of its data, and also uses the -NNN syntax to show the last NNN lines. (Note that with very large files it may take a while for tail to start producing output, because it has to read through the file sequentially to get to the end.)

See piping for a diagram of what the piping operator ( | ) is doing.

But what's really cool about tail is its -n +NNN syntax. This displays all the lines starting at line NNN. Note this syntax: the -n option switch is followed by a plus sign ( + ) in front of a number – the plus sign is what says "starting at this line"! Try these examples:

Using the tail command
# shows the last 10 lines
tail small.fq

# shows the last 100 lines -- might want to pipe this to more to see a bit at a time
tail -100 small.fq | more

# shows all the lines starting at line 900 -- better pipe it to a pager!
tail -n +900 small.fq | more

# shows 15 lines starting at line 900 because we pipe to head -15
tail -n +900 small.fq | head -15

gunzip -c tricks

Ok, now you know how to navigate an uncompressed file using head and tail, more or less. But what if your FASTQ file has been compressed by gzip? You don't want to uncompress the file, remember? So use the gunzip -c trick. This uncompresses the file, but instead of writing the uncompressed data to another file (without the .gz extension), it writes it to its standard output, where it can be piped to programs like your friends head and tail, more or less.

Let's illustrate this using one of the compressed files in your fastq_prep subdirectory:

Uncompressing output on the fly with gunzip -c
gunzip -c Sample_Yeast_L005_R1.cat.fastq.gz | more
gunzip -c Sample_Yeast_L005_R1.cat.fastq.gz | head
gunzip -c Sample_Yeast_L005_R1.cat.fastq.gz | tail
gunzip -c Sample_Yeast_L005_R1.cat.fastq.gz | tail -n +900 | head -15

# Note that less will display .gz file contents automatically
less Sample_Yeast_L005_R1.cat.fastq.gz

There will be times when you forget to pipe your gunzip -c output somewhere – even the experienced among us still make this mistake! This leads to pages and pages of data spewing across your terminal. If you're lucky you can kill the output with Ctrl-c. But if that doesn't work (and often it doesn't) just close your Terminal window. This terminates the process on the server (like hanging up the phone), then you can log back in.

Counting your sequences

One of the first things to check is that your paired FASTQ files contain the same number of lines, and that the number is evenly divisible by 4. The wc command (word count) with the -l switch, which tells it to count lines rather than words, is perfect for this. It's so handy that you'll end up using wc -l a lot to count things. It's especially powerful when used with filename wildcarding.

Counting lines with wc -l
wc -l small.fq
head -100 small.fq > small2.fq
wc -l small*.fq

You can also pipe the output of gunzip -c to wc -l to count lines in your compressed FASTQ file.

Exercise: How many lines are in the Sample_Yeast_L005_R1.cat.fastq.gz file? How many sequences is this?

gunzip -c Sample_Yeast_L005_R1.cat.fastq.gz | wc -l

The wc -l command says there are 2,368,720 lines. FASTQ files have 4 lines per sequence, so the file has 2,368,720/4 or 592,180 sequences.
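To verify that the R1 and R2 files match, count both – the two numbers should be identical:

Comparing R1 and R2 line counts
gunzip -c Sample_Yeast_L005_R1.cat.fastq.gz | wc -l
gunzip -c Sample_Yeast_L005_R2.cat.fastq.gz | wc -l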

How to do math on the command line

The bash shell has a really strange syntax for arithmetic: it uses a double-parenthesis operator after the $ sign (which means evaluate this expression). Go figure.

Syntax for arithmetic on the command line
echo $((2368720 / 4))
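The same syntax handles the "evenly divisible by 4" check from earlier – the modulo operator ( % ) returns the remainder, which should be 0 for an intact FASTQ file:

Checking divisibility with the modulo operator
echo $((2368720 % 4))   # prints 0 when the line count is a multiple of 4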

Remember how the backticks evaluate the enclosed expression and substitute its output into a string? Here's how you would combine this math expression with gunzip -c line counting on your file using the magic of backtick evaluation:

Counting sequences in a FASTQ file
gunzip -c Sample_Yeast_L005_R1.cat.fastq.gz | echo "$((`wc -l` / 4))"

Whew!

Processing multiple compressed files

You've probably figured out by now that you can't easily use filename wildcarding along with gunzip -c and piping to process multiple files. For this, you need to code a for loop in bash. Fortunately, this is pretty easy. Try this:

For loop to count sequences in multiple FASTQs
for fname in *.gz; do
   echo "$fname has $((`gunzip -c $fname | wc -l` / 4)) sequences"
done

Here fname is the name I gave the variable that is assigned a different file name from the wildcard-matched list each time through the loop. The file is then referenced as $fname inside the loop.

Note the general structure of the for loop. Different portions of the structure can be separated on different lines (like <something> and <something else> below) or put on one line separated with a semicolon ( ; ) like before the do keyword below.

for <variable name> in <expression>; do 
  <something>
  <something else>
done
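The whole structure can also be collapsed onto a single line, with semicolons separating the parts:

One-line form of the for loop
for fname in *.gz; do echo "$fname"; done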

FASTQ Quality Assurance tools

The first order of business after receiving sequencing data should be to check your data quality. This often-overlooked step helps guide the manner in which you process the data, and can prevent many headaches.

FastQC

FastQC is a tool that produces a quality analysis report on FASTQ files. Anna's presentation: FastQC.pdf


First and foremost, the FastQC "Summary" should generally be ignored. Its "grading scale" (green - good, yellow - warning, red - failed) incorporates assumptions for a particular kind of experiment, and is not applicable to most real-world data. Instead, look through the individual reports and evaluate them according to your experiment type.

The FastQC reports I find most useful, and why:

  1. Should I trim low quality bases?
    • consult the Per base sequence quality report
      • based on all sequences
  2. Do I need to remove adapter sequences?
    • consult the Overrepresented Sequences report
      • based on the 1st 200,000 sequences
  3. How complex is my library?
    • consult the Sequence Duplication Levels report
    • but remember that different experiment types are expected to have vastly different duplication profiles

 

  • For many of its reports, FastQC analyzes only the first 200,000 sequences in order to keep processing and memory requirements down. Consult the Online documentation for each FastQC report for full details.
  • Some of FastQC's graphs have a 1-100 vertical scale that is tricky to interpret. The 100 is a relative marker for the rest of the graph. For example, sequence duplication levels are reported relative to the number of unique sequences.

Running FastQC

FastQC is available in the TACC module system on stampede. To make it available:

module load fastqc

It has a number of options (see fastqc --help | more) but can be run very simply with just a FASTQ file as its argument.

Running fastqc on a FASTQ file
fastqc Sample_Yeast_L005_R1.cat.fastq.gz

Exercise: What did FastQC create?

ls -l shows two new items.

drwxrwxr-x 4 abattenh G-803889     4096 May 20 22:59 Sample_Yeast_L005_R1.cat_fastqc
-rw-rw-r-- 1 abattenh G-803889   198239 May 20 22:59 Sample_Yeast_L005_R1.cat_fastqc.zip

Sample_Yeast_L005_R1.cat_fastqc is a directory (the "d" in "drwxrwxr-x"), so use ls Sample_Yeast_L005_R1.cat_fastqc to see what's in it. Sample_Yeast_L005_R1.cat_fastqc.zip is just a Zipped (compressed) version of the whole directory.
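If you'd rather keep reports out of your data directory, FastQC also accepts an output directory, and the report text can be inspected without a browser. A quick sketch, assuming FastQC's usual -o option and report layout (the directory given to -o must already exist):

Alternative ways to run and inspect FastQC
# write reports to a separate, pre-existing directory
mkdir -p fastqc_out
fastqc -o fastqc_out Sample_Yeast_L005_R1.cat.fastq.gz

# the extracted report directory contains a plain-text summary
cat Sample_Yeast_L005_R1.cat_fastqc/summary.txt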

Looking at FastQC output

You can't run a web browser directly from your "dumb terminal" command line environment. The FastQC results have to be placed where a web browser can access them. We put a copy at this URL:

http://web.corral.tacc.utexas.edu/BioITeam/yeast_stuff/Sample_Yeast_L005_R1.cat_fastqc/fastqc_report.html

Exercise: Based on this FastQC output, should we trim this data?

The Per base sequence quality report does not look good. The data should probably be trimmed (to 40 or 50 bp) before alignment.

FASTQ Manipulation Tools

Trimming sequences

There are two main reasons you may want to trim your sequences:

  • As a quick way to remove 3' adapter contamination, when extra bases provide little additional information
    • For example, 75+ bp ChIP-seq reads – 50 bases are more than enough for a good mapping, and trimming to 50 is easier than adapter removal, especially for paired end data.
    • You would not choose this approach for RNA-seq data, where 3' bases may map to a different exon, and that is valuable information. Instead you would specifically remove adapter sequences.
  • Low quality base calls from the sequencer can cause an otherwise mappable sequence not to align
    • This is more of an issue with sequencing for genome assembly – both bwa and bowtie2 seem to do fine with a few low quality bases, soft clipping them if necessary.

There are a number of open source tools that can trim off 3' bases and produce a FASTQ file of the trimmed reads to use as input to the alignment program.

FASTX Toolkit

The FASTX Toolkit provides a set of command line tools for manipulating both FASTA and FASTQ files. The available modules are described on their website. They include a fast fastx_trimmer utility for trimming FASTQ sequences (and quality score strings) before alignment.

FASTX Toolkit is available via the TACC module system.

module spider fastx_toolkit
module load fastx_toolkit

Here's an example of how to run fastx_trimmer to trim all input sequences down to 50 bases. By default the program reads its input data from standard input and writes trimmed sequences to standard output:

Trimming FASTQ sequences to 50 bases with fastx_trimmer
gunzip -c Sample_Yeast_L005_R1.cat.fastq.gz | fastx_trimmer -l 50 -Q 33 > trim50_R1.fq
  • The -l 50 option says that base 50 should be the last base (i.e., trim down to 50 bases)
  • The -Q 33 option specifies how base qualities on the 4th line of each FASTQ entry are encoded. The FASTX Toolkit is an older package, written when Illumina base qualities were encoded differently, so its default does not work for modern FASTQ files. These days Illumina base qualities follow the Sanger FASTQ standard (Phred score + 33 to make an ASCII character).
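The R2 file would be trimmed the same way. A sketch, assuming the same 50-base target and quality encoding:

Trimming the R2 FASTQ to 50 bases
gunzip -c Sample_Yeast_L005_R2.cat.fastq.gz | fastx_trimmer -l 50 -Q 33 > trim50_R2.fq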

Exercise: compressing fastx_trimmer output

How would you tell fastx_trimmer to compress (gzip) its output file?

Type fastx_trimmer -h (help) to see program documentation

You could supply the -z option like this:

gunzip -c Sample_Yeast_L005_R1.cat.fastq.gz | fastx_trimmer -l 50 -Q 33 -z > trim50_R1.fq.gz

Or you could gzip the output yourself.

gunzip -c Sample_Yeast_L005_R1.cat.fastq.gz | fastx_trimmer -l 50 -Q 33 | gzip > trim50_R1.fq.gz

Exercise: other fastx toolkit programs

What other FASTQ manipulation programs are part of the FASTX Toolkit?

Type fastx_ then tab twice (tab completion) to see their names.

Or, based on the information from module spider, list all the programs like this:

ls $TACC_FASTX_BIN

Adapter trimming with cutadapt

Data from RNA-seq or other library prep methods that produce short fragments can be problematic for moderately long (50-100 bp) reads, since the sequencer can read through the insert into the 3' adapter at a variable position. This 3' adapter contamination can cause the "real" insert sequence not to align, because the adapter bases at the 3' end of the read do not correspond to the reference genome sequence.

Unlike general fixed-length trimming (e.g. trimming 100 bp sequences to 50 bp), adapter trimming removes differing numbers of 3' bases depending on where the adapter sequence is found.

You must tell an adapter trimming program what your R1 and R2 adapters look like. The GSAF website describes the flavors of Illumina adapter and barcode sequences in more detail: https://wikis.utexas.edu/display/GSAF/Illumina+-+all+flavors.

The cutadapt program is an excellent tool for removing adapter contamination. The program is not available through TACC's module system but we linked a copy into your $HOME/local/bin directory.

A common application of cutadapt is to remove adapter contamination from small RNA library sequence data, so that's what we'll show here, assuming GSAF-format small RNA library adapters.

When you run cutadapt you give it the adapter sequence to trim, and this is different for R1 and R2 reads. Here's what the options look like (without running it on our files yet).

cutadapt command for R1 sequences (GSAF RNA library)
cutadapt -m 22 -O 4 -a AGATCGGAAGAGCACACGTCTGAACTCCAGTCAC
cutadapt command for R2 sequences (GSAF RNA library)
cutadapt -m 22 -O 4 -a TGATCGTCGGACTGTAGAACTCTGAACGTGTAGA

Notes:

  • The -m 22 option says to discard any sequence that is smaller than 22 bases (minimum) after trimming. This avoids problems trying to map very short, highly ambiguous sequences.
  • The -O 4 (Overlap) option says not to trim 3' adapter sequences unless at least the first 4 bases of the adapter are seen at the 3' end of the read. This prevents trimming short 3' sequences that just happen by chance to match the first few adapter sequence bases.

Figuring out which adapter sequence to use when can be tricky. You may want to read all the "gory details" later.

Please refer to https://wikis.utexas.edu/display/GSAF/Illumina+-+all+flavors for Illumina library adapter layout.

The top strand, 5' to 3', of a read sequence looks like this.

Illumina library read layout
<P5 capture> <indexRead2> <Read 1 primer> [insert] <Read 2 primer> <indexRead1> <P7 capture>

The -a argument to cutadapt is documented as the "sequence of adapter that was ligated to the 3' end". So we care about the <Read 2 primer> for R1 reads, and the <Read 1 primer> for R2 reads.

The "contaminent" for adapter trimming will be the <Read 2 primer> for R1 reads. There is only one Read 2 primer:

Read 2 primer, 5' to 3', used as R1 sequence adapter
AGATCGGAAGAGCACACGTCTGAACTCCAGTCAC

The "contaminent" for adapter trimming will be the <Read 1 primer> for R2 reads. However, there are three different Read 1 primers, depending on library construction:

Read 1 primer depends on library construction
TCTACACGTTCAGAGTTCTACAGTCCGACGATCA    # small RNA sequencing primer site
CAGGTTCAGAGTTCTACAGTCCGACGATCA        # "other"
TCTACACTCTTTCCCTACACGACGCTCTTCCGATCT  # TruSeq Read 1 primer site. This is the RC of the R2 adapter

Since R2 reads are the reverse complement of R1 reads, the R2 adapter contaminant will be the RC of the Read 1 primer used.

For ChIP-seq libraries, where reads come from both DNA strands, the TruSeq Read 1 primer is always used.
Since it is the RC of the Read 2 primer, its RC is just the Read 2 primer back.
Therefore, for ChIP-seq libraries the same cutadapt adapter option can be used for both R1 and R2 reads:

Cutadapt adapter sequence for ChIP-seq lib
cutadapt -a GATCGGAAGAGCACACGTCTGAACTCCAGTCAC

For RNA-seq libraries, we use the small RNA sequencing primer as the Read 1 primer.
The contaminant is then the RC of this primer:

Small RNA library Read 1 primer, 5' to 3', used as R2 sequence adapter
TCTACACGTTCAGAGTTCTACAGTCCGACGATCA    # R1 primer - small RNA sequencing Read 1 primer site, 5' to 3'
TGATCGTCGGACTGTAGAACTCTGAACGTGTAGA    # R2 adapter contaminant (RC of the small RNA sequencing Read 1 primer)
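You can verify this reverse-complement relationship right at the command line with the standard rev and tr utilities:

Reverse-complementing a sequence on the command line
echo TCTACACGTTCAGAGTTCTACAGTCCGACGATCA | rev | tr 'ACGT' 'TGCA'
# prints TGATCGTCGGACTGTAGAACTCTGAACGTGTAGA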

cutadapt example in a batch job

Let's submit a small batch job to run cutadapt on some real human miRNA data. This data is from a small RNA library where the expected insert size is around 15-25 bp.

Setup for cutadapt on miRNA FASTQ
cd $SCRATCH/core_ngs/fastq_prep
cp -p $CLASSDIR/human_stuff/Sample_H54_miRNA_L004_R1.cat.fastq.gz .
cp -p $CLASSDIR/human_stuff/Sample_H54_miRNA_L005_R1.cat.fastq.gz .

Exercise: How long are these reads?

less Sample_H54_miRNA_L005_R1.cat.fastq.gz

Look at the size of the actual sequence on the 2nd line of any FASTQ entry

These are 100 bp reads.

Here is one of the two cutadapt commands we want:

Cutadapt batch command for R1 FASTQ
cutadapt -m 20 -a AGATCGGAAGAGCACACGTCTGAACTCCAGTCAC Sample_H54_miRNA_L004_R1.cat.fastq.gz 2> acut.mirna_part1_R1.log | gzip > mirna_part1_R1.cuta.fastq.gz

Here's one of those cases where you have to be careful about separating standard output and standard error. cutadapt writes its FASTQ output to standard output, and writes summary information to standard error.

In this command we first redirect standard error to the file named acut.mirna_part1_R1.log using the 2> syntax, in order to capture cutadapt diagnostics. Then standard output is piped to gzip, whose output is then redirected to a new compressed FASTQ file.
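You can see this stream separation in miniature with a command everyone has – ls errors bypass the pipe and land in the log, while normal output flows through it (no_such_file is a deliberately bogus name):

Illustrating the stderr/stdout split
ls *.gz no_such_file 2> errors.log | head -2
cat errors.log   # contains the "No such file or directory" complaint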

Let's put this command into a cuta.cmds commands file. But first we need to learn a bit about Editing files in Linux.

Exercise: Create cuta.cmds file

Use nano to create a cuta.cmds file with two cutadapt processing commands – the first command above, along with a similar one to process part 2 of the data in the 2nd FASTQ file.

Be sure to redirect both standard output and standard error to file names that are appropriate for the part2 FASTQ file.

When you're finished you should have a cuta.cmds file that is 2 lines long (check this with wc -l) and looks like this

cutadapt -m 20 -a AGATCGGAAGAGCACACGTCTGAACTCCAGTCAC Sample_H54_miRNA_L004_R1.cat.fastq.gz 2> acut.mirna_part1_R1.log | gzip > mirna_part1_R1.cuta.fastq.gz
cutadapt -m 20 -a AGATCGGAAGAGCACACGTCTGAACTCCAGTCAC Sample_H54_miRNA_L005_R1.cat.fastq.gz 2> acut.mirna_part2_R1.log | gzip > mirna_part2_R1.cuta.fastq.gz

Create a batch submission script for your commands and submit it to the normal queue with a maximum runtime of 1 hour.

Create and submit cutadapt batch script
launcher_creator.py -n cuta -j cuta.cmds -t 01:00:00 -q normal
sbatch cuta.slurm
showq -u

How will you know your job is done?

Your cuta job will no longer be displayed in the showq -u output.

You can also ask the batch system to send you email when the job starts to run and when it completes. The launcher_creator.py script has a -e option that lets you provide an email address on the command line. Or you can set the EMAIL_ADDRESS environment variable if you want launcher_creator.py to always fill in this field:

export EMAIL_ADDRESS="abattenhouse@utexas.edu"
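For example, to request notifications for the cutadapt job directly on the command line (the address below is just a placeholder):

Requesting job status emails
launcher_creator.py -n cuta -j cuta.cmds -t 01:00:00 -q normal -e your_email@utexas.edu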


You should see two log files when the job is finished. How many lines does each have?

wc -l acut.*.log


Take a look at the first 12 lines of one of them.

head -12 acut.mirna_part1_R1.log


It will look something like this:

cutadapt log file
 Command line parameters: -m 20 -a AGATCGGAAGAGCACACGTCTGAACTCCAGTCAC Sample_H54_miRNA_L004_R1.cat.fastq.gz
Maximum error rate: 10.00%
   No. of adapters: 1
   Processed reads:      2001337
   Processed bases:    202135037 bp (202.1 Mbp)
     Trimmed reads:      1991662 (99.5%)
     Trimmed bases:    156404705 bp (156.4 Mbp) (77.38% of total)
   Too short reads:       246782 (12.3% of processed reads)
    Too long reads:            0 (0.0% of processed reads)
        Total time:     55.87 s
     Time per read:      0.028 ms

Notes:

  • The Processed reads line tells you how many sequences were in the original FASTQ file.
  • The Too short reads line tells you how many sequences were filtered out because they were shorter than our minimum length (20) after adapter removal (these may have been primer dimers).
    • Here ~12% of the original sequences were removed, which is reasonable.
  • Trimmed reads tells you how many of the reads you gave it had at least part of an adapter sequence that was trimmed.
    • Here adapter was found in nearly all (99.5%) of the reads. This makes sense given this is a short (15-25 bp) RNA library.
  • Trimmed bases tells you the total number of adapter bases that were removed, and its proportion of the total number of bases.
    • Here > 3/4 of the bases were trimmed. Again, this makes sense since the reads were 100 bp long.
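As a sanity check, you can reproduce the "too short" percentage from the log with bc, since bash arithmetic is integer-only:

Verifying a log percentage with bc
echo "scale=3; 246782 / 2001337 * 100" | bc   # ~12.3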

paired-end data considerations

Special care must be taken when removing adapters for paired-end FASTQ files.

  • For paired-end alignment, aligners want the R1 and R2 fastq files to be in the same name order and to contain the same number of reads.
  • Adapter trimming can remove FASTQ sequences in one or the other file, if the trimmed sequence is too short.
    • this leads to mis-matched R1 and R2 FASTQ files, which can cause problems with aligners like bwa

The latest version of cutadapt (which you're using) has a protocol for re-syncing the R1 and R2 when the R2 is trimmed. See the cutadapt manual for more details (https://code.google.com/p/cutadapt/wiki/documentation).

FASTQ or BAM statistics with samstat

The samstat program can also produce a quality report for FASTQ files, and it can report on the aligned sequences in a BAM file as well.

Again, this program is not available through the TACC module system, but it is available in Anna's work directory and has been linked into your $HOME/local/bin. You should be able to just type samstat and see some documentation.

Running samstat on FASTQ files

Here's how you run samstat on a compressed FASTQ file. Let's only run it on a few sequences to avoid overloading the system:

Running samstat on FASTQ example
gunzip -c Sample_Yeast_L005_R1.cat.fastq.gz | head -100000 | samstat -f fastq -n samstat.fastq

This would produce a file named samstat.fastq.html which you need to view in a web browser. Here's some example output from bacterial sequences:

http://web.corral.tacc.utexas.edu/BioITeam/SRR030257_1.fastq.html

Running samstat on BAM files

Before running samstat on a BAM file at TACC, you must make the samtools program accessible by loading the samtools module.

module load samtools

Here's how you can run samstat on a small BAM file. This produces a file named small.bam.html which you need to view in a web browser.

Running samstat on BAM example
cp $CLASSDIR/misc/small.bam .
samstat -F 772 small.bam

Note the odd -F 772 option – these are filtering flags corresponding to samtools view -F <flags>. The value supplied above is what you should use when you want to analyze only the reads that mapped in your BAM file (BAM files can contain both mapped and unmapped reads).
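The 772 is just the sum of the SAM flag bits being excluded – 4 (read unmapped), 256 (secondary alignment), and 512 (QC fail) – which you can verify with bash arithmetic:

Decoding the -F 772 filter value
echo $((4 + 256 + 512))   # 772: unmapped + secondary + QC-fail flag bits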

You can view sample output for aligned yeast data here:

http://web.corral.tacc.utexas.edu/BioITeam/yeast_stuff/yeast_chip_sort.bam.html
