Overview

Before you start the alignment and analysis processes, it can be useful to perform some initial quality checks on your raw data. If you skip this step (or do it insufficiently), you may reach the end of your analysis with things that still are not clear: for example, a large portion of reads may not map to your reference, or the reads may map well except for their ends, which do not align at all. Both of these results give you clues about how you need to process the reads to improve the quality of the data going into your analysis.

A stitch in time saves nine

For many years this tutorial has alternated between being optional, required, or skipped altogether as the overall quality of sequencing data has increased. A few years ago a colleague of mine spent several days working with some data he had gotten back, trying to understand it, before reaching out for help. After I spent a few additional hours running into the same wall, we ran FastQC. Less than 30 minutes later it was clear the library was not constructed correctly and could not be salvaged. I believe that makes this one of the most important tutorials available.

Luckily, read pre-processing has also gotten easier and faster. 

Learning Objectives

This tutorial covers the commands necessary to use several common programs for evaluating read files in FASTQ format and for processing them (if necessary).

  1. Use basic Linux commands to determine read counts and pull out specific reads.
  2. Diagnose common issues in FASTQ read files that will negatively impact analysis.
  3. Trim adaptor sequences and low quality regions from the ends of reads to improve analysis.

FASTQ data format

A common question is 'after you submit something for sequencing what do you get back?' The answer is FASTQ files.

While there are some additional log files that you may be able to get off the instrument, the reality is that none of those are actually 'data' of anything other than high-level instrument performance. The good news is that you don't actually need anything else. Single-end sequencing yields a single file, while paired-end sequencing provides 2 files: 1 for read1 and another for read2. Each file contains a repeating 4-line entry for each individual read.

The first 4-line FASTQ read entry in the $BI/gva_course/mapping/data/SRR030257_1.fastq file
@SRR030257.1 HWI-EAS_4_PE-FC20GCB:6:1:385:567/1
TTACACTCCTGTTAATCCATACAGCAACAGTATTGG
+
AAA;A;AA?A?AAAAA?;?A?1A;;????566)=*1
  1. Line 1 is the read identifier, which describes the machine, flowcell, cluster, grid coordinate, end and barcode for the read. Except for the barcode information, read identifiers will be identical for corresponding entries in the R1 and R2 fastq files.
  2. Line 2 is the sequence reported by the machine.
  3. Line 3 is almost always just '+'. (Occasionally the line will be the same as the first line, except the initial @ symbol is changed to a +.)
  4. Line 4 is a string of ASCII-encoded base quality scores, one character per base in the sequence. For each base, an integer quality score Q = -10 log10(probability the base call is wrong) is calculated, then 33 is added to shift it into the ASCII printable character range (a worked example follows this list).
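
To make the encoding concrete, here is a minimal sketch of decoding the quality character 'A' (the first character of the quality line above) using only bash built-ins:

Worked example of the quality score encoding
printf '%d\n' "'A"   # prints 65, the ASCII code for 'A'
echo $((65 - 33))    # prints 32, the Phred quality score
# a Q32 base call has a 10^(-3.2), or roughly 0.06%, chance of being wrong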

See the Wikipedia FASTQ format page for more information.


How to think of paired-end files

Often I will hear people refer to read1 as the "forward" read and read2 as the "reverse" read. While technically true in that the reads are on opposite strands, thinking of them in this manner seems to correlate with misunderstandings downstream. Specifically, thinking of the reads as forward and reverse leads to thinking all read1s should map 5' to 3' on the genome while all read2s should map 3' to 5' as complementary sequence. Further, when it comes to evaluating read mapping quality or variant support (covered in later tutorials), thinking of them in this manner results in incorrectly applying confidence checks to the believability of variant calls.

Read1 and Read2 come from the same fragment of DNA, but that piece of DNA can be placed between adapter sequences in either orientation. Saving application of additional terminology until after reads are mapped may help you keep things clearer in your head.



Determine 2nd sequence in a FASTQ file

What is the 2nd sequence in the file $BI/gva_course/mapping/data/SRR030257_1.fastq?

Basic command using the 'head' command. You likely know enough to be able to guess this
head $BI/gva_course/mapping/data/SRR030257_1.fastq 
Line type          Value
Read identifier    @SRR030257.2 HWI-EAS_4_PE-FC20GCB:6:1:407:767/1
Read sequence      TAAGCCAGTCGCCATGGAATATCTGCTTTATTTAGC
Line 3             +
Quality score      AAAAAAAAAA;A?AA;A?AAAAAA8????+9=&=1;

If that's what you thought it was, congratulations! If it is different, do you see where we got it from? If it doesn't make sense, ask for help.

More advanced solutions to do slightly different things:

  1. The -n option can be used to control how many lines of a file are printed:

    Show just the first 2 reads
    head -n 8 $BI/gva_course/mapping/data/SRR030257_1.fastq 
  2. The output of the head command can be piped to the tail command to isolate specific groups of lines:

    Show just the 2nd read
    head -n 8 $BI/gva_course/mapping/data/SRR030257_1.fastq | tail -n 4
  3. The grep command can be used to look for lines that contain only ACTG or N:

    head | tail | grep to isolate the 2nd read
    head -n 8 $BI/gva_course/mapping/data/SRR030257_1.fastq | tail -n 4 | grep "^[ACTGN]*$"
    head to show the first 10 lines, grep to only print the 3 sequence lines
    head $BI/gva_course/mapping/data/SRR030257_1.fastq | grep "^[ACTGN]*$"
    grep to only print the first 2 lines containing sequence
    grep -m 2 "^[ACTGN]*$" $BI/gva_course/mapping/data/SRR030257_1.fastq

    ^ by increasing the "-m" value we can now quickly get a block of sequence of any size of our choosing. This is the first truly useful command on this page. With a block of sequence, you can start to see things like:

    1. if the first/last bases are always the same

    2. if the reads are the same length

    3. if a single sequence shows up a huge number of times
  4. This is our first example of the fact that there are often many different ways to do the same thing in NGS analysis (two more ways to grab the 2nd read are sketched just below this list). While some may be faster or more efficient, the same answers are still achieved. Don't let the pursuit of perfection keep you from getting your answers in whatever way you can justify.
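
For instance, here is a sketch of two more ways to isolate the 2nd read, using the standard sed and awk tools instead of head and tail:

sed or awk to isolate the 2nd read
sed -n '5,8p;8q' $BI/gva_course/mapping/data/SRR030257_1.fastq      # print lines 5-8, then quit
awk 'NR >= 5 && NR <= 8' $BI/gva_course/mapping/data/SRR030257_1.fastq   # same idea using awk's line number variable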

Counting sequences

Often, the first thing you (or your boss) want to know about your sequencing run is simply, "how many reads are there?". For the $BI/gva_course/mapping/data/SRR030257_1.fastq file, the answer is 3,800,180. How can we figure that out?

The grep (or Global Regular Expression Print) command can be used to count the number of lines that match some criteria. Above, we used it to search for:

  1. any character from the set ACTGN, with the [] marking them as a set
  2. matching any number of times *
  3. from the beginning of the line ^
  4. to the end of the line $

Here, since we are only interested in the number of reads we have, we can make use of the fact that the 3rd line of each FASTQ entry is a '+' and a '+' only, together with grep's -c option, to simply report the number of reads in a file.

Can you use the information above to write a grep command to count the number of reads in the same file?
grep -c "^+$" $BI/gva_course/mapping/data/SRR030257_1.fastq
Think you can get away without the ^ and $ anchors? No you can't!
grep -c "+" $BI/gva_course/mapping/data/SRR030257_1.fastq

Gives a result of 4,770,069 because the + sign is found in some of the quality score lines.

Remember: computers always answer exactly what you ask; the trick is asking the right question.

Without the anchors you asked the computer, "how many lines have a + symbol on them?" With the anchors you asked, "how many lines start with a + symbol and have no other characters on the line?" Remember, this only works when we know for certain that line 3 is a "+" symbol by itself. This is where head/tail can be useful.

We can also check using similar methods (giving another example of different analyses producing the same result):

grep to look specifically for sequences directly. This will work regardless of what line 3 is
grep -c "^[ACTGN]*$" $BI/gva_course/mapping/data/SRR030257_1.fastq
The wc command (word count) using the -l switch to tell it to count lines
wc -l $BI/gva_course/mapping/data/SRR030257_1.fastq 

The wc -l command says there are 15,200,720 lines. As FASTQ files have 4 lines per sequence, the file has 15,200,720/4 or 3,800,180 sequences.

You can do this arithmetic directly in the shell, but bash has a really strange syntax for it: a double-parenthesis operator. Additionally, unlike a calculator that automatically prints the result to the screen when it performs an operation, we have to explicitly tell bash that we want to see the result. We do this by handing the arithmetic expression to the echo command.

Arithmetic in Bash
echo $((15200720 / 4))

While this is certainly possible, memorizing different formats is often not worth the effort, and it can be easier to use another program (i.e. Excel or a standard calculator) for this type of work, or to use other commands like grep.
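
That said, if you would rather stay in the shell, the line count and the division can be combined into a single command; here is a sketch using command substitution:

Counting reads in one step
echo $(( $(wc -l < $BI/gva_course/mapping/data/SRR030257_1.fastq) / 4 ))   # prints 3800180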


Counting with compressed files

Thus far we have looked at an uncompressed FASTQ file. Because FASTQ files contain millions to billions of lines and often billions of characters, they are usually stored in a compressed format. Specifically, they are typically stored 'gzipped' to save storage space and will typically end with ".gz" so you can identify them. While the files are easily changed from compressed to non-compressed and back again (and you will do some of this throughout the course and plenty more in your own work), the bigger the file, the longer such actions will take.
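
As a minimal sketch of that round trip, using a hypothetical file named my_reads.fastq (note that both commands replace the file they are given rather than keeping a copy):

Compressing and decompressing a FASTQ file
gzip my_reads.fastq        # produces my_reads.fastq.gz and removes the original
gunzip my_reads.fastq.gz   # restores my_reads.fastq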

Sometimes you've already set up some commands to do a particular analysis with a program that accepts gzipped compressed files as inputs, but you are still interested in checking how many reads you have overall (maybe you want to calculate how many reads survive a trimming-mapping-extraction pipeline). For years, the way I did this was by using pipes to send output to the word count command. I was thrilled because it meant I didn't have to gunzip all the files and then gzip them back when I was done. Specifically, I used gunzip -c to write decompressed data to standard output (-c means "to console", and leaves the original .gz file untouched), piped that output to wc -l to get the line count, copy-pasted that into Excel, and divided the cell value by 4 to get the final answer.

Using wc -l on a compressed file output
gunzip -c $BI/gva_course/mapping/data/SRR030257_2.fastq.gz | wc -l

Does that sound tedious to you? Until about 3 years ago, to me it just sounded like how to determine the number of reads in a compressed file without having to re-gzip the files after you had the answer.

Then, a few years ago, while trying to do something slightly different with grep, I was looking at the grep manual when the following option jumped out at me:

-Z, -z, --decompress        Force grep to behave as zgrep.

After some googling I found out that a tremendous amount of time could be saved by just using the zgrep command:

zgrep command to count reads in .fastq.gz files much quicker
zgrep -c "^+$" /corral-repl/utexas/BioITeam/gva_course/mapping/data/SRR030257_2.fastq.gz

While you shouldn't spend a large amount of time looking for the perfect solution, don't be afraid to try new things (as long as your data is backed up somewhere in case you mess a file up beyond recognition or repair), and always think about how you can apply new skills to old problems.


While checking the number of reads a file has can solve some of the most basic problems, it doesn't really provide any direct evidence as to the quality of the sequencing data. To get this type of information before starting meaningful analysis other programs must be used.

Evaluating FASTQ files with FastQC

FastQC overview

Once you move past the most basic questions about your data, you need to move onto more substantive assessments. As discussed above, this often-overlooked step helps guide the manner in which you process the data, and can prevent many headaches that could require you to redo an entire analysis after they rear their ugly heads.

FastQC is a tool that produces a quality analysis report on FASTQ files; its report has great examples and is easy to understand.


Below is a recap of what was discussed during the presentation:

First and foremost, the FastQC "Summary" on the left should generally be ignored. Its "grading scale" (green - good, yellow - warning, red - failed) incorporates assumptions for a particular kind of experiment, and is not applicable to most real-world data. Instead, look through the individual reports and evaluate them according to your experiment type.

The FastQC reports I find most useful are:

The Per base sequence quality report, which can help you decide if sequence trimming is needed before alignment.

The Sequence Duplication Levels report, which helps you evaluate library enrichment / complexity. But note that different experiment types are expected to have vastly different duplication profiles.

The Overrepresented Sequences report, which helps evaluate adapter contamination.

A couple of other things to note:

  • For many of its reports, FastQC analyzes only the first 200,000 sequences in order to keep processing and memory requirements down.
  • Some of FastQC's graphs have a 1-100 vertical scale that is tricky to interpret. The 100 is a relative marker for the rest of the graph.
    • For example, sequence duplication levels are relative to the number of unique sequences.

Running FastQC

You may recall from today's first tutorial that we used the conda system to install fastqc in preparation for this tutorial. If you did not complete that, please go back and do so now, and don't hesitate to ask a question if you are having difficulties. Interactive GUI versions are also available for Windows and Macintosh and can be downloaded from the Babraham Bioinformatics web site. We don't want to clutter up our work space, so copy the SRR030257_1.fastq file to a new directory named GVA_fastqc_tutorial on scratch, and use fastqc's help option after it is installed to figure out how to run the program. Once the program has completed, use scp to copy the important file back to your local machine. (The key words in that description hint at what steps to take next.)

Running FastQC example
mkdir $SCRATCH/GVA_fastqc_tutorial
cd $SCRATCH/GVA_fastqc_tutorial
cp $BI/gva_course/mapping/data/SRR030257_1.fastq .
 
fastqc -h  # examine program options
fastqc SRR030257_1.fastq  # run the program

Potential error with fastqc program execution

As noted during Monday's class, several people were experiencing an error that included text:


Can't locate warnings.pm:   /corral-repl/utexas/BioITeam//local/share/perl5/warnings.pm: Permission denied at /home1/08965/vramirez/miniconda3/envs/fastqc-test/bin/fastqc line 2.
BEGIN failed--compilation aborted at /home1/08965/vramirez/miniconda3/envs/fastqc-test/bin/fastqc line 2.

This is believed to be related to the occasional "permission is denied" error some were getting when trying to access the BioITeam contents. If you have circled back to this and are experiencing this error, please get my attention. I have been unable to troubleshoot this fully as I have not been having the problem myself, but I believe the answer will be found in:

commands for troubleshooting fastqc error
perl -V  #bottom @INC: section
perldoc -l warnings  # should list the specific copy of the warnings module being used
echo $PERL5LIB

The solution will likely involve modifying $PERL5LIB so it no longer points at the BioITeam contents, which should force Perl to use the conda environment's versions.
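
A minimal sketch of that idea, assuming the conda environment supplies its own copies of the Perl modules, is to clear the variable for the current session and retry:

Possible workaround (an untested assumption, not a confirmed fix)
unset PERL5LIB             # stop Perl from searching the BioITeam directories
fastqc SRR030257_1.fastq   # retry; the conda environment's own modules should now be found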


ls -l shows something like this
-rwxr-xr-x 1 ded G-802740 498588268 Jun 13 12:06 SRR030257_1.fastq
-rw-r--r-- 1 ded G-802740    291714 Jun 13 12:07 SRR030257_1_fastqc.html
-rw-r--r-- 1 ded G-802740    455677 Jun 13 12:07 SRR030257_1_fastqc.zip

The SRR030257_1.fastq file is what we analyzed, so FastQC created the other two items. SRR030257_1_fastqc.html represents the results in a file viewable in a web browser. SRR030257_1_fastqc.zip is just a Zipped (compressed) version of the results.
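
If you want a quick look at the results before (or without) transferring anything, the zip archive also contains plain-text versions of the report; this sketch assumes FastQC's standard internal layout:

Peeking at the results from the command line
unzip -p SRR030257_1_fastqc.zip SRR030257_1_fastqc/summary.txt   # print the pass/warn/fail summary to the screen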

Looking at FastQC output

As discussed in the introduction tutorial, you can't run a web browser directly from your command line environment. You should copy the results back to your local machine (via scp) so you can open them in a web browser.

A reminder about the scp tutorial if you didn't get to it in the first part of today's class

Here is a more detailed description of how to use scp to transfer files around. In this case, you will replace "README" with "SRR030257_1_fastqc.html"

Reminder of the 2 terminal method for transferring files with scp
# on tacc terminal
pwd
 
# on new terminal of local computer
scp <username>@stampede2.tacc.utexas.edu:<pwd_results_from_other_window>/SRR030257_1_fastqc.html ~/Desktop
 
# open the newly transferred file from the desktop and see how the data looks


Exercise: Should we trim this data?

Based on this FastQC output, should we trim (1) adaptor sequences from the ends of the reads AND/OR (2) low quality regions from the ends of the reads?

The Per base sequence quality report does not look great. If I were making the call to trim based solely on this, I'd probably pick 31 or 32 as the last base, as that is where the average quality score first drops significantly. More importantly, nearly 1.5% of all the sequences are all A's according to the Overrepresented Sequences report. This is something that often comes up in Illumina MiSeq runs where insert sizes are shorter than the overall read length. Next we'll start looking at how to trim our data before continuing.

FASTQ Processing Tools

Cutadapt

There are a number of open source tools that can trim off 3' bases and produce a FASTQ file of the trimmed reads to use as input to the alignment program. Cutadapt provides a simple command line tool for manipulating FASTA and FASTQ files. The program description on their website provides good details of all the capabilities and examples for some common tasks. Using what you learned about installing fastqc via conda, see if you can figure out how to install cutadapt.

a good first guess would be
conda install cutadapt

However, just as we saw for fastqc, cutadapt isn't in one of the known channels. Based on the error message, we would then be directed to search for cutadapt on https://anaconda.org/ and would make our way to https://anaconda.org/bioconda/cutadapt, where we would find the solution is actually:

conda install -c bioconda cutadapt


To run the program, you simply type 'cutadapt' followed by whatever options you want, and then the name of the fastq file without any option in front of it. Use the -h option to see all the different things you can do, and consider which options might be useful for removing a string of "A"s at the 3' end of the read and for shortening the read to 34 bp.

Adapter trimming

cutadapt can be used to trim specific sequences. Typically this is done to remove adapter sequences introduced during library prep that are not related to your sample. Based on our fastqc analysis, the sequence AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA is significantly overrepresented in our data. How would you use cutadapt to remove those sequences from the fastq file?

 

Again, we go back to the program documentation to find what we are looking for: cutadapt -h

For removing the string of "A"s, consider using the -o and the -a options. It is good practice to always include the -m option to get rid of reads shorter than 16-20 bp, as these are less likely to map well, particularly with more recent data where read lengths are 100 bp or more.

Possible solution
cutadapt -o SRR030257_1.Adepleted.fastq -a AAAAAAAAAAAAAAAAAAAA -m 16 SRR030257_1.fastq

 

Command portion                   Purpose
-o SRR030257_1.Adepleted.fastq    create this new output file, the name emphasizing that the A bases have been depleted
-a AAAAAAAAAAAAAAAAAAAA           remove this sequence (and everything 3' of it) from reads. 20 As are listed here. Thinking further: how often do homopolymers exist naturally in genomes, and when they do exist, how long do they tend to be?
-m 16                             discard any read shorter than 16 bases after the sequence is removed, as these are more difficult to uniquely align to your reference
SRR030257_1.fastq                 use this file as input

From the summary printed to the screen you can see that this removed a little over 2.5M bp of sequence.
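
As a quick sanity check, and a sketch reusing the counting trick from earlier on this page, you can compare read counts before and after, since -m 16 will have discarded some reads entirely:

Comparing read counts before and after trimming
grep -c "^+$" SRR030257_1.fastq             # reads before trimming
grep -c "^+$" SRR030257_1.Adepleted.fastq   # reads remaining after trimming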

Best practice consideration

Since we identified the overrepresentation of the "A"s as being the biggest problem, when you are looking at your own data you would probably want to rerun FastQC on the SRR030257_1.Adepleted.fastq file and reevaluate if 34bp is still the best length to trim at.

It might be longer now, because the bases that were removed could be lower quality and thus dragging down the overall quality at those positions. This is one of the reasons that trimming to a specific length is rarely the best solution (though it is often the quickest way to get at the highest quality data).

If you have moved quickly through the material thus far, or want to revisit this during the optional tutorials later in the week, consider doing exactly this: rerun FastQC and see if you would still make the same choice of how much to trim the sequence down.
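
A sketch of that re-check (assuming fastqc is still available in your active conda environment):

Rerunning FastQC on the A-depleted file
fastqc SRR030257_1.Adepleted.fastq   # should produce SRR030257_1.Adepleted_fastqc.html to scp back and inspect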


Trimming low quality bases

Low quality base calls from the sequencer can cause an otherwise mappable sequence not to align. See if you can come up with a cutadapt command to trim the reads down to 34bp.

Again, we go back to the program documentation to find what we are looking for: cutadapt -h

For limiting the length we want to focus on the -l option. As mentioned above we'll also include a -o and -m option.

One possible solution
cutadapt -m 16 -l 34 -o SRR030257_1.trimmed.fastq SRR030257_1.fastq
Command portion                 Purpose
-o SRR030257_1.trimmed.fastq    create this new output file, the name emphasizing that the reads have been trimmed
-l 34                           shorten each read to at most 34 bases
-m 16                           discard any read shorter than 16 bases after trimming, as these are more difficult to uniquely align to your reference
SRR030257_1.fastq               use this file as input

This time, we see we have removed much more sequence: more than 7.5M bp. That tells us that we got rid of at least 5 million bp that were not part of a chain of As, without telling us anything about whether they were the lower quality bases that were dragging down the overall quality score. Additionally, you may have noticed that the trimming was much quicker, as the command was simpler.
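
If you want to verify the length trimming did what you expect, here is a sketch using awk (the 2nd line of every 4-line entry is the sequence) to tabulate the read lengths in the new file:

Tabulating read lengths after trimming
awk 'NR % 4 == 2 {print length($0)}' SRR030257_1.trimmed.fastq | sort -n | uniq -c   # counts of reads at each length; none should exceed 34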

Possible solution for doing the A-depletion and the length trimming in a single command
cutadapt -o SRR030257_1.Adepleted_and_trimmed.fastq -l 34 -a AAAAAAAAAAAAAAAAAAAA -m 16 SRR030257_1.fastq

This time, you see that we removed just under 10M bp. Does it make sense that doing both at the same time removes slightly less sequence?



Bonus Exercise: compressing the trimmed files 

As mentioned above, fastq files are often stored as gzipped compressed files as they are smaller, easier to transfer, and many programs allow for their use directly.

Possible solutions after you have already used cutadapt
gzip SRR030257_1.Adepleted.fastq
gzip SRR030257_1.Adepleted_and_trimmed.fastq
gzip SRR030257_1.trimmed.fastq 


# or more simply using wildcards:
gzip *.fastq
# note that using the * wildcard to match all files that end with .fastq means you would end up compressing the input file as well

Near the top of cutadapt's help output, above the citation, you will see a paragraph that starts: "Input may also be in FASTA format. Compressed input and output is supported and auto-detected from the file name (.gz, .xz, .bz2)"

So simply by adding .gz to the output file name, cutadapt will compress the output after it does the trimming. This may have been more difficult to find, since none of the option descriptions specifically mention compressing the output.

Possible solution using the program directly
cutadapt -o SRR030257_1.Adepleted_and_trimmed.fastq.gz -l 34 -a AAAAAAAAAAAAAAAAAAAA -m 16 SRR030257_1.fastq

Notice the only thing required was to add the .gz ending to our desired output name and cutadapt takes care of the rest for us.
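
Applying a skill from earlier on this page to the new file, you can also confirm the compressed output is intact without decompressing it:

Counting reads in the compressed output
zgrep -c "^+$" SRR030257_1.Adepleted_and_trimmed.fastq.gz   # count reads directly in the gzipped file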


This is another example of different solutions giving the same final product, and of how careful reading of documentation can improve your work. NGS data analysis is a results-driven process: if the result is correct, you know how you got it, and you can reproduce it, then how you got there is up to you.

A note on versions 


In our first tutorial we mentioned how important knowing what version of a program you are using can be. When we installed the cutadapt package we didn't specify what version to install. Can you figure out what version you used, and what the most recent version of the program is?

try using the program's help files, or conda's list function

Still not sure?
cutadapt --version
conda list
conda list cutadapt

Note that all 3 of the above commands will give you the same answer: 1.18

Figuring out the most recent version is a little more complicated. Unlike programs on your computer like Microsoft Office or your internet browser, there is nothing in an installed command line program that tells you if you have the newest version, or even what the newest version is. If you go to the program's website (easily found with Google or this link), the changes section lists all the versions that have been released, with v4.1 released on the 7th of this month.

If you were to look at the labels section of https://anaconda.org/bioconda/cutadapt you would see that both v4.1 and v1.18 are available. Since we didn't specify a version, conda tried to figure out what would work best. If you were to play around with removing the cutadapt package and attempting to force v4.1 to be installed, you would eventually find that there is a conflict with the linux-64 glibc version required by v4.1. Cutadapt version 1.18 does not have such requirements, and therefore was installed as the only workable option.

To install the 4.1 version of cutadapt via conda, we would have to explicitly specify both the version of cutadapt that we wanted (4.1) and a compatible version of glibc. The point of using conda is that it is supposed to make installing programs easier, and having to hunt through error messages is anything but easy. This is simplified by specifying an additional conda channel, "conda-forge". Given that version 1.18 did the job, it is not likely to be worth the effort to update cutadapt, but if you wanted to:

Easiest known solution to installing version 4.1 of cutadapt
conda install -c bioconda cutadapt=4.1 -c conda-forge

We will discuss conda-forge further in later tutorials.

This won't be the last time we mention different program versions.

Optional Exercise: Improve the quality of R2 the same way you did for R1.

Unfortunately we don't have time during the class to do this, but as a potential exercise in your free time, you could improve R2 the same way you did R1 and use the improved fastq files in the subsequent read mapping and variant calling tutorials to see the difference it can make in overall results.
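
If you do try it, here is a sketch of what the R2 workflow might look like (check R2's own FastQC report before assuming the same poly-A problem and trimming choices apply):

Possible R2 workflow
cp $BI/gva_course/mapping/data/SRR030257_2.fastq.gz .
fastqc SRR030257_2.fastq.gz   # fastqc reads gzipped input directly
cutadapt -o SRR030257_2.Adepleted_and_trimmed.fastq.gz -l 34 -a AAAAAAAAAAAAAAAAAAAA -m 16 SRR030257_2.fastq.gz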


Return to GVA2022 course page
