Throughout the course we have focused, with a few exceptions, on aligning a single sample of reads against a single reference genome, but sometimes we know that is not the biological reality. Sometimes there are multiple different things present in a single sample. One of the most common situations is a sample containing both a chromosome and a plasmid. Here we will examine the same sample used in the novel DNA identification tutorial to see how including the 2nd reference file changes the mapping results.
The concepts discussed in this tutorial are identical to those in the breseq with multiple refs tutorial, and both work with the same data. The discussion of results differs between the two tutorials.
- Understand how mapping against multiple reference genomes simultaneously decreases noise and increases confidence in variant calls.
- Map reads against multiple references, and evaluate mapping success/frequency.
Mapping against multiple references
Why is mapping against multiple references at the same time preferable to mapping against multiple different references one at a time? The answer relates to distinguishing real mutations from errors. As we discussed in our initial mapping tutorial/presentation, mapping scores and alignment scores reflect how confident the mapping program is that an individual read is mapped to the correct location in the genome, and how well that read aligns at that location. Imagine a hypothetical situation wherein you have a 200bp region of a low-copy plasmid that differs from the chromosome by a single base.
- If you map against both references at the same time, the mapper will associate each read uniquely to either the plasmid region or the chromosome without having any mismatches.
- If you map against the references separately, each run will place all of the reads from this region on the single available reference with a high-quality mapping score but a slightly diminished alignment score, and the differing base will appear as a variant at ~50% frequency. In a clonal haploid, a 50% frequency would be concerning, as 50% shouldn't occur under normal circumstances, but a duplication of the region followed by a single base change in one of the copies would produce an identical result.
Get some data
Here we will use the same data as was used in the novel DNA identification tutorial plus an additional reference file associated with the plasmid known to be present.
^ expected 2 gzipped fastq files and 2 gbk reference files
Set up the run
Hopefully by now, it will not surprise you that this will be a 3-step process: converting the references to fasta, indexing the references, and mapping the reads. If it does, revisit the read mapping tutorial and the novel DNA identification tutorial for additional information and examples. Here, less description of the individual steps will be given.
Recall that fasta files can have multiple sequence entries, and thus we could combine our new fasta files using the cat command and piping to create a single file containing both reference sequences. In that case, the same procedure as was used in the original read mapping tutorial could be followed and would produce the same results as obtained here. However, besides making it less obvious how bowtie2 handles multiple reference sequences (semi-defeating the purpose of this tutorial), I think it better practice to keep the reference files separate: from personal experience, it is easier to supply a list of different fasta files to be indexed than it is to edit fasta files to add/remove individual entries, as I recently did for a project involving 39 different references.
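As a toy illustration of the point above (the filenames here are made up, not the tutorial's actual files), concatenating two single-entry fasta files yields one multi-entry fasta:

```shell
# Toy example with hypothetical filenames: a fasta file can hold multiple
# sequence records, so two references can be concatenated into one file.
printf '>chromosome\nACGTACGTACGT\n' > chromosome.fasta
printf '>plasmid\nTTTTCCCCGGGG\n'   > plasmid.fasta
cat chromosome.fasta plasmid.fasta > combined.fasta
grep -c '^>' combined.fasta   # counts sequence entries: 2
```

Keeping the files separate instead simply means handing both filenames to the indexing step.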
Remember to make sure you are on an idev node
For reasons discussed numerous times throughout the course already, please be sure you are on an idev node. It is unlikely that you are currently on an idev node, as copying the files while on an idev node seems to be having problems, as discussed. Remember that the hostname command and showq -u can be used to check whether you are on one of the login nodes or one of the compute nodes. If you need more information or help launching a new idev node, please see this tutorial.
^ this may be the first time we have used the --threads option while building our index; it simply gives the index command access to all the threads/processors available. While it increases the amount of text printed to our terminal, it also speeds up the process.
^ finally, you may notice that we use a much shorter name for the index inside the bowtie2 directory. Recall the advice about naming, and the risk of creating multiple 'my-bowtie2-reference' files and no longer being able to easily tell which genomes they refer to.
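Putting the notes above together, the indexing step might look like the following sketch. The fasta filenames and index name are assumptions; substitute your own. The key point is that bowtie2-build accepts a comma-separated list of fasta files, so the references can stay in separate files:

```shell
# Sketch only: filenames and index prefix are hypothetical.
# bowtie2-build takes a comma-separated list of reference fasta files,
# so no concatenation is needed.
mkdir -p bowtie2
bowtie2-build --threads 48 chromosome.fasta,plasmid.fasta bowtie2/both_refs
```

Note there is no space after the comma; bowtie2-build treats spaces as separating a new argument.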
The following command will take ~45 seconds to complete. Before you run it, execute 'bowtie2 -h' so that while the command is running you can try to figure out what the different options we did not include in our first tutorial are doing.
Here we map with the same conditions as were used in the novel DNA identification tutorial, though you may notice that in this case we are not capturing unmapped reads to a separate fastq file. A description of the options which may be new to you follows. Click here to explain the new options.
map in very sensitive local alignment mode
print time info
use 48 processors
compressed reads used for mapping; reads are often stored compressed to save disk space
Sam file detailing the mapping

Why did this mapping take ~45 seconds despite having 2 references to map against, while in the novel DNA identification tutorial it took ~7 minutes?
In the novel DNA identification tutorial, we additionally requested that unmapped reads be printed directly to a fastq file for downstream use. Reading from and writing to the disk are often two of the most time-consuming tasks a computer performs. Consider this when programs offer options to write files that you will never use, and when stringing together commands that write to standard out and read from standard in, as doing so skips the write-then-read steps.
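Tying the options above together, the mapping command might look something like this sketch. The index prefix, fastq names, and output name are assumptions; substitute the files from your own run:

```shell
# Sketch only; index prefix and file names are hypothetical.
# --very-sensitive-local : map in very sensitive local alignment mode
# -t                     : print time info
# -p 48                  : use 48 processors
# -1/-2 take the gzipped fastq files directly; no decompression needed
bowtie2 --very-sensitive-local -t -p 48 \
    -x bowtie2/both_refs \
    -1 sample_R1.fastq.gz -2 sample_R2.fastq.gz \
    -S both_refs.sam
```

Because no --un-conc (unmapped reads) output is requested, bowtie2 skips an entire set of disk writes, which is a large part of why this run is so much faster.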
Evaluating the results
Results from mapping output
The standard output of the program listed:
99.31% overall alignment rate
For reference, in case you did not do the novel DNA tutorial, only 66.62% of reads mapped to the chromosome alone. This suggests that ~1/3 of all the reads came from the plasmid.
1-liner for determining frequencies
This may take a minute to run. Expected output:
Note that the alignment of the columns is not awesome, but the output can easily be copied to Excel for better alignment, and this would work for any number of sam files (aka samples).
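The counting idea behind the 1-liner can be shown on a tiny made-up SAM fragment: column 3 (RNAME) holds the name of the reference each read mapped to, with '*' for reads that did not map anywhere, so tallying that column gives per-reference read counts:

```shell
# Build a tiny fake sam file (fields are tab separated; '@' lines are headers).
{
  printf '@HD\tVN:1.6\n'
  printf 'read1\t0\tchromosome\t100\t42\t4M\t*\t0\t0\tACGT\tIIII\n'
  printf 'read2\t0\tplasmid\t200\t42\t4M\t*\t0\t0\tACGT\tIIII\n'
  printf 'read3\t4\t*\t0\t0\t*\t*\t0\t0\tACGT\tIIII\n'
  printf 'read4\t0\tchromosome\t300\t42\t4M\t*\t0\t0\tACGT\tIIII\n'
} > toy.sam

# Count reads per reference: skip header lines, take column 3, tally.
grep -v '^@' toy.sam | cut -f 3 | sort | uniq -c
```

On a real multi-gigabyte sam file the sort step is what makes the command take a minute to run.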
Downstream consequences of mapping to both references at the same time.
Notice that if you divide the reads mapping to the chromosome (1556813) by the total reads (2452484) you get 63.48%, not the 66.62% listed above for mapping to the single reference.
As mentioned in the introduction, this is expected: the presence of the second reference allows individual reads to map to the correct reference, and with a higher score than they would have on the incorrect reference, rather than being forced to map to the only available reference with a lower alignment score.
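The arithmetic can be double-checked with a quick awk one-liner, using the counts quoted above:

```shell
# 1556813 chromosome-mapped reads out of 2452484 total (values from above)
awk 'BEGIN { printf "%.2f%%\n", 1556813 / 2452484 * 100 }'   # prints 63.48%
```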
If you were to take this set of reads, map it to the chromosome alone (as was done in the novel DNA tutorial), map it to the chromosome and plasmid together (as you have just done here), call SNVs on both sam files (with the knowledge you gained from the SNV tutorial), and compare the variants, you would see fewer variants in the sample where the reads were mapped to both references. You could further use the information in the human trios tutorial to call the SNVs at the same time, and conduct the comparison presented in that tutorial to identify specifically which variants differ.
While the data presented above may be referred to as 'tidy format', where each line contains 1 piece of information, making it easy for computers to parse (especially with the tidyverse packages in R), my human eyes personally prefer tables where rows correspond to samples (in this case sam files) and columns correspond to values (in this case the references mapped to). I'm sure someone better versed in bash/awk/etc could put together a script to do this directly from the raw file; as I'm better versed in python, I instead use the for loop from the above command, pipe the tidy results to a .tsv file, then use a python script to convert it to a table. How can you get hold of this awesome tool, you ask? BioITeam, of course. Note this means you will need to be on the head node to use it.
While the above doesn't print anything, 'cat ref_counts_table.tsv' prints the following:
Again the format is not great, but it can be copy-pasted to Excel or similar for column alignment. In either case this will produce a single row for each sample and a single column for each reference, while at the same time changing the * category (which, from reading about samtools, you would see denotes a read that does not map to any reference) to the more human-readable 'unmapped' classification.
You were promised a description of the complicated for loop above.
The above box is a copy-paste of the header information found in the bt2-mapping-ref-count-table.py file.
Note 1 last trick here: the text between the ` marks (backtick, NOT single quote) is evaluated first, allowing you to open the script for viewing in the less viewer. Frustratingly, once you type the first ` mark, you lose the ability to tab complete, often making it easier to type the which command with tab completion, then go back and add the ` marks.
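Concretely, the trick looks like this (the script name comes from above; `$(...)` is the newer equivalent of backticks and behaves the same way here):

```shell
# `which ...` runs first and prints the script's full path;
# less then opens that path for viewing.
less `which bt2-mapping-ref-count-table.py`

# Equivalent with the newer $() syntax, which also nests more cleanly:
less $(which bt2-mapping-ref-count-table.py)
```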
As noted above, the human trios tutorial deals with calling SNVs on multiple samples at once. Consider how you could combine the two steps.
Since we are looking at a bacterium and its plasmid, you can also run this through breseq. A very short tutorial can be found here.
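As a hedged sketch (file names are assumptions), breseq accepts the -r option multiple times, so both the chromosome and plasmid GenBank files can be supplied in a single run:

```shell
# Sketch only; reference and read file names are hypothetical.
# -r can be repeated once per reference file.
breseq -j 48 -r chromosome.gbk -r plasmid.gbk \
    sample_R1.fastq.gz sample_R2.fastq.gz
```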