This section provides directions for generating SSCS (Single Strand Consensus Sequence) reads and trimming molecular indexes from raw fastq files.
First we want to generate SSCS reads, taking advantage of the molecular indexes added during library prep. To do so we will use a "majority rules" python script (named SSCS_DCS.py) which was heavily modified by DED from a script originally created by Mike Schmitt and Scott Kennedy for the original duplex sequencing paper. This script can be found in the $BI/bin directory, which means we will need to copy it to our scratch directory to use it on the compute node. For the purpose of this tutorial, the paired end sequencing of sample DED110 (prepared with a molecular index library) has been placed in the $BI/gva_course/mixed_population directory. Invoking the script is as simple as typing SSCS_DCS.py; adding -h will list the available options. The goal of this command is to generate SSCS reads for any molecular index with at least 2 reads present, and to generate a log file that will tell us some information about the data.
mkdir $SCRATCH/GVA_Error_Correction
cd $SCRATCH/GVA_Error_Correction
cp $BI/gva_course/mixed_population/DED110*.fastq .
cp `which SSCS_DCS.py` .
If you have not seen the double backtick construct in any of the tutorials yet, the command within the backticks is evaluated first and its output is substituted in place of the backticked text for the outer command to use. In this case it means that wherever the SSCS_DCS.py script is in our path, it will be copied to our current location.
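A minimal illustration of the substitution, using ls as a stand-in since SSCS_DCS.py only exists on the course machines; the nestable $( ) form is the modern equivalent of the backticks:

```shell
# The inner command runs first; its stdout replaces the backticked text,
# so cp would receive the script's full path as its source argument.
echo "found at: `which ls`"

# $( ) does the same job and is easier to read and to nest:
echo "found at: $(which ls)"
```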
You can often get more information about python scripts by typing the name of the script followed by the -h flag.
This is expected to take more than 30 minutes in an idev shell. You may want to run it as a submitted job rather than interactively. If you are unsure how to accomplish this please ask. In the meantime, we suggest looking over the alternative library prep presentation (Alternative Library Prep Methods.pdf) or the duplex sequencing paper itself.
The SSCS_Log is a great place to start. Use the tail command to look at the last 8 lines of the log file to determine how many reads made it from raw reads to error corrected SSCS reads.
tail -n 8 SSCS_Log
There are approximately 1/6th as many SSCS reads as raw reads:
Total Reads: 6066836
SSCS count: 978142
While this is somewhat misleading as it takes a minimum of 2 reads to generate a single SSCS read, we do have some additional information regarding what happened to the other reads. The first thing to consider is the "Dual MI Reads"; these represent the reads which correctly had the 12bp of degenerate sequence and the 4bp anchor. In this case, more than 1.5 million reads lacked an identifiable molecular index on read 1 and/or read 2. By that regard, we had ~1/4 as many SSCS reads as reads with a usable molecular index.
Perhaps more interesting is the number of errors removed. This is also available in the SSCS_Log file, but it sits in the middle of the file and there is no good handle to grep for. One option is to cat the entire file and scroll around; another is to combine the tail and head commands to extract only the specific lines:
tail -n 94 SSCS_Log | head -n 86
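If you prefer to think in absolute line numbers, the same window can be computed from the file length with sed. This is a sketch that assumes, matching the tail/head numbers above, that the table is 86 lines long and ends 8 lines before the end of the log:

```shell
# tail -n 94 | head -n 86 keeps lines (total-93) through (total-8);
# compute those positions explicitly and print that range with sed
total=$(wc -l < SSCS_Log)
sed -n "$((total - 93)),$((total - 8))p" SSCS_Log
```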
The 3 columns are the read position, the number of bases changed, and the number of bases not changed. If you copy and paste these 3 columns into excel you can easily calculate the sum of the 2nd column to see that 446,104 bases were changed. The read position is based on the 5'-3' sequence, and you should notice that generally the higher the read position, the more errors were corrected. This should make sense based on what we have talked about with decreasing quality scores as read length increases.
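Instead of pasting into excel, the same sum can be computed on the command line. A minimal sketch, assuming the extracted block is exactly the 3-column table described above:

```shell
# Extract the table as before, then sum column 2 (bases changed) with awk
tail -n 94 SSCS_Log | head -n 86 | awk '{sum += $2} END {print sum}'
```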
From our earlier tutorial on read quality control you likely remember that we installed cutadapt into our GVA2021 conda environment. Use cutadapt's help files to figure out how to trim the first 16 bases off DED110_CATGGC_L006_R1_001.fastq and DED110_CATGGC_L006_R2_001.fastq
conda activate GVA2021
cutadapt -h
cutadapt -u 17 DED110_CATGGC_L006_R1_001.fastq -o DED110.R1.trimmed.fastq
cutadapt -u 17 DED110_CATGGC_L006_R2_001.fastq -o DED110.R2.trimmed.fastq
Each of these commands will take 1-2 minutes to complete. Think about ways you could have run both commands at the same time. In the tutorials (including some of the optional ones) there are at least 3 ways you may have been shown, and many more we haven't. How many can you come up with?
# 1. use a semicolon to separate the two commands so that the second will start as soon as the first finishes:
cutadapt -u 17 DED110_CATGGC_L006_R1_001.fastq -o DED110.R1.trimmed.fastq; cutadapt -u 17 DED110_CATGGC_L006_R2_001.fastq -o DED110.R2.trimmed.fastq

# 2. use a double ampersand (&&) between the commands so the second will start as soon as the first finishes, if it finishes without any errors:
cutadapt -u 17 DED110_CATGGC_L006_R1_001.fastq -o DED110.R1.trimmed.fastq && cutadapt -u 17 DED110_CATGGC_L006_R2_001.fastq -o DED110.R2.trimmed.fastq

# 3. use a trailing & to have the commands run in the background:
cutadapt -u 17 DED110_CATGGC_L006_R1_001.fastq -o DED110.R1.trimmed.fastq &
cutadapt -u 17 DED110_CATGGC_L006_R2_001.fastq -o DED110.R2.trimmed.fastq &
Of the 3 answers we show above, one of them will actually finish much sooner than the others. Do you know which one and why?
The 3rd solution will finish before the other two because the two commands are executed at the same time rather than the second waiting for the first to finish. In many circumstances this is among the best ways to do something like this, and 'simple' read trimming with cutadapt is one of them. If you are doing something much more computationally intense (say read mapping, variant calling, or genome assembly), trying to complete the tasks at the same time will often leave you with no results at all, as you run out of memory even on the compute nodes and the programs error out.
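One refinement worth knowing for the background approach: a trailing wait makes the shell block until all background jobs have finished, so a downstream command cannot start reading half-written output files. A sketch using the same cutadapt commands as above:

```shell
# Start both trims in parallel...
cutadapt -u 17 DED110_CATGGC_L006_R1_001.fastq -o DED110.R1.trimmed.fastq &
cutadapt -u 17 DED110_CATGGC_L006_R2_001.fastq -o DED110.R2.trimmed.fastq &
# ...then block here until both background jobs complete
wait
echo "both trims finished"
```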
Checking the current contents of the directory will show you we've now made 2 new .trimmed.fastq files in addition to the trio of .fastq files we made in the error correction part of the tutorial. The DED110_SSCS.fastq file is the one of most interest to us for the follow-up tutorial, but both of the .trimmed.fastq files will also be of interest. Rather than working with 3 files for 2 samples (error corrected and trimmed), use what you have learned about redirection to generate a single file called DED110_all.trimmed.fastq and check your work.
cat *.trimmed.fastq > DED110_all.trimmed.fastq
head DED110_all.trimmed.fastq
tail DED110_all.trimmed.fastq
wc -l *.trimmed.fastq
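The wc -l output can also be checked arithmetically rather than by eye. A small sketch, assuming the three .trimmed.fastq files produced above are present:

```shell
# The combined file should contain exactly as many lines as R1 + R2 together
r1=$(wc -l < DED110.R1.trimmed.fastq)
r2=$(wc -l < DED110.R2.trimmed.fastq)
all=$(wc -l < DED110_all.trimmed.fastq)
[ "$all" -eq $((r1 + r2)) ] && echo "line counts match"
```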
If your solution was different, feel free to ask me if yours will work the same.
You should now have 2 new .fastq files which we will use to call variants: DED110_SSCS.fastq and DED110_all.trimmed.fastq. You should take these files into a more in-depth breseq tutorial to compare the specific mutations that are eliminated by the error correction (SSCS). Link to other tutorial.