The FastQC tool was presented in the second tutorial on the first day of the class as the go-to tool for quality control analysis of fastq files. However, checking every fastq file is quite daunting, and evaluating each file individually can introduce its own set of artifacts or biases. MultiQC is a tool which works directly on FastQC reports to quickly generate summary reports, both to identify samples that differ from the rest of a group and to make global decisions about how to treat a set of files.
In this tutorial, we will:
- work with some simple bash scripting from the command line (for loops) to generate multiple fastqc reports simultaneously and look at 272 plasmid samples.
- work with MultiQC to make decisions about read preprocessing.
- identify outlier files that are clearly different from the group as a whole and determine how to deal with these files.
Hopefully by now you have had enough experience installing packages via conda that it is second nature to go to https://anaconda.org/, search for multiqc, find https://anaconda.org/bioconda/multiqc, and use the information there to install the package. Instead, it is suggested that you go to the multiqc homepage and see if they have any recommendations for installation. That suggestion itself is a hint that something potentially unexpected is recommended. Try to use this information to figure out what command you should use, and try what you think is best. Explanations follow the version check.
Always remember, you need to activate your environment after you create it.
If you caught the trick from the multiqc homepage, the two version checks should return "FastQC v0.11.9" and "multiqc, version 1.12" respectively.
If the version number for multiqc does not match, you will likely see something like this:
First, the version lists "dev" in the title, which typically means it is a temporary development version. Second, we get warnings about the pyyaml loader being unsafe and deprecated, likely as a matter of safety. Read through the next section, which identifies the correct command, before continuing, as it is untested whether this version of multiqc will work.
On the multiqc homepage, the quick install section has a tab for conda which lists the following suggested installation block:
Which was likely modified to:
If you did not get it the first time, go ahead and use the command above now (when prompted, overwrite/replace the existing environment). While this is not the first time we have seen conda-forge be useful to the point of being required, the point of stressing the home page rather than just the anaconda page is to instill better practices. Sometimes getting a program "working" is all you need, but more often you can save yourself time by tracking down the installation instructions from the developer.
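As a sketch only: the authoritative block comes from the multiqc homepage, and the environment name, the conda-forge/bioconda channel order, and bundling fastqc into the same environment are all assumptions based on this course's needs.

```shell
# Assumed form of the recommended install; check the multiqc homepage
# for the authoritative channels and versions before running.
conda create -n multiqc -c conda-forge -c bioconda multiqc fastqc
conda activate multiqc   # remember: activate the environment after you create it
```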
Get some data and verify access to fastqc
Copy the plasmid sequencing files found in the BioITeam directory gva_course/plasmid_qc/ to a new directory named GVA_multiqc. There are 2 main ways to do this, particularly since there are so many files (544 total).
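The two ways can be sketched as follows. This demo uses a stand-in directory and dummy file names, since the real BioITeam path and the 544 files only exist on TACC; substitute the actual gva_course/plasmid_qc/ path in your own session.

```shell
# Stand-in for the BioITeam source directory (gva_course/plasmid_qc/ on TACC);
# the dummy files here represent the 544 real sequencing files.
SRC=plasmid_qc_demo
mkdir -p "$SRC"
touch "$SRC"/plasmid_{A,B}_R{1,2}.fastq.gz

# Way 1: make the destination directory yourself and wildcard-copy into it
mkdir -p GVA_multiqc
cp "$SRC"/*.fastq.gz GVA_multiqc/

# Way 2: recursively copy the whole directory under a new name
cp -r "$SRC" GVA_multiqc_copy

ls GVA_multiqc | wc -l   # with the real data this should report 544
```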
Generating FastQC analysis
Here we present 2 different options for performing FastQC analysis on all 500+ samples. Given the very small size of these plasmid sequencing files, the second option, on the idev node, is probably the better choice. Before skipping down to it, I suggest reading through the first option and at least generating the "fastqc_commands" file, as in your own work you are likely to deal with larger numbers of large fastq files, which makes option 1 the better choice.
A note about running fastqc on the head node
Previously, people have asked if fastqc can be run on the head node. The answer is that for a single sample it is usually fine, but for large numbers of samples or large total numbers of reads it is probably not the best idea.
Option 1: job queue system
Throughout the first part of the course we focused on working with a single sample and thus were able to type commands one at a time. We also had only a few input files in any individual tutorial, so tab completion and ls were very useful. Here we are dealing with 544 files, which is more than the total number of files in all the required tutorials combined, and nobody wants to type out 544 commands one at a time. Therefore, we will construct a single commands file with 544 lines that launches all the commands without our having to know the name of any single file. To do so we will use the bash 'for' command.
For loops on the command line have 3 parts:
- A list of items to deal with one at a time, followed by a ';'
  - for f in *.gz; in the following example
- Something to do with each item in the list; this must start with the word 'do'
  - do echo "fastqc -o fastqc_output $f "; in the following example
- The word "done" so bash knows to stop looking for more commands
  - done in the following example, but we add a final redirect (>) so that rather than printing to the screen, the output goes to a file (fastqc_commands in this case)
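The three parts above can be put together as follows. This is a sketch against a few dummy files so the loop itself is visible; with the real 544-file directory the redirect captures 544 lines.

```shell
# Demo setup: dummy read files standing in for the 544 real .gz files
mkdir -p GVA_multiqc && cd GVA_multiqc
touch sample_{1,2}_R{1,2}.fastq.gz

# list (for f in *.gz;) + action (do echo ...;) + done,
# with the loop's output redirected into the commands file
for f in *.gz; do echo "fastqc -o fastqc_output $f"; done > fastqc_commands

head fastqc_commands    # one fastqc command per input file
wc -l fastqc_commands   # 4 here; 544 with the real data
```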
Use head and wc -l to see what the output is and how much there is of it, respectively.
Next, we need to make the output directory for all the fastqc reports to go into and send the fastqc_commands file to the queue to execute. As in our breseq tutorial, this involves the use of a .slurm file.
Again, while in nano you will edit most of the same lines you edited in the breseq tutorial. Note that most of these lines have additional text to their right. This commented text is present to remind you what goes on each line; leaving it alone will not hurt anything, while removing it may make it harder to remember the purpose of each line.
| As is | To be |
| --- | --- |
| #SBATCH -J jobName | #SBATCH -J multiqc |
| #SBATCH -n 1 | #SBATCH -n 68 |
| #SBATCH -t 12:00:00 | #SBATCH -t 0:20:00 |
The changes to lines 22 and 23 are optional but will give you an idea of what types of emails you can expect from TACC if you choose to use these options. Just be sure that these 2 lines start with a single # symbol after you edit them.
Again, use ctrl-o and ctrl-x to save the file and exit nano.
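With the file saved, a minimal sketch of the remaining two steps looks like this. The .slurm filename is an assumption (use whatever name you saved in nano), and the submission line is left commented since sbatch only exists on TACC.

```shell
# The commands write their reports into fastqc_output, which must exist
# before the job runs, so create it first.
mkdir -p fastqc_output

# TACC-specific submission; the filename "fastqc.slurm" is an assumption:
# sbatch fastqc.slurm
```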
Option 2: idev node
As mentioned above, we do not want to run this on the head node. Make sure you are on an idev node. Please get my attention if you do not know how to do this at this point, or if you don't know how to check whether you are.
If you look at the fastqc -h output, you may notice that there is a -t option to specify multiple threads, and that multiple fastq files can be supplied to a single command.
Using both the * wildcard and 68 threads, analysis of many samples is initiated at the same time, making the output somewhat difficult to read but significantly increasing the speed at which the samples are analyzed.
Hopefully you can recognize that the error message is telling you that the output folder you are trying to write the fastqc reports to doesn't exist yet, and since the command immediately stopped, it's clear this is an error, not just something to make note of. Unfortunately, not all programs are designed to create folders as well as output files. Creating your own empty folder before running a command will never cause a problem, but not creating one can cause problems if the program can't create it for you. As you work with these programs more and more, you will either:
1. get a feel for which type of program you have and generate folders yourself, with occasional errors and loss of time/effort, or
2. get frustrated with the aforementioned losses and just always create your own folders.
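Following that advice, a minimal sketch of the fix looks like this (assuming the reads are in GVA_multiqc; the actual fastqc call is left commented since it requires fastqc on your PATH and a 68-core idev node):

```shell
mkdir -p GVA_multiqc && cd GVA_multiqc   # directory holding the .gz files
mkdir -p fastqc_output                   # create the folder fastqc complained about

# With the folder in place, the 68-thread command will now run:
# fastqc -t 68 -o fastqc_output *.gz
```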
When you ls the fastqc_output directory, you are hit in the face with more files and directories than you have seen in any directory during this class. You will immediately notice that there is a directory and a compressed version of each of those directories, but in order to know things worked correctly, we need to make sure that we have 2 files for each of our 544 samples. The easiest way to do that, in my opinion, is to pipe that output to the wc -l command to count the total number of lines.
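The counting idiom can be demonstrated by faking two samples' worth of fastqc output (the names below are dummies; with the real run each sample contributes one directory plus one .zip):

```shell
# Fake two samples' worth of fastqc output to show the check
mkdir -p fastqc_output/plasmid_1_fastqc fastqc_output/plasmid_2_fastqc
touch fastqc_output/plasmid_1_fastqc.zip fastqc_output/plasmid_2_fastqc.zip

# 2 entries per sample: with the real 544 samples this should print 1088
ls fastqc_output | wc -l
```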
Assuming you see both directories and their associated compressed files, and a total count of 1088, run the 'multiqc -h' command to look through the options and see if you can figure out how to build the command.
In this case (much as with FastQC), while there are a reasonable number of options available, none are truly needed for evaluating fastq files. The only requirement is that you specify where the FastQC output you want summarized in a single report is located. In the example above you changed into the directory containing those results and then told multiqc to look in your current directory for the files. It would have been equivalent to stay in the existing directory and instead use the command "multiqc fastqc_output".
Evaluate MultiQC report
As multiqc_report.html is an html file, you will need to transfer it back to your laptop to view it. Hopefully, by now you have learned how to do this without needing the scp tutorial open to help you. If not, consider getting my attention on Zoom so I can try to clear up any confusion you may be having.
Once you have the report back on your local computer, open it and begin looking through it. Unlike the FastQC report, the MultiQC report comes with detailed help information for each section, accessible via the "?help" icons on the right as well as a video at the top of the page. If anything is not clear and you'd like help clearing it up, let me know.
- Using information in the MultiQC report, modify the bash loop used to create the fastqc_commands file above to create a cutadapt_commands file that could modify all 544 files at once.
- Move over to the fastp tutorial and come back to trim all adapter sequences from all files, then rerun fastqc/multiqc to see what a difference trimming makes on overall quality.
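One possible shape for the modified loop in the first exercise is sketched below. This is not a solution: the adapter sequence and cutadapt options are placeholders that you would replace with what the MultiQC report actually shows, and dummy files stand in for the 544 real ones.

```shell
# Demo setup: dummy reads standing in for the 544 real files
mkdir -p GVA_multiqc_trim && cd GVA_multiqc_trim
touch sample_{1,2}_R{1,2}.fastq.gz

# Same loop shape as fastqc_commands, with cutadapt swapped in.
# ADAPTER is a placeholder; take the real sequence from the MultiQC report.
ADAPTER=AGATCGGAAGAGC
for f in *.gz; do echo "cutadapt -a $ADAPTER -o trimmed_$f $f"; done > cutadapt_commands

wc -l cutadapt_commands   # 4 here; 544 with the real data
```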