Scott's list of linux one-liners
This is a selection of linux one-liners useful in NGS, roughly categorized. Almost none of these will run "as-is" - some are cut straight from shell scripts and so refer to variables, others have file names which you'd need to customize.
Note also that some of the grep strings are hardwired to the UT Illumina sequencer - the introductory "@HWI" of a fastq file is instrument-specific. You can't just use '^@' by itself to grep a fastq because some fraction of the ASCII-encoded quality values start with "@".
These are intended to be useful and general-purpose; you can do almost all of these examples in a more optimized way, in 10 different ways, with other tools, Python, Perl, etc. etc. But that's not the point.
The point for me is to have a set of core commands I know well enough that they will give me the answer I want, correctly, and quickly to a very wide variety of questions.
The core commands - all of these have lots of options, making them powerful and general.
grep uses regular expressions to search for strings in files. Some good online resources to understand it are at robelle.com and at thegeekstuff.com (http://www.thegeekstuff.com/2011/01/advanced-regular-expressions-in-grep-command-with-10-examples-–-part-ii/)
awk is an actual programming language, like perl or python. But it's built into linux, is very general-purpose, and is ideal for parsing & filtering large text files.
uniq simply recognizes when two or more adjacent lines are the same. Some useful switches: -c counts how many identical lines there were and prepends the count to the line itself; -w N only uses the first N characters of each line to decide if they're the same.
sort orders a bunch of lines. It's very fast and memory efficient - sorting a few million lines on a current linux system should only take a few minutes.
There are others I use often, but not as much as these four. Examples:
Fasta/fastq file characterization, reformatting, etc.
Find out if you have some unusually abundant sequences:
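One way to sketch it ("reads.fastq" is a hypothetical file name; the printf line just creates a tiny stand-in so the example runs as-is):

```shell
# tiny stand-in for your real fastq (three reads, two identical)
printf '@r1\nAAAA\n+\nIIII\n@r2\nCCCC\n+\nIIII\n@r3\nAAAA\n+\nIIII\n' > reads.fastq

# sequence lines are lines 2, 6, 10, ... (NR % 4 == 2); rank by abundance
head -100000 reads.fastq \
  | awk 'NR % 4 == 2' \
  | sort | uniq -c | sort -rn | head -20
```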
This will produce a ranked list of the most abundant sequences from among the first 25,000 sequences (remember 4 lines per sequence in a fastq).
It is easy to fool this algorithm, especially if your sequences are long and have a higher error rate at the end, in which case you can adjust the uniq command to compare only, say, the first 30 bp:
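A sketch (uniq's -w is a GNU coreutils switch; the stand-in reads are identical in their first 30 bp but differ after that):

```shell
# two 32 bp stand-in reads that agree on bp 1-30 and differ at bp 31-32
printf '@r1\nAAAACCCCGGGGTTTTAAAACCCCGGGGTTAA\n+\nIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII\n@r2\nAAAACCCCGGGGTTTTAAAACCCCGGGGTTCC\n+\nIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII\n' > reads.fastq

head -100000 reads.fastq \
  | awk 'NR % 4 == 2' \
  | sort | uniq -c -w 30 | sort -rn | head -20
```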
If you are expecting a constant motif somewhere in all your data, it's a good idea to see if it's there. Just add an awk to snip out where the CR should be (bp 12 to 22 in this example):
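Something like this - substr($0, 12, 11) snips out bp 12 through 22 of each sequence line (the stand-in reads and their motif are made up for the example):

```shell
# two stand-in reads carrying the same motif at bp 12-22
printf '@r1\nAAAAAAAAAAAGGGGGCCCCCGTTT\n+\nIIIIIIIIIIIIIIIIIIIIIIIII\n@r2\nCCCCCCCCCCCGGGGGCCCCCGAAA\n+\nIIIIIIIIIIIIIIIIIIIIIIIII\n' > reads.fastq

head -100000 reads.fastq \
  | awk 'NR % 4 == 2 {print substr($0, 12, 11)}' \
  | sort | uniq -c | sort -rn | head
```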
Take two fastq files and turn them into a paired fasta file - drop the quality info and put each read pair one after the other:
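A sketch, assuming bash (for the process substitution) and hypothetical file names r1.fastq/r2.fastq: collapse each fastq record to "header<TAB>sequence", paste the two streams side by side, then print each pair as two fasta records:

```shell
# tiny stand-in pair of files
printf '@a/1\nACGT\n+\nIIII\n' > r1.fastq
printf '@a/2\nTTTT\n+\nIIII\n' > r2.fastq

paste <(awk 'NR % 4 == 1 {h = $1} NR % 4 == 2 {print h "\t" $0}' r1.fastq) \
      <(awk 'NR % 4 == 1 {h = $1} NR % 4 == 2 {print h "\t" $0}' r2.fastq) \
  | awk -F'\t' '{print ">" substr($1, 2) "\n" $2 "\n>" substr($3, 2) "\n" $4}'
```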
If you'd like to filter only on read pairs with quality values no smaller than 10, it's a bit more work:
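A sketch with two awk passes, assuming Phred+33 quality encoding and bash process substitution (file names hypothetical): paste each pair's eight lines onto one line, have the first awk prepend the pair's minimum quality value, and have the second awk filter on it and write fasta:

```shell
# stand-in pairs: pair "a" is clean, pair "b" has one Q2 base ("#")
printf '@a/1\nACGT\n+\nIIII\n@b/1\nACGT\n+\nII#I\n' > r1.fastq
printf '@a/2\nTTTT\n+\nIIII\n@b/2\nTTTT\n+\nIIII\n' > r2.fastq

paste <(paste - - - - < r1.fastq) <(paste - - - - < r2.fastq) \
  | awk -F'\t' 'BEGIN {for (i = 33; i < 127; i++) q[sprintf("%c", i)] = i - 33}
      {min = 1000; s = $4 $8
       for (i = 1; i <= length(s); i++) {v = q[substr(s, i, 1)]; if (v < min) min = v}
       print min "\t" $0}' \
  | awk -F'\t' '$1 >= 10 {print ">" substr($2, 2) "\n" $3 "\n>" substr($6, 2) "\n" $7}'
```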
Note that you could have combined the two awk commands - I don't like to do that because it's harder for me to see what I'm doing, and harder for me to copy it for the next problem I need to solve. Like that handy awk script to convert ASCII quality values to numbers?
Sam file utilities (samtools v0.1.19)
If you have, or need, bam files use samtools to interconvert:
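Something like this (file names are hypothetical; with samtools 0.1.x the -S flag tells view that the input is SAM):

```
samtools view -h aln.bam > aln.sam    # BAM -> SAM, keeping the header (-h)
samtools view -bS aln.sam > new.bam   # SAM -> BAM
```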
If you need to globally change the names of all your references in a sam file, you can use something like this:
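A sketch: an awk program that, as a hypothetical example, strips a leading "chr" from every reference name, in both the @SQ header lines and the alignment columns. With a real BAM you'd run it between two samtools commands; here it's demonstrated on a scrap of SAM text so the example stands alone:

```shell
rename='BEGIN {FS = OFS = "\t"}
        /^@SQ/ {sub(/^SN:chr/, "SN:", $2)}     # header reference names
        !/^@/  {sub(/^chr/, "", $3)            # RNAME
                sub(/^chr/, "", $7)}           # RNEXT (usually "=" or "*")
        {print}'

# real usage (requires samtools and a BAM):
#   samtools view -h in.bam | awk "$rename" | samtools view -bS - > renamed.bam

printf '@SQ\tSN:chr1\tLN:1000\nread1\t0\tchr1\t100\t60\t4M\t*\t0\t0\tACGT\tIIII\n' \
  | awk "$rename"
```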
Of course you can also use samtools reheader to accomplish the same thing. It works more reliably on large .bam files with many contigs.
If you only need to change the name of one reference, I'd use sed and a trick like this to pipe straight from binary to text and back to binary:
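A sketch ("old_contig" and "new_contig" are made-up names); the sed step is demonstrated on a line of SAM text so the example runs anywhere:

```shell
# real usage (requires samtools and a BAM):
#   samtools view -h in.bam | sed 's/old_contig/new_contig/g' | samtools view -bS - > out.bam

# the sed step itself; note a bare substitution also matches substrings,
# so pick a pattern that can't hit any of your other reference names
printf 'r1\t0\told_contig\t10\t60\t4M\t*\t0\t0\tACGT\tIIII\n' \
  | sed 's/old_contig/new_contig/g'
```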
You can put any combination of grep/sed/awk you like in between the two samtools commands; this might be useful if you have a mixed population. You should map all the data at once to a concatenated reference, but then might want to separate the BAM file into two - one for organism A and one for organism B.
You can also use this for things like quickly searching for indels by scanning the CIGAR string, or parsing any of the custom fields your mapper/genotyper has produced.
You can also do this with bamtools and picard.
This seqanswers post has a great two-liner to add a read group to a
BAM file, and contrasts it with a perl program (many more than 2 lines) to do the same thing.
If you'd like to find out the most abundant sequence start site, try this:
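A sketch: tally (reference, position) pairs of mapped reads. With a real file the front of the pipe would be "samtools view in.bam"; here a few hand-written SAM lines, trimmed to the first four columns (all the awk uses), stand in:

```shell
# stand-in alignment lines (reference and position are columns 3 and 4)
printf 'r1\t0\tchr1\t100\nr2\t0\tchr1\t100\nr3\t0\tchr2\t500\n' \
  | head -100000 \
  | awk '$3 != "*" {print $3 "\t" $4}' \
  | sort | uniq -c | sort -rn | head
```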
Note that this samples just the first 100,000 mapped sequences - works fine on an unsorted sam file, but doesn't make sense on a sorted bam/sam file. On a sorted file, you have to process the whole file. Even so, doing this command on a few tens of millions of sequences should take only a few minutes.
If you'd like to join mated sequences in a sam/bam file onto one line (and discard the unmated sequences) so you can process them jointly, this method is pretty robust:
It uses the SAM file's flag field (field 2) and some bitwise operations to confirm that both the read and its mate are mapped, puts the separate alignments into separate files, then uses paste to join them. Since mappers like BWA may output a variable number of fields for each read, I've just used the first four fields. You can also do this with a one-line awk command and do more processing within that command if you want to (left as an exercise to the reader).
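A sketch of that recipe. The bit() helper stands in for gawk-only and(), so it runs under any awk; the relevant flag bits are 0x4 (read unmapped), 0x8 (mate unmapped), and 0x40/0x80 (first/second in pair). A few hand-written SAM lines, trimmed to four columns, stand in for "samtools view in.bam"; it assumes mates appear in the same relative order, as in an unsorted, name-grouped file:

```shell
# p1 is a mapped pair (flags 99/147); s1's mate is unmapped (flag 73) so it is dropped
printf 'p1\t99\tchr1\t100\np1\t147\tchr1\t250\ns1\t73\tchr1\t300\n' > in.sam

awk 'function bit(f, b) {return int(f / b) % 2}
     !/^@/ && bit($2, 4) == 0 && bit($2, 8) == 0 {
       if (bit($2, 64))       print $1, $2, $3, $4 > "first.txt"
       else if (bit($2, 128)) print $1, $2, $3, $4 > "second.txt"
     }' in.sam

paste first.txt second.txt
```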
Exploring vcf files
It's very easy to get some quick stats on VCF files with linux utilities.
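For example, counting the records and the simple-SNP/indel split ("calls.vcf" is a tiny stand-in generated inline; on a real file you'd skip the printf):

```shell
printf '##fileformat=VCFv4.1\n#CHROM\tPOS\tID\tREF\tALT\tQUAL\tFILTER\tINFO\nchr1\t100\t.\tA\tG\t50\tPASS\t.\nchr1\t200\t.\tAT\tA\t60\tPASS\t.\n' > calls.vcf

grep -vc '^#' calls.vcf                       # total variant records
grep -v '^#' calls.vcf \
  | awk 'length($4) == 1 && length($5) == 1' \
  | wc -l                                     # records that are single-base SNPs
```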
Take Illumina CASAVA >= 1.8 reads and add /1 or /2 to the end of the first field of the tag, throwing out the second field of the tag.
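A sketch: in a CASAVA >= 1.8 header the second (space-separated) field starts with the read number, so split it on ":" and append that as /1 or /2 (the instrument/run names here are invented):

```shell
printf '@M001:1:FC1:1:1101:1000:2000 1:N:0:ATCACG\nACGT\n+\nIIII\n' \
  | awk 'NR % 4 == 1 {split($2, a, ":"); print $1 "/" a[1]; next} {print}'
```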
If a FASTQ file has paired-end reads intermingled, you may want to separate them into separate /1 and /2 files. This assumes each /1 read immediately precedes its /2 mate.
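A sketch (file names hypothetical): each pair spans 8 lines, so route the first 4 to one file and the next 4 to the other:

```shell
# stand-in interleaved file: one /1 record followed by its /2 record
printf '@a/1\nAAAA\n+\nIIII\n@a/2\nCCCC\n+\nIIII\n' > interleaved.fastq

awk '{if ((NR - 1) % 8 < 4) print > "r1.fastq"; else print > "r2.fastq"}' interleaved.fastq
```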
After separate operations on R1 and R2 FASTQ files (e.g., independent filtering), the paired-end records may no longer match up, and you need to eliminate the records that no longer have a mate.
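One way to sketch it (bash process substitution; assumes /1 and /2 read-name suffixes and that both files preserve the original read order): build the list of ids present in both files, then keep only those records:

```shell
# stand-ins: r1 kept reads a and b, r2 kept reads b and c; only pair b survives
printf '@a/1\nAAAA\n+\nIIII\n@b/1\nCCCC\n+\nIIII\n' > r1.fastq
printf '@b/2\nGGGG\n+\nIIII\n@c/2\nTTTT\n+\nIIII\n' > r2.fastq

# ids (header minus the "@" and the /1 or /2 suffix) present in both files
comm -12 <(awk 'NR % 4 == 1 {sub(/\/[12]$/, "", $1); print substr($1, 2)}' r1.fastq | sort) \
         <(awk 'NR % 4 == 1 {sub(/\/[12]$/, "", $1); print substr($1, 2)}' r2.fastq | sort) \
  > shared.txt

# keep only the 4-line records whose id is on the shared list
for f in r1.fastq r2.fastq; do
  awk 'NR == FNR {keep[$1] = 1; next}
       FNR % 4 == 1 {id = substr($1, 2); sub(/\/[12]$/, "", id); p = (id in keep)}
       p' shared.txt "$f" > "matched.$f"
done
```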