LFQC: a lossless compression algorithm for FASTQ files

Bioinformatics, 31(20), 2015, 3276-3281
doi: 10.1093/bioinformatics/btv384
Advance Access Publication Date: 20 June 2015
Original Paper, Sequence analysis

Marius Nicolae, Sudipta Pathak and Sanguthevar Rajasekaran*
Department of Computer Science and Engineering, University of Connecticut, Storrs, CT 06269-4155, USA
*To whom correspondence should be addressed.
Associate Editor: John Hancock
Received on April 24, 2015; revised on May 19, 2015; accepted on June 6, 2015

Abstract

Motivation: Next Generation Sequencing (NGS) technologies have revolutionized genomic research by reducing the cost of whole genome sequencing. One of the biggest challenges posed by modern sequencing technology is economic storage of NGS data. Storing raw data is infeasible because of its enormous size and high redundancy. In this article, we address the problem of storage and transmission of large FASTQ files using innovative compression techniques.

Results: We introduce a new lossless non-reference-based FASTQ compression algorithm named Lossless FASTQ Compressor. We have compared our algorithm with other state of the art big data compression algorithms, namely gzip, bzip2, fastqz (Bonfield and Mahoney, 2013), fqzcomp (Bonfield and Mahoney, 2013), Quip (Jones et al., 2012) and DSRC2 (Roguski and Deorowicz, 2014). This comparison reveals that our algorithm achieves better compression ratios on LS454 and SOLiD datasets.

Availability and implementation: The implementations are freely available for non-commercial purposes. They can be downloaded from http://engr.uconn.edu/rajasek/lfqc-v1.1.zip.

Contact: rajasek@engr.uconn.edu

1 Introduction

With the advancement of Next Generation Sequencing (NGS), researchers have started facing the problem of storing vast amounts of data. Petabytes of data produced by modern sequencing technologies demand huge infrastructure to store them on disk for current or future analysis. The rate of increase of NGS data volume has quickly outpaced the rate of decrease in hardware cost, and as a consequence scientists have realized the need for new, sophisticated software for economic storage of raw data. Moreover, transmission of the gigantic files produced by sequencing techniques costs a huge amount of time, causing significant delays in research and analysis. Another fact observed by scientists is that NGS data files contain huge redundancies and can be efficiently compressed before transmission.

Numerous general purpose compression algorithms can be found in the literature. However, these algorithms have been shown to perform poorly on sequence data. Hence, time and energy have been invested to develop novel domain-specific algorithms for compression of big biological data files. As revealed in Giancarlo et al. (2009), compression of FASTQ files is an important area of research in computational biology.

NGS data are generally stored in FASTQ format. FASTQ files consist of millions to billions of records, and each record has the following four lines:
Line 1 stores an identifier. It begins with @ followed by a sequence identifier and an optional description.
Line 2 represents the read sequence. This is essentially a sequence of letters.
Line 3 begins with the + character, which is sometimes followed by the same sequence identifier as line 1.
Line 4 represents the quality scores. This is a string of characters. Line 2 and line 4 must be of equal lengths.
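To make the record layout concrete, here is a minimal sketch in Python (the function name and the returned streams are ours, not part of LFQC) that reads a FASTQ file and collects the identifier, sequence and quality-score fields of each record into separate streams:

def split_fastq(path):
    # Read four lines per record and keep the fields in separate streams.
    ids, seqs, quals = [], [], []
    with open(path) as f:
        while True:
            header = f.readline().rstrip("\n")
            if not header:                       # end of file
                break
            seq = f.readline().rstrip("\n")      # line 2: the read
            plus = f.readline().rstrip("\n")     # line 3: '+' separator line
            qual = f.readline().rstrip("\n")     # line 4: quality scores
            assert header.startswith("@") and plus.startswith("+")
            assert len(seq) == len(qual)         # lines 2 and 4 have equal length
            ids.append(header)
            seqs.append(seq)
            quals.append(qual)
    return ids, seqs, quals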
The sequencing instruments produce DNA sequences and a probability value P for each base in the sequence. This value P is the probability of the base being incorrectly called. These error probabilities are then quantized to integers. In the algorithm of Ewing and Green (1998), the error probabilities are transformed into PHRED quality scores and stored. The transformation is given by Q = -10 log10(P). The quality scores Q are then truncated and fitted into the range 0 to 93. Each integer is then incremented by 33 so that the values range from 33 to 126. This is done to be able to print the scores in the form of printable ASCII characters. This format is known as the Sanger format (Cock et al., 2010).
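As a concrete example of this transformation (a small sketch, not part of LFQC itself):

import math

def sanger_char(p):
    # PHRED score Q = -10 * log10(P), where P is the base-call error probability.
    q = round(-10 * math.log10(p))
    q = max(0, min(q, 93))     # truncate/fit into the range 0..93
    return chr(q + 33)         # shift by 33 -> printable ASCII in 33..126

# Example: an error probability of 0.001 gives Q = 30, printed as '?'.
print(sanger_char(0.001))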

Compression of nucleotide sequences has been an interesting problem for a long time. Cox et al. (2012) apply the Burrows-Wheeler Transform (BWT) for compression of genomic sequences. GReEn (Pinho et al., 2012) is a reference-based sequence compression method offering a very good compression ratio. (Compression ratio refers to the ratio of the original data size to the compressed data size.) Compressing the sequences along with quality scores and identifiers is a very different problem. Tembe et al. (2010) attached the Q-scores to the corresponding DNA bases to generate new symbols out of each base/Q-score pair. They encoded each such distinct symbol using Huffman encoding (Huffman, 1952). Deorowicz and Grabowski (2011) divided the quality scores into three sorts of quality streams:
1. quasi-random scores with mild dependence on quality score positions;
2. quasi-random quality scores with strings ending with several # characters;
3. quality scores with strong local correlations within individual records.
To represent case 2 they use a bit flag. Any string with that specific bit flag is processed to remove all trailing #. They divide the quality scores into individual blocks and apply Huffman coding. Tembe et al. (2010) and Deorowicz and Grabowski (2011) also show that general purpose compression algorithms do not perform well and that domain-specific algorithms are indeed required for efficient compression of FASTQ files. In their papers they demonstrate that significant improvements in compression ratio can be achieved using domain-specific algorithms compared with bzip2 and gzip.

The literature on FASTQ compression can be divided into two categories, namely lossless and lossy. A lossless compression scheme is one where we preserve the entire data in the compressed file. On the contrary, lossy compression techniques allow some of the less important components of the data to be lost during compression. Kozanitis et al. (2011) perform randomized rounding to achieve lossy encoding. Asnani et al. (2012) introduce a lossy compression technique for quality score encoding. Wan et al. (2012) proposed both lossy and lossless transformations for sequence compression and encoding. Hach et al. (2012) presented a boosting scheme which reorganizes the reads so as to achieve a higher compression speed and compression rate, independent of the compression algorithm in use.

When it comes to medical data compression, it is very difficult to identify which components are unimportant. Hence, many researchers believe that lossless compression techniques are particularly needed for biological/medical data. Quip (Jones et al., 2012) is one such lossless compression tool. It separately compresses the identifiers, sequences and quality scores. Quip makes use of Markov chains for encoding sequences and quality scores. DSRC (Deorowicz and Grabowski, 2011) is also considered a state of the art lossless compression algorithm, and we compare against it in this article. Bonfield and Mahoney (2013) have recently come up with a set of algorithms (named fqzcomp and fastqz) to compress FASTQ files. They perform identifier compression by storing the difference between the current identifier and the previous identifier.
Sequence compression is performed using a set of techniques including base-pair compaction, encoding and an order-k model. Apart from hashing, they have also used a technique for encoding quality values by prediction.

Kozanitis et al. (2011) and Fritz et al. (2011) have contributed a reference-based compression technique. The idea is to align the reads against a reference genome. This approach stores the genomic locations and any mismatches instead of storing the sequences. A major advantage of reference-based compression is that as the read length increases it produces much better compression ratios compared with non-reference-based algorithms. In spite of providing excellent sequence compression ratios, reference-based compression techniques have many disadvantages. The success of these algorithms depends solely on the availability of a good reference genome database, which may not always be ensured. Creation of such databases is itself a challenging problem. Also, reference-based compression techniques are not self-contained. Decompression requires the exact reference genome to match the genomic locations and extract the reads. Hence, it is important to preserve the reference genome.

In this article, we present a lossless non-reference-based FASTQ compression algorithm called LFQC (Lossless FASTQ Compressor) that can elegantly run on commodity machines. The algorithm is provisioned to run in in-core as well as out-of-core settings. The rest of the paper is organized as follows: Section 2 describes the FASTQ structure and provides details on the steps involved in compressing FASTQ files. Section 3 supplies details on the decompression algorithms. In Section 4, we compare LFQC with other state of the art algorithms and provide performance results. In Sections 5 and 6, we provide future directions and conclusions, respectively.

2 FASTQ files compression

A FASTQ file consists of records where each record has four lines. An example record is shown below.

@SRR013951:81530PT5AAXX:8:1:14:518
AGTTGATCCACCTGAAGAATTAGGA
+
I1IFI0IEI+$ii5iiicbc(1b+0

Several FASTQ compression algorithms found in the literature compress the four fields of the records separately. We also employ this approach in our algorithm LFQC. In LFQC, each stream goes through a preprocessing algorithm and then it is compressed by a regular data compressor. The data compressors we use are zpaq v7.02 (http://mattmahoney.net/dc/zpaq.html) and lpaq8 (http://cs.fit.edu/~mmahoney/compression/#lpaq). Which compressor is used for each stream is discussed in the following sections.

2.1 Compression of the identifier field

The identifier field is generated very systematically. This property can be utilized to compress this field. We provide an example of an Illumina-generated identifier below.

HWUSI-EAS687_61DAJ:8:1:1055:3384/1

The identifier consists of the following set of fields: instrument name, flow-cell lane, tile number within flow-cell lane, x-coordinate of the cluster within the tile, y-coordinate of the cluster within the tile, index number for multiplexed sample, and a field indicating whether or not the sequence is a member of a pair. Most of the time the identifier is preceded by the dataset name. After studying a considerable number of FASTQ files we found that the identifier can be divided into four types of tokens.

Type 1: Tokens having data that do not change from one record to the next.
Type 2: Tokens having the same data value over a set of consecutive records.
Type 3: Tokens having integer values that are monotonically incremented or decremented over consecutive records.
Type 4: Tokens having data not belonging to any of the above mentioned types.

Our algorithm for compressing the identifiers works as follows. First, we partition each identifier into tokens. Let I be any identifier and let I_{i,j} represent the substring of I from the ith position through the jth position. We scan through each identifier I and extract I_{i,j} as a token if I_{i-1} and I_{j+1} belong to the following set C of characters: dot (.), space ( ), underscore (_), hyphen (-), slash (/), equal sign (=) or colon (:). We call these symbols Pivot Symbols. The following example shows how an identifier is divided into tokens. Let the identifier be @SRR007215:1135 HWUSI-EAS687_61DAJ:8:1:1055:3384/1. We split the identifier into the following tokens:
Token 1: @SRR007215
Token 2: 1135
Token 3: HWUSI
Token 4: EAS687
Token 5: 61DAJ
Token 6: 8
Token 7: 1
Token 8: 1055
Token 9: 3384
Token 10: 1
This procedure of splitting the identifier is called tokenization. Tokenization also produces a regular expression formed of place holders (essentially the token numbers) and the characters of the set C mentioned above. This regular expression helps us rebuild the identifier file during decompression. The regular expression for the above example is T1:T2 T3-T4_T5:T6:T7:T8:T9/T10, where Ti represents the ith token and the characters between the placeholders are the pivot symbols recorded during tokenization. The decompression algorithm replaces each Ti with the respective value and rebuilds the identifier. Pseudocode for the tokenization algorithm is given in Algorithm 1.

Let the number of tokens present in each identifier be t, and let the tokens present in identifier I^q be T^q_1, T^q_2, T^q_3, ..., T^q_t. Let the identifiers from all the input records be I^1, I^2, ..., I^N. Let T_i stand for the set of all ith tokens from all the identifiers, for 1 <= i <= t. Each T_i is called a token set. We first construct these token sets. Following this, we have to choose an appropriate technique to compress each of the token sets. We let s_i denote the compressed T_i, for 1 <= i <= t. The compression techniques we have employed are listed below.

Run-Length Encoding: Token sets with alphanumeric values are compressed using run-length encoding. If run-length encoding does not reduce the size of the token set below 90% of the original, the token set is left uncompressed. Note that for token sets with constant values throughout the dataset, this compression technique will reduce the set to two values: the constant value followed by the count.

Incremental Encoding: Token sets with integer values are compressed by storing the differences between consecutive tokens. If this method does not reduce the size of the token set below 90% of the original, the token set is left uncompressed.

If a token set is not transformed by any of the above methods, then we take each token and reverse it (read it from right to left). We have observed that this tends to improve the compression ratio of the context mixing algorithm applied downstream.

Algorithm 1
1: procedure TOKENIZATION
2:   C <- {dot, space, underscore, hyphen, slash, equal sign, colon}
3:   A is the set of all characters
4:   for each identifier I in the input do
5:     for 1 <= i < j <= n (where n = |I|) do
6:       if I_{i-1} is in C and I_{j+1} is in C then
7:         T_k <- I_{i,j}, where T_k is the kth token and I_{i,j}
8:           represents the substring of I from position i
9:           through position j
10:      end if
11:    end for
12:  end for
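As a concrete illustration of the tokenization step and of rebuilding an identifier from its template, here is a minimal sketch in Python; it is not the authors' Ruby implementation, and the function names are ours:

import re

# Pivot symbols: dot, space, underscore, hyphen, slash, equal sign, colon.
PIVOTS = r"([. _\-/=:])"

def tokenize(identifier):
    # Split on pivot symbols, keeping the pivots so the identifier can be
    # rebuilt later. Returns the tokens and a template of placeholders T1, T2, ...
    parts = re.split(PIVOTS, identifier)
    tokens = parts[0::2]
    template = "".join(p if k % 2 else "T%d" % (k // 2 + 1)
                       for k, p in enumerate(parts))
    return tokens, template

def rebuild(tokens, template):
    # Inverse operation used at decompression time: substitute each
    # placeholder Tk with the kth decoded token.
    return re.sub(r"T(\d+)", lambda m: tokens[int(m.group(1)) - 1], template)

# For the identifier used as an example in the text, tokenize() yields the
# ten tokens listed above and the template "T1:T2 T3-T4_T5:T6:T7:T8:T9/T10".
ident = "@SRR007215:1135 HWUSI-EAS687_61DAJ:8:1:1055:3384/1"
tokens, template = tokenize(ident)
assert rebuild(tokens, template) == ident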
After the above transformations are applied to each token set, a standard context mixing compression algorithm is applied to the transformed strings. In particular, we apply zpaq with parameters -method 5 -threads 4. As discussed above, the Identifier Compression Algorithm also generates a regular expression used for reconstruction of the original text after decompression of the individual compressed token set texts. We provide the pseudocode for the Identifier Compression Algorithm and for Incremental Encoding in Algorithm 2 and Algorithm 3, respectively. The Identifier Compression Algorithm also records the encoding algorithm used for the compression of each T_i, for 1 <= i <= t. The Identifier Compression Algorithm stores the compressed token sets s_i in individual files. The regular expression given by RegEx is stored in a separate file too.

Algorithm 2
1: procedure IDENTIFIER COMPRESSION
2:   Form the token sets using the algorithm TOKENIZATION.
3:   Let RegEx be an empty regular expression.
4:   for each token set T_i do
5:     Compress T_i using an appropriate encoding algorithm as mentioned above.
6:       Let the encoded text be s_i.
7:     Append s_i to RegEx as a place holder for token set T_i.
8:     Append C_j to RegEx, where C_j is the set of pivot symbols for T_i.
9:   end for

Algorithm 3
1: procedure INCREMENTAL ENCODING
2:   for 1 <= i <= t do
3:     Let the tokens in T_i be T_i^1, T_i^2, ..., T_i^N.
4:     Let s_i^1 = T_i^1, and
5:     for 2 <= j <= N do
6:       s_i^j = T_i^j - T_i^{j-1}
7:     end for
8:     s_i is the compressed text.
9:   end for
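As a small worked example of the incremental (delta) encoding of Algorithm 3 and of its inverse (Algorithm 5 below), the following sketch (illustrative only, not the authors' code) encodes an integer token set and decodes it back:

def incremental_encode(tokens):
    # Keep the first value; every later entry stores the difference from
    # its predecessor (s_i^j = T_i^j - T_i^{j-1}).
    values = [int(t) for t in tokens]
    return [values[0]] + [values[j] - values[j - 1] for j in range(1, len(values))]

def incremental_decode(deltas):
    # Running sum restores the original token values.
    values = [deltas[0]]
    for d in deltas[1:]:
        values.append(values[-1] + d)
    return [str(v) for v in values]

# A monotonically increasing token set such as ["1055", "1056", "1060"]
# is stored as [1055, 1, 4].
assert incremental_decode(incremental_encode(["1055", "1056", "1060"])) == ["1055", "1056", "1060"]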

2.2 Quality score compression

In this section we consider the problem of compressing quality scores. There is a quality sequence for every read, and its length is the same as that of the read: there is a quality score for every character in the read. Let Q = q_1, q_2, q_3, ..., q_m be any quality score sequence. A careful study of quality sequences reveals that there exists a strong correlation between any quality score q_i and some number of preceding quality scores. The correlation gradually decreases as we move away from q_i. The rate of change of this correlation varies from one file to another almost randomly. This nature of FASTQ quality scores makes them challenging to encode efficiently for compression. We have also observed that many sequences end with quality score 2, which is a known issue with the Illumina base caller. The quality score compression can be slightly improved using a simple technique. If the read lengths are the same, we get rid of all trailing # characters. The original sequence can easily be reconstructed by appending # characters up to the original read length. For many of the FASTQ files we observe that no quality score occurs after a #. In these cases, removing the # characters improves the compression ratio, although only slightly. Taking all this into account, we have decided to encode quality scores using a context mixing algorithm. We have used zpaq with parameters -method 5 -threads 4.

2.3 Sequence compression

We have used the same algorithm as for the quality scores for sequence compression. The FASTQ sequences are strings of five possible characters, namely A, C, G, T and N. If a sequence contains Ns for unknown nucleotides, these usually all have the lowest quality score (which is encoded by '!' with ASCII code 33). Therefore, we could remove the N characters from the sequences and replace their quality scores by a character not used for other quality scores. However, we empirically determined that this did not improve the results by much while it increased the running time, so we did not make use of this observation. Some datasets have color space encoded reads (SOLiD). A color space read starts with a nucleotide (A, C, G, T) followed by numbers 0-3. The numbers encode nucleotides depending on their relationship to the previous nucleotide. For example, 0 means the nucleotide is identical to the previous one, 3 means the nucleotide is the complement of the previous one, etc. We remove all the end-of-line characters from the sequences and keep the ends of line with the quality scores, since the length of the quality score sequence is the same as the length of the read. Next we apply the lpaq8 compression algorithm with parameter 9, meaning highest compression. The algorithm runs single threaded.

3 Decompression

Decompression algorithms are the inverse of the way the compression is performed. We use two distinct algorithms for decompression, one for identifier decompression and the other for sequence and quality score decompression.

3.1 Identifier decompression

In decompression, each compressed text s_i is decoded using the corresponding algorithm. Details are in Algorithm 5. RegEx provides the reconstruction formula for each identifier. Each place holder t_i in the RegEx is replaced with the corresponding decompressed token set T_i. We provide the identifier decompression algorithm in Algorithm 4 and the incremental decoding algorithm in Algorithm 5.
Algorithm 4
1: procedure IDENTIFIER DECOMPRESSION
2:   for each compressed text s_i do
3:     Apply the appropriate decoding algorithm to get the decompressed text T_i.
4:   end for
5:   Replace each place holder t_i in RegEx with the corresponding decompressed text T_i.

Algorithm 5
1: procedure INCREMENTAL DECODING
2:   Parse s_i.
3:   T_i^1 = s_i^1
4:   T_i^{j+1} = T_i^j + s_i^{j+1}, for j = 1, 2, 3, ..., N-1,
5:   where N is the total number of records.

3.2 Quality score and sequence decompression algorithm

As we make use of the same algorithm for the compression of sequences and quality scores, we combine them into a single algorithm. The decompression of sequences and quality scores is just the reverse of the algorithm discussed for compression, and it is straightforward.

4 Experimental results

We have compared the performance of our algorithm LFQC with other state of the art algorithms and report the findings in this section. We have compared our algorithm with general purpose compression algorithms like gzip and bzip2 and also with a set of domain-specific algorithms, namely DSRC2, fqzcomp, fastqz v1.5, SeqSqueeze1 and Quip. All the algorithms were executed with the recommended options for highest compression, whenever available. The algorithms and the compression options are summarized in Table 1.

It is important to note that our algorithm does not accept any input other than a raw FASTQ file. Hence, we do not bring into the comparison algorithms that accept a reference genome or perform any sorting or alignment using a reference genome. Some of the algorithms assume that the FASTQ files are stored in Unicode format where each character takes 2 bytes on disk. We calculate all the compression ratios based on the assumption that each character on disk occupies 1 byte. In other words, we work on the raw FASTQ files without any change and without taking any extra information from outside. Ours is a lossless compression algorithm and performs an overall compression of the entire FASTQ file. To perform a fair comparison, we have ignored all lossy compression techniques and algorithms that compress only the sequence or quality score fields.

The datasets used in our experiments are all publicly available. We downloaded them from the 1000 Genomes Project. The datasets chosen are the ones used by Deorowicz and Grabowski (2011). Our aim is to develop an elegant compression algorithm that can run on commodity machines. All the experiments are performed on hardware with the following specifications: Intel Core i7 processor and 8 GB of primary memory.

Table 1. Compression algorithms used in the comparison

DSRC2 (version 2.0 RC, http://sun.aei.polsl.pl/dsrc/): options "c -m2"
fqzcomp (version 4.6, http://sourceforge.net/projects/fqzcomp/): options "-n2 -s7+ -b -q3" for SOLEXA, "-n1 -s7+ -b -q2" for LS454, "-S -n2 -s5+ -q1" for SOLiD
SeqSqueeze1 (http://sourceforge.net/p/ieetaseqsqueeze/code-0/14/): options "-h 4 1/5 -hs 5 -b 1:3 -b 1:7 -b 1:12 1/10 -bg 0.9 -s 1:2 1/5 -s 1:3 1/10 -ss 10 -sg 0.95"
Quip (version 1.1.8, http://homes.cs.washington.edu/~dcjones/quip/)
Fastqz (version 1.5, http://mattmahoney.net/dc/fastqz/): option "c"
gzip (version 1.6, http://gzip.org): option "-9"
bzip2 (version 1.0.6, http://bzip.org): option "-9"

Table 2. Performance comparison (compression ratio)

Type    Dataset       Uncompressed size (bytes)   LFQC    DSRC2   FQZComp  SeqSqueeze1  Quip   Fastqz  Gzip   Bzip2
LS454   SRR001471     216482689                   5.246   4.839   5.024    5.156        4.471  -       3.238  3.935
LS454   SRR003177     1196625057                  5.106   4.600   4.777    4.900        4.459  -       3.168  3.814
LS454   SRR003186     886007841                   4.643   4.343   4.493    4.633        4.175  -       2.977  3.590
SOLiD   SRR007215_1   695631390                   7.262   6.764   -        7.072        -      -       4.188  5.207
SOLiD   SRR010637     2086802888                  5.587   5.316   -        5.569        -      -       3.481  4.253
SOLEXA  SRR013951_2   3190655731                  3.516   3.395   3.571    3.467        3.489  3.561   2.406  2.807
SOLEXA  SRR027520_1   4808574655                  4.391   4.333*  4.553    4.447        4.483  4.512   2.878  3.419
SOLEXA  SRR027520_2   4808574655                  4.299   4.236*  4.453    4.354        4.384  4.404   2.808  3.335

The compression ratio is defined as the ratio of the original file size to the compressed file size. FQZComp, Quip and Fastqz did not work for color space encoded reads (SOLiD datasets). Fastqz also did not work for datasets where the read length is variable (LS454 datasets). For such cases the table contains a -. DSRC2 was very slow to decompress two of the SOLEXA datasets (did not finish within 24 h). Those datasets are marked with a *. The bold values indicate the best compression ratio/compression speed/decompression speed, respectively.

Table 3. Compression speed in MiB/s (millions of bytes per second)

Type    Dataset       Uncompressed size (bytes)   LFQC    DSRC2    FQZComp  SeqSqueeze1  Quip    Fastqz  Gzip    Bzip2
LS454   SRR001471     216482689                   0.759   13.102   15.702   1.138        28.000  -       4.267   12.641
LS454   SRR003177     1196625057                  1.053   30.369   12.589   1.078        25.532  -       3.905   12.312
LS454   SRR003186     886007841                   1.080   25.217   15.678   1.077        29.576  -       3.902   11.880
SOLiD   SRR007215_1   695631390                   1.395   29.463   -        2.547        -       -       14.606  9.134
SOLiD   SRR010637     2086802888                  1.638   32.483   -        2.110        -       -       11.349  8.389
SOLEXA  SRR013951_2   3190655731                  1.138   30.084   12.121   1.120        15.367  4.681   6.236   7.708
SOLEXA  SRR027520_1   4808574655                  1.220   33.443*  14.383   1.307        16.661  5.633   6.148   8.735
SOLEXA  SRR027520_2   4808574655                  1.189   33.166*  14.379   1.290        16.267  5.650   6.225   8.606

FQZComp, Quip and Fastqz did not work for color space encoded reads (SOLiD datasets). Fastqz also did not work for datasets where the read length is variable (LS454 datasets). For such cases the table contains a -. DSRC2 was very slow to decompress two of the SOLEXA datasets (did not finish within 24 h). Those datasets are marked with *. The bold values indicate the best compression ratio/compression speed/decompression speed, respectively.

Table 2 shows a comparison of the performance of LFQC relative to some of the state of the art algorithms. Our algorithm runs in only one mode, aiming to achieve the best compression ratio in real time. The best results are boldfaced in the table. Table 3 shows the compression rates of the algorithms, measured in millions of bytes per second (MiB/s). The decompression rates are shown in Table 4. FQZComp, Quip and Fastqz did not work for color space encoded reads (SOLiD datasets).
Fastqz also did not work for datasets where the read length is variable (LS454 datasets). For such cases the tables contain a -. DSRC2 was very slow to decompress two of the SOLEXA datasets (it did not finish within 24 h). This is possibly due to requiring more memory than the 7 GB available on our machine. Those datasets are marked with * in Tables 2 and 3 and with TL in Table 4.

As far as compression ratios are concerned, we are the winner on the LS454 and SOLiD datasets. The key to the success of our compression algorithm is that we perform well when it comes to identifier compression. Splitting the identifier enables us to apply run-length encoding on the tokens. We capture the parts that remain unchanged over the entire set of identifiers and maintain only a single copy with a count. We call this type of compression vertical compression.

Table 4. Decompression speed in MiB/s (millions of bytes per second)

Type    Dataset       Uncompressed size (bytes)   LFQC    DSRC2   FQZComp  SeqSqueeze1  Quip   Fastqz  Gzip    Bzip2
LS454   SRR001471     216482689                   0.717   6.907   8.134    1.057        9.141  -       15.433  10.209
LS454   SRR003177     1196625057                  1.173   9.670   8.537    1.058        9.009  -       14.680  10.051
LS454   SRR003186     886007841                   1.217   9.608   8.492    1.056        8.863  -       12.240  9.636
SOLiD   SRR007215_1   695631390                   1.004   9.150   -        2.248        -      -       15.284  10.129
SOLiD   SRR010637     2086802888                  1.254   10.138  -        1.978        -      -       14.919  10.239
SOLEXA  SRR013951_2   3190655731                  1.196   1.197   7.787    1.111        8.682  3.579   14.318  8.933
SOLEXA  SRR027520_1   4808574655                  1.253   TL      8.753    1.305        8.364  4.371   13.531  9.442
SOLEXA  SRR027520_2   4808574655                  1.067   TL      9.073    1.311        8.517  4.096   14.669  9.216

FQZComp, Quip and Fastqz did not work for color space encoded reads (SOLiD datasets). Fastqz also did not work for datasets where the read length is variable (LS454 datasets). For such cases the table contains a -. DSRC2 was very slow to decompress two of the SOLEXA datasets (did not finish within 24 h). Those datasets are indicated by TL. The bold values indicate the best compression ratio/compression speed/decompression speed, respectively.

For quality scores and sequences we get good compression ratios by using very powerful context mixing compression algorithms like lpaq8 and zpaq. In terms of compression rate, DSRC2 and Quip are the winners. For decompression rate, gzip is the fastest. At the time of this writing, our algorithm is implemented in Ruby, but a C/C++ implementation would speed up the algorithm to some extent. One bottleneck of our algorithm is in the back-end compressors lpaq8 and zpaq. The experimental results support the trade-off between compression ratio and compression/decompression speed mentioned in Bonfield and Mahoney (2013).

Funding

This research has been supported in part by the following grants: NIH grant R01-LM010101 and NSF grant 1447711.

Conflict of Interest: none declared.

References

Asnani,H. et al. (2012) Lossy compression of quality values via rate distortion theory.
Bonfield,J.K. and Mahoney,M.V. (2013) Compression of FASTQ and SAM format sequencing data. PLoS One, 8, e59190.
Cock,P.J. et al. (2010) The Sanger FASTQ file format for sequences with quality scores, and the Solexa/Illumina FASTQ variants. Nucleic Acids Res., 38, 1767-1771.
Cox,A.J. et al. (2012) Large-scale compression of genomic sequence databases with the Burrows-Wheeler transform. Bioinformatics, 28, 1415-1419.
Deorowicz,S. and Grabowski,S. (2011) Compression of DNA sequence reads in FASTQ format. Bioinformatics, 27, 860-862.
Ewing,B. and Green,P. (1998) Base-calling of automated sequencer traces using phred. II. Error probabilities. Genome Res., 8, 186-194.
Fritz,M.H.-Y. et al. (2011) Efficient storage of high throughput DNA sequencing data using reference-based compression. Genome Res., 21, 734-740.
Giancarlo,R. et al. (2009) Textual data compression in computational biology: a synopsis. Bioinformatics, 25, 1575-1586.
Hach,F. et al. (2012) SCALCE: boosting sequence compression algorithms using locally consistent encoding. Bioinformatics, 28, 3051-3057.
Huffman,D.A. (1952) A method for the construction of minimum-redundancy codes. Proc. IRE, 40, 1098-1101.
Jones,D.C. et al. (2012) Compression of next-generation sequencing reads aided by highly efficient de novo assembly. Nucleic Acids Res., 40, e171.
Kozanitis,C. et al. (2011) Compressing genomic sequence fragments using SlimGene. J. Comput. Biol., 18, 401-413.
Pinho,A.J. et al. (2012) GReEn: a tool for efficient compression of genome resequencing data. Nucleic Acids Res., 40, e27.
Roguski,Ł. and Deorowicz,S. (2014) DSRC 2: industry-oriented compression of FASTQ files. Bioinformatics, 30, 2213-2215.
Tembe,W. et al. (2010) G-SQZ: compact encoding of genomic sequence and quality data. Bioinformatics, 26, 2192-2194.
Wan,R. et al. (2012) Transformations for the compression of FASTQ quality scores of next-generation sequencing data. Bioinformatics, 28, 628-635.