Abstract
Background
Copy number variation (CNV) is an important type of structural variation (SV) in the human genome. Various studies have shown that CNVs are associated with complex diseases. Traditional CNV detection methods such as fluorescence in situ hybridization (FISH) and array comparative genomic hybridization (aCGH) suffer from low resolution. Next generation sequencing (NGS) promises higher-resolution detection of CNVs, and several methods have recently been proposed to realize this promise. However, the performance of these methods is not robust under some conditions; e.g., some of them may fail to detect short CNVs. There is thus a strong demand for reliable detection of CNVs from high-resolution NGS data.
Results
A novel and robust method to detect CNVs from short sequencing reads is proposed in this study. CNV detection is modeled as a changepoint detection problem on the read depth (RD) signal derived from the NGS data, which is fitted with a total variation (TV) penalized least squares model. The performance (e.g., sensitivity and specificity) of the proposed approach is evaluated by comparison with several recently published methods on both simulated and real data from the 1000 Genomes Project.
Conclusion
The experimental results show that both the true positive rate and the false positive rate of the proposed detection method do not change significantly for CNVs with different copy numbers and lengths, when compared with several existing methods. Therefore, our proposed approach provides a more reliable detection of CNVs than the existing methods.
Background
Copy number variation (CNV) [1] has been discovered widely in human and other mammalian genomes. It has been reported that CNVs are present in human populations at high frequency (more than 10 percent) [2]. Various studies have shown that CNVs are associated with Mendelian diseases or complex diseases such as autism [3], schizophrenia [4], cancer [5], Alzheimer's disease [6], and osteoporosis [7].
CNV is commonly regarded as a type of structural variation (SV), and involves a duplication or deletion of a DNA segment larger than 1 kbp [8]. The mechanism by which CNVs affect phenotypes is still under study. A widely accepted explanation is that, if a CNV region harbors a dosage-sensitive segment, the gene expression level varies, which consequently leads to an abnormal phenotype [9].
Before the emergence of next generation sequencing (NGS) technologies, methods such as fluorescence in situ hybridization (FISH) and array comparative genomic hybridization (aCGH) were employed to detect CNVs. The main problem of these methods is their relatively low resolution (about 5–10 Mbp for FISH, and 10–25 kbp with 1 million probes for aCGH [10]). With the rapid decrease in the cost of NGS, high coverage sequencing became feasible, offering high-resolution CNV detection. After Korbel et al.'s work on detecting CNVs from NGS data [11,12], many CNV detection methods have been developed [10,13-23]. However, as shown in our previous study [24], the performance of the existing methods is not robust; e.g., CNVnator degenerates at small single copy lengths, and readDepth degenerates at low copy number variation (see the simulation). New methods are therefore needed for reliable detection of CNVs.
Methodologically, there are two main ways to detect CNVs from NGS data [25]: paired-end mapping (PEM) and depth of coverage (DOC) based methods. PEM based methods are commonly used to detect insertions, deletions, inversions, etc. [26]. After the paired ends from the test genome are aligned to the reference genome, the span between the paired ends of the test genome is compared with that of the reference genome; a significant difference between the two spans implies the presence of a deletion or insertion event. There are several DOC based methods, such as CNV-seq [14], FREEC [20], readDepth [21], CNVnator [22], SegSeq [13], and event-wise testing (EWT) [10]. The principle of DOC based methods is that the short reads are sampled randomly along the genome, so when the short reads are aligned to the reference genome, the density of the short reads is locally proportional to the copy number [10]. Based on the probability distribution of the read depth (RD) signal, a statistical hypothesis test determines whether a CNV exists. Specifically, the procedure of DOC based methods is: aligned reads are first piled up, and then the read counts are calculated across sliding [14] or non-overlapping windows (or bins) [10,13,20,22], yielding the so-called RD signal. The ratio of the read counts (case vs. matched control) is used by CNV-seq [14] and SegSeq [13], so further normalization is not required [18]. Otherwise, normalization such as GC-content [10,22] and mappability [21] correction is required. The normalized read depth signal (or the ratio) is then analyzed with either of the following procedures: (1) it is segmented or partitioned by changepoint detection algorithms, followed by a merge procedure [13] (e.g., readDepth [21] and CNVnator [22] utilize circular binary segmentation (CBS) and mean shift, respectively); or (2) it is tested by a statistical hypothesis test at each window (e.g., event-wise testing (EWT) [10]) or over several consecutive windows (e.g., CNV-seq [14]).
We propose a total variation (TV) penalized least squares model to fit the RD signal, based on which CNVs are detected with a statistical test. We name the method CNV-TV. CNV-TV assumes that a plateau/basin in the RD signal corresponds to a duplication/deletion event (i.e., a CNV). A piecewise constant function is fitted to the RD signal by the TV penalized least squares, from which the CNVs are detected. It is often cumbersome to tune the penalty parameter in the model, which controls the tradeoff between sensitivity and specificity; therefore, the Schwarz information criterion (SIC) [27] is introduced to find the optimal parameter. The proposed method can be applied either to paired data (tumor vs. control in oncogenomic research) or to a single sample that has been adjusted for technical factors such as GC-content bias. The key feature of the CNV-TV method is its robust performance, i.e., the detection sensitivity and specificity remain stable when detecting CNVs of short length or near-normal copy number. Comparisons with several recently published CNV detection methods on both simulated and real data show that CNV-TV provides more robust and reliable detection of CNVs.
Methods
The first step in processing raw NGS data is to align (or map) the short reads to a reference genome (e.g., NCBI37/hg19) with alignment tools such as MAQ [28] and Bowtie [29]. The aligned reads are then piled up, and the read depth signal y_{i}, (i=1,2,…,n) is calculated to measure the density of the aligned reads, where n is the length of the read depth signal. There are several ways to calculate y_{i}; for example, Yoon et al. [10] used the count of aligned reads that fall in a non-overlapping window of size 100 bp, while Xie and Tammi [14] used a sliding window with 50% overlap.
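As a concrete illustration, computing y_i over non-overlapping windows amounts to binning aligned read positions. The sketch below is our own (the paper does not prescribe this code); the convention of assigning each read to the window containing its leftmost aligned base is an assumption:

```python
def read_depth_signal(read_starts, chrom_len, win=100):
    """Count aligned reads per non-overlapping window. Each read is
    assigned to the window containing its leftmost aligned position
    (0-based); reads outside [0, chrom_len) are ignored."""
    n_bins = (chrom_len + win - 1) // win
    y = [0] * n_bins
    for pos in read_starts:
        if 0 <= pos < chrom_len:
            y[pos // win] += 1
    return y

# toy example: 6 reads on a 400 bp region -> 4 windows of 100 bp
y = read_depth_signal([5, 50, 120, 130, 140, 399], 400)
```

A sliding-window variant (as in CNV-seq) would simply evaluate overlapping windows at a smaller step.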
The detection of CNVs from the read depth signal y_{i} can be viewed as a changepoint detection problem (see Figure 1, where the y_{i}'s are the black dots). There exist many methods to address this problem [30]. The total variation (TV) based regularization method has been widely used in the signal processing community to remove noise from signals [31]. In this paper, we use the total variation penalized least squares model shown in Eq. (1) to fit the RD profile, based on which a statistical test is used to detect CNVs:

min_{x} Σ_{i=1}^{n} (y_{i}−x_{i})^{2} + λ Σ_{i=1}^{n−1} ϕ(x_{i+1}−x_{i})   (1)
Figure 1. The processing result for the region chr21:37.0–37.1 Mbp (a zoom-in of the region between the vertical magenta lines in Figure 6). The black dots are the read depths; the blue line is the smoothed signal x_{i}; the red line is the corrected smoothed signal x̃_{i}; the horizontal green lines are the lower and upper cutoff values estimated from the histogram; and the thick red lines highlight the detected CNVs. Note that a small CNV of length 1.1 kbp at position 37.04 Mbp is detected.
In Eq. (1), the first term is the fitting error between y_{i} and the recovered smooth signal x_{i}; the second term is the total variation penalty: when a changepoint is present between x_{i} and x_{i+1}, a penalty ϕ(x_{i+1}−x_{i}) is imposed. The penalty function ϕ(x) is usually a symmetric function that is zero at the origin and monotonically increasing for positive x. The ideal choice of ϕ(x) is the ℓ0 norm of x. However, the ℓ0 norm yields an NP-hard problem, which is computationally prohibitive. Instead, convex or non-convex relaxations of the ℓ0 norm are of greater interest, such as the Huber function [32], the truncated quadratic [33], etc. In the recent compressed sensing literature [34,35], ℓ1 norm penalized models [36] have received wide attention because of their robust performance, as well as the availability of fast algorithms such as homotopy [37,38] and least angle regression (LARS) [39]. For these reasons, we select the ℓ1 norm as the penalty function ϕ(x).
λ is the penalty parameter, which controls the tradeoff between the fitting fidelity (or fitting error) and the penalty incurred by the changepoints. When λ→0, the effect of the penalty term is negligible and the solution is x_{i}=y_{i}. On the contrary, when λ→+∞, the effect of the fitting fidelity term is negligible and the solution is x_{i}=ȳ, indicating that there is no changepoint (here ȳ is the mean of y_{i}). As a result, as λ decreases from +∞ to 0, the changepoints are detected one by one according to their significance level. The notation x_{i}(λ), (i=1,2,…,n), which characterizes the evolution of the solution x_{i} with respect to λ, is termed the set of solutions.
To simplify the notation in Eq. (1), y and x are introduced as the vector forms of y_{i} and x_{i} respectively, i.e., y=[y_{1},y_{2},…,y_{n}]^{T} and x=[x_{1},x_{2},…,x_{n}]^{T}, where T denotes the transpose operation. The matrix form of Eq. (1) then reads:

min_{x} ∥y−x∥^{2} + λ∥Dx∥_{1}   (2)
where ∥·∥^{2} is the sum of squares of a vector; ∥·∥_{1} denotes the ℓ1 norm, i.e., the sum of the absolute values of the entries of a vector; and D is a matrix of size (n−1)×n that calculates the first order differences of the signal x (note that the first entry of Dx is x_{2}−x_{1}, the second is x_{3}−x_{2}, etc.):

    ⎡ −1  1  0  ⋯  0 ⎤
    ⎢  0 −1  1  ⋯  0 ⎥
D = ⎢  ⋮     ⋱  ⋱  ⋮ ⎥
    ⎣  0  ⋯  0 −1  1 ⎦
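For a fixed λ, the TV model of Eq. (2) can be solved by many algorithms. The paper uses a homotopy/LARS path solver; purely as an illustrative sketch (with a 1/2 factor on the fit term, and all names ours), the small routine below minimizes the same objective by exact coordinate descent on the box-constrained dual of the problem:

```python
def tv_denoise(y, lam, n_sweeps=2000):
    """Minimize 0.5*||y - x||^2 + lam*||Dx||_1 by exact coordinate
    descent on the dual min_{|u_i|<=lam} 0.5*||y - D^T u||^2, then
    recover the primal solution x = y - D^T u. A didactic sketch,
    not the path solver used in the paper."""
    n = len(y)
    u = [0.0] * (n - 1)                   # dual variables, |u[i]| <= lam
    for _ in range(n_sweeps):
        for i in range(n - 1):
            ul = u[i - 1] if i > 0 else 0.0
            ur = u[i + 1] if i + 2 < n else 0.0
            # residual entries r = y - D^T u at positions i and i+1
            r_i = y[i] - (ul - u[i])
            r_i1 = y[i + 1] - (u[i] - ur)
            # exact minimization along coordinate i, clipped to the box
            u[i] = max(-lam, min(lam, u[i] + (r_i1 - r_i) / 2.0))
    ul = [0.0] + u                        # u[j-1] terms of D^T u
    ur = u + [0.0]                        # u[j] terms of D^T u
    return [y[j] - (ul[j] - ur[j]) for j in range(n)]

# a clean one-jump profile: the exact solution shifts each segment
# mean toward the other by lam / segment_length (0.2 here)
y = [0.0] * 5 + [4.0] * 5
x = tv_denoise(y, lam=1.0)
```

The shrinkage of each segment mean toward its neighbor (by λ divided by the segment length) is exactly the ℓ1 bias that the piecewise-mean correction described later removes.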
Harchaoui and Lévy-Leduc [40] proposed to use the LASSO [41] to solve an alternative form of Eq. (2). In [42] we presented an algorithm to estimate directly the set of solutions of Eq. (2). In fact, Eq. (2) is equivalent to the following problem [43]:
where
Eq. (4) is an ℓ1 norm penalized regression, and thus can be solved efficiently using algorithms such as homotopy [37,38] and least angle regression (LARS) [39]. Once u is known, x can be obtained from it via Eq. (5) [44]
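The reparameterization underlying Eqs. (4) and (5) is simply that u collects the first-order differences of x, and x is recovered from u by cumulative summation plus an offset. A minimal sketch (our own names; the exact centering/projection of Eq. (5) is omitted):

```python
def diffs(x):
    """u = Dx: first-order differences, sparse when x is piecewise constant."""
    return [x[i + 1] - x[i] for i in range(len(x) - 1)]

def integrate(u, x0):
    """Recover x from its jumps u and starting level x0 by cumulative
    summation -- the role played by the matrix A in Eq. (4)."""
    x = [x0]
    for d in u:
        x.append(x[-1] + d)
    return x

x = [2.0, 2.0, 5.0, 5.0, 5.0, 1.0]      # piecewise constant, 2 jumps
u = diffs(x)                             # only two non-zero entries
```

Because u is sparse exactly when x is piecewise constant, an ℓ1 penalty on u is what makes the recovered signal piecewise constant.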
As mentioned previously, both the robust performance and the availability of efficient numerical algorithms motivated our choice of the ℓ1 norm based penalization. Another attractive property of the ℓ1 norm is that it yields sparse solutions [45], i.e., u is a sparse vector with a limited number of non-zero values. Consequently, x, the first order integral of u, is a piecewise constant signal, which is our basic assumption about the read depth signal.
If the set of solutions {x_{i}(λ_{k}): i=1,2,…,n; k=1,2,…,K} of Eq. (2) is known, the changepoints can be sorted according to their significance by tuning λ from λ_{1}=+∞ to λ_{K}=0. Here K is the number of transition points of the solution as λ decreases from +∞ to 0 [46], which can be estimated by a LASSO solver.
A user can make the final decision on which λ to use; however, an automatic approach to choosing this parameter is desirable. In the following, a model selection technique is employed to address this problem. In our setting, the degree of the model is the number of pieces in the smoothed read depth signal x_{i}, i.e., the number of changepoints plus one. Commonly used model selection methods include the L-curve [47], the Akaike information criterion [48], and the Schwarz information criterion (SIC) [27]. Here, the SIC is adopted because of its robust performance [49]; it was also used in our earlier study on detecting CNVs from aCGH data [50].
Since the ℓ1 norm based solution is biased [51], a correction is needed first. The solution x_{i}(λ_{k}), (i=1,2,…,n) at λ_{k} is first segmented into pieces such that within a piece x_{i}=x_{i+1}=…=x_{i+l} (here we omit the dependency on λ_{k}), while at the changepoints x_{i−1}≠x_{i} and x_{i+l}≠x_{i+l+1}. The correction is then carried out piece by piece: for each piece, the mean of y_{i} within the piece is used as the corrected amplitude, denoted x̃_{i} (see Figure 1, where x_{i} is the blue line and x̃_{i} is the red one). The SIC at λ_{k} is calculated as:

SIC(λ_{k}) = ∥y − x̃(λ_{k})∥^{2}/(2σ^{2}) + m log n   (7)
where m is the number of pieces, and σ^{2} is the variance of the noise, which can be estimated from a region that does not harbor any CNV. The optimal λ is achieved at (see Figure 2):

λ̂ = argmin_{λ_{k}} SIC(λ_{k})
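The correction-then-score procedure can be sketched in plain Python as below. The SIC expression used here, RSS/(2σ²) + m·log n, is the standard Gaussian known-variance form and is our assumption; the paper's exact constants may differ, and all function names are ours:

```python
import math

def debias(x, y, tol=1e-9):
    """Replace each constant piece of the l1-shrunken fit x by the
    mean of the raw signal y over that piece."""
    n, xt, start = len(x), [0.0] * len(x), 0
    for i in range(1, n + 1):
        if i == n or abs(x[i] - x[i - 1]) > tol:   # end of a piece
            mu = sum(y[start:i]) / (i - start)
            for j in range(start, i):
                xt[j] = mu
            start = i
    return xt

def sic(y, x_corr, m, sigma2):
    """SIC of an m-piece corrected fit, assuming Gaussian noise with
    known variance: RSS/(2*sigma^2) + m*log(n)."""
    rss = sum((a - b) ** 2 for a, b in zip(y, x_corr))
    return rss / (2.0 * sigma2) + m * math.log(len(y))

# choose among candidate segmentations of a noisy two-level signal
y = [0.1, -0.2, 0.0, 0.1, 5.1, 4.9, 5.0, 5.2]

def fit(changepoints):
    """Piecewise-constant mean fit for sorted interior changepoints."""
    bounds = [0] + changepoints + [len(y)]
    x = []
    for a, b in zip(bounds, bounds[1:]):
        x += [sum(y[a:b]) / (b - a)] * (b - a)
    return x

candidates = {0: [], 1: [4], 3: [2, 4, 6]}       # number of changepoints
scores = {k: sic(y, debias(fit(cps), y), len(cps) + 1, 0.04)
          for k, cps in candidates.items()}
best = min(scores, key=scores.get)               # SIC picks 1 changepoint
```

On this toy signal the single-changepoint model wins: the no-changepoint model pays a large fit penalty, while the over-segmented model pays an extra 2·log n in model complexity for a negligible fit improvement.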
Once λ̂ is known, the corresponding corrected smoothed signal of y_{i} is x̃_{i}(λ̂); a CNV can then be identified as a segment with significantly abnormal amplitude, i.e., an amplitude below or above some predefined cutoff values. These cutoff values can either be estimated from the noise variance, or estimated adaptively from the histogram of the read depth signal, since the distribution of the read depth signal can be modeled as a mixture of Poisson distributions [52]. After the region of a CNV is estimated, the copy number value can be estimated as the ratio between the read count of the CNV region in the test genome and that of the corresponding region in the reference or control genome.
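The calling step described above reduces to: threshold the corrected signal against lower/upper cutoffs, report maximal runs of abnormal windows, and estimate the copy number as a scaled test/control count ratio. A minimal sketch with our own names and interval conventions:

```python
def call_cnvs(x, low, high):
    """Report maximal runs of windows whose fitted amplitude is below
    `low` (deletion) or above `high` (duplication), as half-open
    window ranges (start, end, kind)."""
    calls, i, n = [], 0, len(x)
    while i < n:
        if x[i] < low or x[i] > high:
            kind = 'del' if x[i] < low else 'dup'
            j = i
            while j < n and ((x[j] < low) if kind == 'del' else (x[j] > high)):
                j += 1
            calls.append((i, j, kind))
            i = j
        else:
            i += 1
    return calls

def copy_number(test_counts, control_counts, normal_cn=2):
    """Copy number estimate: scaled test/control read-count ratio."""
    return normal_cn * sum(test_counts) / sum(control_counts)

x = [10, 10, 22, 23, 10, 4, 4, 10]           # corrected window amplitudes
calls = call_cnvs(x, low=6, high=15)         # one duplication, one deletion
cn = copy_number([30, 33], [20, 22])         # test region vs. control region
```

In practice the cutoffs `low` and `high` would come from the noise variance or from the histogram tails, as discussed above.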
Results
We evaluated the proposed method on both simulated and real data, and compared the results with six representative CNV detection methods.
A number of CNV detection methods for NGS data analysis have been published recently [10,13-23]; these methods differ in statistical model, parameters, methodology, programming language, operating system, input requirements, output format, etc. A comparative study of these methods has been conducted by us [24]. Based on these factors, as well as the availability of the methods and their citations in the literature, six popular and representative methods were selected: CNV-seq [14], FREEC [20], readDepth [21], CNVnator [22], SegSeq [13], and event-wise testing (EWT) [10].
The parameters of the selected CNV detection methods were tuned to achieve their best performance, in the sense that their sensitivities were maximized while the false positive rates were controlled below 1e-3. The criteria for tuning the parameters are as follows: (1) shared parameters are set the same for fairness. For example, the thresholds of CNV-seq and FREEC are set to 0.6; the p-value of CNV-seq, P_{init} and P_{merge} of SegSeq, and the false detection rate of readDepth are set to 1e-3; the bin size of CNVnator is set to 100 bp, since the recommended bin size for GC-content correction is 100 bp for both readDepth and EWT. The smallest H_{b} parameter (number of consecutive bins) of CNVnator is 8, so the 'filter' parameter of EWT is also set to 8; with this parameter, the smallest detectable CNV has a length of 800 bp, so the window size of FREEC and SegSeq is set to 800 bp. (2) The unique parameters of each method are tested after the shared parameters are fixed. In summary, the parameters are as follows: for CNV-seq, 'pvalue' is set to 1e-3 and 'log2threshold' is set to 0.6; the 'bin_size' of CNVnator is set to 100 bp. For readDepth, 'fdr' is set to 1e-3, 'overDispersion' to 1, 'readLength' to 36 bp, 'percCNGain' and 'percCNLoss' to 0.01, and 'chunkSize' to 5e6. For EWT, the bin size 'win' is set to 100 bp and 'filter' to 8. For SegSeq, the window size is set to 800 bp, and the breakpoint p-value 'p_bkp' and merge p-value 'p_merge' are set to 1e-3. For FREEC, 'window' is set to 800 bp, 'step' to 400 bp, and the threshold to 0.6. Parameters not mentioned here are set to their defaults.
For CNV-TV, the read depth signal was calculated from the BAM file with SAMtools [53], with a window size of 100 bp. The GC-content bias [54] was corrected using the profile file of RDXplorer [10]. The corrected read depth signal was then segmented by the proposed method. The MATLAB function SolveLasso from the SparseLab package (http://sparselab.stanford.edu/) was used to estimate the set of solutions of Problem (4). The noise variance σ in Eq. 7 was calculated as the median of the standard deviations of 10 segments of length 10 kbp, evenly distributed over the whole chromosome. The cutoff values to call a CNV were determined from the histogram of the corrected read depth signal, such that the left and right tail areas each cover five percent of the whole distribution.
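A median-based GC-content correction in the spirit of the RDXplorer profile approach can be sketched as follows; the binning scheme and the scaling r·m/m_GC used here are a common simplification, not necessarily the exact profile-file computation, and all names are ours:

```python
def _median(v):
    s = sorted(v)
    k = len(s) // 2
    return s[k] if len(s) % 2 else 0.5 * (s[k - 1] + s[k])

def gc_correct(rd, gc, n_bins=20):
    """Scale window i by m / m_gc(i): m is the global median depth,
    m_gc(i) the median depth of windows with similar GC fraction."""
    bin_of = lambda g: min(int(g * n_bins), n_bins - 1)
    m = _median(rd)
    groups = {}
    for r, g in zip(rd, gc):
        groups.setdefault(bin_of(g), []).append(r)
    med = {b: _median(v) for b, v in groups.items()}
    return [r * m / med[bin_of(g)] if med[bin_of(g)] > 0 else float(r)
            for r, g in zip(rd, gc)]

# AT-rich windows sequenced at depth 10, GC-rich windows at 20;
# after correction both groups sit at the global median depth
rd_corr = gc_correct([10, 10, 10, 20, 20, 20],
                     [0.3, 0.3, 0.3, 0.6, 0.6, 0.6])
```

Medians rather than means make the per-bin reference depth robust to the very CNVs one is trying to detect.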
Simulated data processing
To comprehensively test the performance of CNV-TV over a set of conditions (copy number c and single copy length l), simulations were carried out, with 1000 Monte Carlo trials for each condition. In the first experiment, the effect of the single copy length (the length of a red block in Figure 3) was tested, varying from 1 kbp to 6 kbp. In the second experiment, the effect of the copy number (the number of red blocks in Figure 3) was tested, varying from 0 to 6. The coverage was fixed at 5×.
Figure 3. A schematic demonstration of the generation of test genome (the lower figure) from the reference genome (the upper one) in the simulation study. A DNA section of single copy length l bp (the length of a single red block) starting from genomic locus b is copied and inserted c−2 times. In the displayed test genome (the lower), the copy number c (the number of red blocks) is 4.
The procedure of each Monte Carlo trial is as follows: (1) All the reported variations of chromosomes 1 and 21 of NCBI36/hg18 were removed, and 10 sequences of length 1 Mbp were extracted. Here, the removed CNVs were retrieved from the Database of Genomic Variants (DGV, http://projects.tcag.ca/variation/), which includes the discovered CNVs reported in the literature. Then, a sequence was selected randomly among the 10 and concatenated with its duplicate, yielding a reference genome of length 2 Mbp; this reference genome was also used as the control genome. Since we introduce only one CNV in each genome for efficient comparison, a genome of 2 Mbp is large enough. (2) A CNV with copy number c and single copy length l was introduced artificially to generate the test genome (see Figure 3, where the copy number varies from 2 to 4). Copy number 2 is assumed to be normal; copy numbers smaller than 2 (0 and 1) indicate deletion events, and copy numbers larger than 2 (3 and 6) indicate duplication events. (3) SNPs and indels were introduced at frequencies of 5 SNPs/kbp and 0.5 indels/kbp respectively, with the indels having random lengths of 1–3 bp. (4) Short reads were sampled from both the control and test genomes to simulate shotgun sequencing. In this case, the read counts follow a Poisson distribution with the density parameter proportional to the copy number. To simulate the non-uniform bias, the reads were sampled with a probability p, which is the product of the mappability and GC-content profiles. Each read has a length of 36 bp, to agree with the Illumina platform. We note that all the experiments in this paper used data simulating the Illumina platform, but the proposed method can be applied to other NGS platforms with longer read lengths. (5) The short reads were aligned to the reference genome using Bowtie [29].
Since a read may align to multiple loci, there are two main ways to handle this issue: one is to report only uniquely mapped reads [13], while the other is to select one locus randomly among the multiple alignments [22]. These two approaches have been discussed in [28,29,55]. In this work, the default setting of Bowtie (similar to MAQ's default policy [29]) is used, so that best alignments with fewer mismatches are reported; when a read has multiple alignments with the same quality score, a random locus is assigned. (6) Finally, CNV-TV and the other CNV detection methods were run. Their outputs, i.e., estimates of both changepoint positions and copy numbers, were compared with the ground truth (i.e., the parameters used to introduce CNVs into the test genome in Step (2)).
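Step (4) above — sampling read counts whose mean is proportional to the local copy number — can be sketched as follows (biases such as GC content and mappability are omitted here; Python's standard library lacks a Poisson sampler, so Knuth's method is used; all names are ours):

```python
import math
import random

def poisson(lam, rng):
    """Knuth's multiplicative method for a Poisson(lam) variate (lam > 0)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

def simulate_rd(copy_numbers, mean_depth, rng):
    """Per-window read counts: Poisson with mean proportional to the
    local copy number (copy number 2 = normal diploid depth)."""
    return [poisson(mean_depth * c / 2.0, rng) for c in copy_numbers]

rng = random.Random(42)
cn = [2] * 50 + [4] * 20 + [2] * 30      # one duplication of 20 windows
rd = simulate_rd(cn, 30.0, rng)
```

The duplicated stretch shows roughly double the mean count of the flanking normal regions, which is exactly the plateau the segmentation step is meant to recover.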
The false positive rate (FPR, equivalent to 1 − specificity) vs. true positive rate (TPR, or sensitivity) of these detection methods are listed in Tables 1 and 2. The FPR is defined as the ratio between the number of falsely detected CNV loci and the number of ground truth normal loci, in units of base pairs; the TPR is defined as the ratio between the number of correctly detected CNV loci and the number of ground truth CNV loci. The box plots (which include the minimum, the lower quartile, the median, the upper quartile and the maximum) of the estimates of both the break point locus and the copy number are displayed in Figures 4 and 5; the means and standard deviations of the estimation errors are shown in Additional file 1: Tables S1 and S2 respectively. Since CNV-seq, FREEC and SegSeq need control samples while readDepth, CNVnator and EWT do not, they are displayed in two groups. Correspondingly, 'CNV-TV1' indicates the test-control setting, in which the input x_{i} is the read depth ratio between the test and control samples; 'CNV-TV2' indicates the test-only setting. We found that the compared methods fail occasionally; for example, CNVnator degenerates when the length of the CNV is small (see Table 1), and readDepth and CNV-seq fail when the copy number is close to normal (c=2, see Table 2). In contrast, the estimates of CNV-TV change little with respect to both the single copy length l and the copy number c, indicating that CNV-TV is more robust than the other methods.
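The base-pair level FPR and TPR defined above can be computed as in the following sketch (our own names and half-open interval convention):

```python
def fpr_tpr(true_cnv, called_cnv, genome_len):
    """Base-pair level FPR/TPR from half-open intervals [start, end).
    FPR = falsely called bp / truly normal bp;
    TPR = correctly called bp / true CNV bp."""
    truth = [False] * genome_len
    for s, e in true_cnv:
        for i in range(s, e):
            truth[i] = True
    called = [False] * genome_len
    for s, e in called_cnv:
        for i in range(s, e):
            called[i] = True
    tp = sum(1 for t, c in zip(truth, called) if t and c)
    fp = sum(1 for t, c in zip(truth, called) if not t and c)
    n_cnv = sum(truth)
    return fp / (genome_len - n_cnv), tp / n_cnv

# truth: one 100 bp CNV; the call recovers 80 bp and adds 10 false bp
fpr, tpr = fpr_tpr([(100, 200)], [(120, 210)], 1000)
```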
Table 1. The detection FPR/TPR with different single copy length l
Table 2. The detection FPR/TPR with different copy number c
Figure 4. Box plots of the break point position estimates (first column) and the copy number estimates (second column) of CNVs for the different detection methods, with different single copy lengths: 1 kbp (first row), 2 kbp (second row) and 6 kbp (third row). The coverage is fixed at 5×, and the copy number is fixed at 6. The horizontal red dotted lines indicate the ground truth values; the red solid lines indicate the median values; and the red pluses indicate the outliers. It can be seen that our proposed CNV-TV method gives more robust estimates of both the break point position and the copy number (e.g., with smaller variance) than the other methods for CNVs of different single copy lengths.
Figure 5. Box plots of the break point position estimates (first column) and the copy number estimates (second column) of CNVs with different copy numbers: 0, 1, 3 and 6 (from the first row to the last). The coverage is fixed at 5×, and the single copy length is fixed at 6 kbp. The horizontal red dotted lines indicate the ground truth values b; the red solid lines indicate the median values; and the red pluses indicate outliers. Our proposed CNV-TV method gives more robust estimates of both the break point position and the copy number than the other methods for CNVs of different copy numbers.
Real data processing
To demonstrate the performance of CNV-TV on real data, and to compare the quality of the detected CNVs with other methods, mapped read data (BAM files) were downloaded from the 1000 Genomes Project at http://ftp.1000genomes.ebi.ac.uk/. The reads were sequenced from chromosome 21 of NA19240 (a Yoruba female) on the Illumina Genome Analyzer (SLX). There are 33.4 million reads uniquely aligned to NCBI36/hg18.
Figure 6 shows the read depth signal (blue line) as well as the detected CNV regions (red dots below); an enlarged view of the region 37.0–37.1 Mbp (the region within the two vertical magenta lines) is displayed in Figure 1. The overlaps of the CNVs detected by CNV-TV and the other six methods, as well as those listed in DGV [2], are displayed as an 8-way Venn diagram, whose unit is a block of size 100 bp. Since an 8-way Venn diagram is too complicated to visualize (there are 2^{8}−1=255 domains in total), it is tabulated in a binary manner, as shown in Table 3, which only lists the domains with more than 1000 blocks. For example, the first column means that there are 31144 blocks uniquely detected by SegSeq but not detected by any other method or listed in DGV. Here we used the beta version of DGV, where CNVs can be retrieved by sample, platform, study, etc.; the filter query was 'external sample id = NA19240, chromosome = 21, assembly = NCBI36/hg18, variant type = CNV'. Table 3 shows that most of the CNVs detected by CNV-TV are consistent with the other methods, demonstrating the robustness and reliability of our proposed method. Nevertheless, CNV-TV also reported a small number of uniquely detected CNVs with lengths around 1 kbp, e.g., the region at 37.04 Mbp in Figure 1.
Figure 6. Chromosome 21 of NA19240. The blue curve is the read depth signal, the red dots below are detected CNV regions. Zoom in of the region within the two vertical magenta lines is displayed in Figure 1.
Table 3. 8-way tabulated Venn diagram of the CNVs detected in the sample NA19240
The F-score [56] measures the quality of the overlap between two regions; it takes values between 0 and 1, with a low score indicating a poor overlap and a high score a good one. The F-score is calculated as F = 2PR/(P+R), where P is the precision (the percentage of detected CNVs that overlap with the ground truth CNVs from DGV) and R is the recall (the percentage of ground truth CNVs that overlap with the detected CNVs). Table 4 lists the top 10 F-scores of each method; the corresponding P and R are listed in Additional file 1 (Tables S3 and S4). It can be seen that the CNV-TV method provides CNVs with higher F-scores, indicating better quality compared with the other methods.
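A minimal sketch of the F = 2PR/(P+R) computation on base-pair intervals (our own conventions; the DGV lookup is omitted):

```python
def f_score(detected, truth):
    """F = 2PR/(P+R) on half-open bp intervals: P is the fraction of
    detected bases overlapping truth, R the fraction of truth bases
    detected; 0.0 when there is no overlap."""
    det = set()
    for s, e in detected:
        det.update(range(s, e))
    tru = set()
    for s, e in truth:
        tru.update(range(s, e))
    inter = len(det & tru)
    if not inter:
        return 0.0
    p, r = inter / len(det), inter / len(tru)
    return 2 * p * r / (p + r)

# 80 bp call vs. 100 bp truth, sharing 60 bp: P = 0.75, R = 0.6
f = f_score([(100, 180)], [(120, 220)])
```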
Table 4. F-scores of the top 10 CNVs detected by each method from the sample NA19240
Five more sequence datasets were also processed, sampled from chromosome 21 of a CEU trio of European ancestry (NA12878 the daughter, NA12891 the father, and NA12892 the mother), a Yoruba Nigerian female (NA19238), and a male (NA19239). The 8-way Venn diagram analysis shows that on average 98.7% of the CNVs detected by CNV-TV overlap with at least one CNV detected by another method or listed in DGV. The corresponding numbers are 97.8% for CNV-seq, 97.1% for FREEC, 89.5% for readDepth, 85.2% for CNVnator, 22.4% for SegSeq, and 78.3% for EWT.
Table 5 summarizes the average distributions of the F-scores of the CNVs detected by each method over the six sequence datasets. Each detected CNV is categorized into one of 10 classes (0–0.1, 0.1–0.2, …, 0.9–1) according to its F-score. The table shows that CNV-TV reports fewer low-quality detections (F-score below 0.1) and more high-quality detections (F-score above 0.5), indicating its robust performance.
Table 5. Average distribution (in percentage) of the F-scores of the detected CNVs in the real data processing
The experiments were carried out on a desktop computer with a dual-core 2.8 GHz x86 64-bit processor, 6 GB of memory, and openSUSE 11.3. CNV-TV finished the processing in 112.2 seconds with a peak memory usage of 383.4 MB. The computation times of CNV-seq, FREEC, readDepth, CNVnator, SegSeq, and EWT were 251.5, 319.6, 134.8, 162.6, 248.8, and 268.9 seconds, with memory usage of 27.1, 7.1, 1060.1, 101.9, 3508.4, and 156.6 MB, respectively. This shows that CNV-TV is the fastest in computation, with reasonable memory usage.
Conclusion and discussion
In this paper, we proposed the CNV-TV method, based on total variation penalized least squares optimization, to detect copy number variations from next generation sequencing data. The proposed method assumes that the read depth signal is piecewise constant, and that plateaus and basins of the read depth signal correspond to duplications and deletions respectively. Three major points should be highlighted: (1) The proposed CNV-TV method is largely automatic: the SIC is used to tune the penalty parameter controlling the tradeoff between TPR and FPR, which is otherwise cumbersome to do. (2) The method can be applied either to matched pair data or to single-sample data adjusted for technical factors such as GC-content bias. (3) The method offers better robustness, reliability, and detection resolution. We compared the CNV-TV method with six other CNV detection methods. The simulation studies show that the detection performance of CNV-TV, in terms of break point position and copy number estimation, is more robust than that of the six other methods over a range of conditions (e.g., different single copy lengths and copy numbers). The real data processing demonstrates that CNV-TV offers higher resolution for detecting CNVs of smaller size. In addition, the method detects CNVs with higher F-scores, indicating better quality compared with the other methods.
The simulation results (Tables 1, 2, and Additional file 1: Tables S1 and S2) show that CNV-TV gives slightly lower FPR and estimation error than FREEC when the single copy length is 6 kbp and the copy number is 0. The real data processing results (Tables 4 and 5) indicate that CNV-TV detects CNVs with higher F-scores than FREEC. However, both the simulation and real data processing results show that the overall performances of FREEC and CNV-TV are similar, since both formulate CNV detection as changepoint detection based on a sparse representation and use the LASSO to solve the problem. It is therefore worthwhile to point out their differences and connections. First, the two methods use different models: FREEC uses the method proposed by Harchaoui and Lévy-Leduc [40], in which the matrix A in Eq. (4) is an n×n lower triangular matrix with non-zero elements equal to one; in our CNV-TV method, the A matrix is an n×(n−1) triangular matrix. These two matrices are closely related, differing up to the projection procedure implied in Eq. (5). Second, the methods differ in how they determine the number of changepoints: FREEC uses the LASSO to select a set of candidate changepoints, whose number is upper-bounded by a predefined value K_{max}, and then uses reduced dynamic programming (rDP) to determine the best number of changepoints among the candidates; CNV-TV uses the SIC, which takes the complexity of the model into account. The computational cost of rDP grows with K_{max}; when K_{max} is large, which is especially true for whole-genome data analysis, CNV-TV can therefore save computation significantly.
Our proposed CNV-TV is based on the DOC profile, and therefore we currently compare it with methods also based on DOC. Because large events can be detected with the DOC profile while small events can be detected with the PEM signature, these two signatures provide complementary information. A good strategy is to combine the two signatures, as described in [16,17,57]: these methods use the DOC signature to detect the coarse region of a CNV, and then estimate the fine loci of the break points with the PEM signature. In addition, the analysis of tandem duplication regions is also challenging, since one read may have multiple alignment loci. A simple way to alleviate this issue is to assign a locus randomly; another is to increase the read length, which decreases the frequency of multiple alignments. He et al. [58] proposed to use discordant read pairs and unmapped reads spanning the break points to detect CNVs, with which the precision of the detected CNV break points can reach base-pair level. Our future work will therefore consider incorporating multiple signatures into the algorithm design, which could further improve CNV detection accuracy.
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
JD, JGZ, YPW and HWD designed this study. JD and JGZ wrote the code for the comparative study. JD wrote the manuscript; JGZ and YPW revised the manuscript. All authors have read and approved the final version of the manuscript.
Acknowledgements
This study was partially supported by NIH, NSF, and Shanghai Eastern Scholarship Program.
References

Redon R: Global variation in copy number in the human genome.
Nature 2006, 444(7118):444-454.

Iafrate AJ, Feuk L, Rivera MN, Listewnik ML, Donahoe PK, Qi Y, Scherer SW, Lee C: Detection of large-scale variation in the human genome.
Nat Genet 2004, 36(9):949-951.

Sebat J, Lakshmi B, Malhotra D, Troge J, Lese-Martin C, Walsh T, Yamrom B, Yoon S, Krasnitz A, Kendall J, Leotta A, Pai D, Zhang R, Lee YH, Hicks J, Spence SJ, Lee AT, Puura K, Lehtimäki T, Ledbetter D, Gregersen PK, Bregman J, Sutcliffe JS, Jobanputra V, Chung W, Warburton D, King MC, Skuse D, Geschwind DH, Gilliam TC, Ye K, Wigler M: Strong association of de novo copy number mutations with autism.
Science 2007, 316:445-449.

Stefansson H: Large recurrent microdeletions associated with schizophrenia.
Nature 2008, 455:232-236.

Campbell PJ, Stephens PJ, Pleasance ED, O’Meara S, Li H, Santarius T, Stebbings LA, Leroy C, Edkins S, Hardy C, Teague JW, Menzies A, Goodhead I, Turner DJ, Clee CM, Quail MA, Cox A, Brown C, Durbin R, Hurles ME, Edwards PAW, Bignell GR, Stratton MR, Futreal PA: Identification of somatically acquired rearrangements in cancer using genome-wide massively parallel paired-end sequencing.
Nat Genet 2008, 40:722-729.

Rovelet-Lecrux A, Hannequin D, Raux G, Meur NL, Laquerrière A, Vital A, Dumanchin C, Feuillette S, Brice A, Vercelletto M, Dubas F, Frebourg T, Campion D: APP locus duplication causes autosomal dominant early-onset Alzheimer disease with cerebral amyloid angiopathy.
Nat Genet 2006, 38:24-26.

Yang TL, Chen XD, Guo Y, Lei SF, Wang JT, Zhou Q, Pan F, Chen Y, Zhang ZX, Dong SS, Xu XH, Yan H, Liu X, Qiu C, Zhu XZ, Chen T, Li M, Zhang H, Zhang L, Drees BM, Hamilton JJ, Papasian CJ, Recker RR, Song XP, Cheng J, Deng HW: Genome-wide copy-number-variation study identified a susceptibility gene, UGT2B17, for osteoporosis.
Am J Hum Genet 2008, 83(6):663-674.

Freeman JL, Perry GH, Feuk L, Redon R, McCarroll SA, Altshuler DM, Aburatani H, Jones KW, Tyler-Smith C, Hurles ME, Carter NP, Scherer SW, Lee C: Copy number variation: new insights in genome diversity.
Genome Res 2006, 16:949-961.

Stankiewicz P, Lupski JR: Structural variation in the human genome and its role in disease.
Annu Rev Med 2010, 61:437-455.

Yoon S, Xuan Z, Makarov V, Ye K, Sebat J: Sensitive and accurate detection of copy number variants using read depth of coverage.
Genome Res 2009, 19:1586-1592.

Korbel JO, Urban AE, Affourtit JP, Godwin B, Grubert F, Simons JF, Kim PM, Palejev D, Carriero NJ, Du L, Taillon BE, Chen Z, Tanzer A, Saunders ACE, Chi J, Yang F, Carter NP, Hurles ME, Weissman SM, Harkins TT, Gerstein MB, Egholm M, Snyder M: Paired-end mapping reveals extensive structural variation in the human genome.
Science 2007, 318:420-426.

Mills RE: Mapping copy number variation by population-scale genome sequencing.
Nature 2011, 470(7332):59-65.

Chiang DY, Getz G, Jaffe DB, O’Kelly MJT, Zhao X, Carter SL, Russ C, Nusbaum C, Meyerson M, Lander ES: High-resolution mapping of copy-number alterations with massively parallel sequencing.
Nat Methods 2009, 6:99-103.

Xie C, Tammi MT: CNV-seq, a new method to detect copy number variation using high-throughput sequencing.
BMC Bioinformatics 2009, 10:80.

Simpson JT, McIntyre RE, Adams DJ, Durbin R: Copy number variant detection in inbred strains from short read sequence data.
Bioinformatics 2010, 26(4):565-567.

Medvedev P, Fiume M, Dzamba M, Smith T, Brudno M: Detecting copy number variation with mated short reads.
Genome Res 2010, 20(11):1613-1622.

Waszak SM, Hasin Y, Zichner T, Olender T, Keydar I, Khen M, Stütz AM, Schlattl A, Lancet D, Korbel JO: Systematic inference of copy-number genotypes from personal genome sequencing data reveals extensive olfactory receptor gene content diversity.
PLoS Comput Biol 2010, 6:e1000988.

Kim TM, Luquette LJ, Xi R, Park PJ: rSW-seq: algorithm for detection of copy number alterations in deep sequencing data.
BMC Bioinformatics 2010, 11:432.

Ivakhno S, Royce T, Cox AJ, Evers DJ, Cheetham RK, Tavaré S: CNAseg–a novel framework for identification of copy number changes in cancer from second-generation sequencing data.
Bioinformatics 2010, 26(24):3051-3058.

Boeva V, Zinovyev A, Bleakley K, Vert JP, Janoueix-Lerosey I, Delattre O, Barillot E: Control-free calling of copy number alterations in deep-sequencing data using GC-content normalization.
Bioinformatics 2011, 27(2):268-269.

Miller CA, Hampton O, Coarfa C, Milosavljevic A: ReadDepth: a parallel R package for detecting copy number alterations from short sequencing reads.
PLoS ONE 2011, 6:e16327.

Abyzov A, Urban AE, Snyder M, Gerstein M: CNVnator: an approach to discover, genotype, and characterize typical and atypical CNVs from family and population genome sequencing.
Genome Res 2011, 21(6):974-984.

Gusnanto A, Wood HM, Pawitan Y, Rabbitts P, Berri S: Correcting for cancer genome size and tumour cell content enables better estimation of copy number alterations from nextgeneration sequence data.
Bioinformatics 2012, 28:40-47.

Duan J, Zhang JG, Deng HW, Wang YP: Comparative studies of copy number variation detection methods for next generation sequencing technologies.
PLoS ONE 2013, 8(3):e59128.

Hormozdiari F: Combinatorial algorithms for structural variation detection in highthroughput sequenced genomes.
Genome Res 2009, 19:1270-1278.

Magi A: Bioinformatics for next generation sequencing data.
Genes 2010, 1:294-307.

Schwarz G: Estimating the dimension of a model.
Ann Stat 1978, 6:461-464.

Li H: The sequence alignment/map format and SAMtools.
Bioinformatics 2009, 25(16):2078-2079.

Langmead B, Trapnell C, Pop M, Salzberg SL: Ultrafast and memory-efficient alignment of short DNA sequences to the human genome.
Genome Biol 2009, 10(3):R25.

Lai WR: Comparative analysis of algorithms for identifying amplifications and deletions in array CGH data.
Bioinformatics 2005, 21:3763-3770.

Chambolle A, Lions PL: Image recovery via total variation minimization and related problems.
Numer Math 1997, 76:167-188.

Blake A, Zisserman A: Visual Reconstruction. Cambridge: The MIT Press; 1987.

Candès EJ, Wakin MB: An introduction to compressive sampling.

Tropp JA: Just relax: convex programming methods for identifying sparse signals in noise.

Osborne MR, Presnell B, Turlach BA: A new approach to variable selection in least squares problems.
IMA J Numerical Anal 2000, 20(3):389-403.

Malioutov DM: Homotopy continuation for sparse signal representation. In Proc. IEEE ICASSP, Volume V. Philadelphia; 2005:733-736.

Efron B, Hastie T, Johnstone I, Tibshirani R: Least angle regression.
Ann Stat 2004, 32(2):407-499.

Harchaoui Z, Lévy-Leduc C: Catching changepoints with Lasso.

Tibshirani R: Regression shrinkage and selection via the Lasso.

Duan J, Zhang JG, Lefante J, Deng HW, Wang YP: Detection of copy number variation from next generation sequencing data with total variation penalized least square optimization. In IEEE International Conference on Bioinformatics and Biomedicine Workshops. Atlanta; 2011:312.

Tibshirani R, Bien J, Friedman J, Hastie T, Simon N, Taylor J, Tibshirani RJ: Strong rules for discarding predictors in lasso-type problems.

Duan J, Soussen C, Brie D, Idier J, Wang YP: A sufficient condition on monotonic increase of the number of nonzero entry in the optimizer of the ℓ1-norm penalized least-square problem. Tech. rep., Department of Biomedical Engineering, Tulane University; 2011.

Nikolova M: Local strong homogeneity of a regularized estimator.
SIAM J Appl Mathematics 2000, 61(2):633-658.

Duan J, Soussen C, Brie D, Idier J, Wang YP: On LARS/homotopy equivalence conditions for overdetermined LASSO.

Hansen P: Analysis of discrete ill-posed problems by means of the L-curve.
SIAM Rev 1992, 34:561-580.

Akaike H: A new look at the statistical model identification.
IEEE Trans Automat Contr 1974, 19(6):716-723.

Markon KE, Krueger RF: An empirical comparison of information-theoretic selection criteria for multivariate behavior genetic models.
Behavior Genetics 2004, 34(6):593-610.

Chen J, Wang YP: A statistical change point model approach for the detection of DNA copy number variations in array CGH data.

Zhang CH: Discussion: One-step sparse estimates in nonconcave penalized likelihood models.
Ann Stat 2008, 36(4):1509-1533.

Klambauer G, Schwarzbauer K, Mayr A, Clevert DA, Mitterecker A, Bodenhofer U, Hochreiter S: cn.MOPS: mixture of Poissons for discovering copy number variations in next-generation sequencing data with a low false discovery rate.
Nucleic Acids Res 2012, 40(9):e69.

Li H: The sequence alignment/map format and SAMtools.
Bioinformatics 2009, 25(16):2078-2079.

Bentley DR: Accurate whole human genome sequencing using reversible terminator chemistry.
Nature 2008, 456:53-59.

Alkan C, Kidd JM, Marques-Bonet T, Aksay G, Antonacci F, Hormozdiari F, Kitzman JO, Baker C, Malig M, Mutlu O, Sahinalp SC, Gibbs RA, Eichler EE: Personalized copy number and segmental duplication maps using next-generation sequencing.
Nat Genet 2009, 41:1061-1067.

Medvedev P, Stanciu M, Brudno M: Computational methods for discovering structural variation with next-generation sequencing.
Nat Methods 2009, 6:S13-S20.

Zhu M, Need AC, Han Y, Ge D, Maia JM, Zhu Q, Heinzen EL, Cirulli ET, Pelak K, He M, Ruzzo EK, Gumbs C, Singh A, Feng S, Shianna KV, Goldstein DB: Using ERDS to infer copy-number variants in high-coverage genomes.
Am J Hum Genet 2012, 91(3):408-421.

He D, Hormozdiari F, Furlotte N, Eskin E: Efficient algorithms for tandem copy number variation reconstruction in repeat-rich regions.
Bioinformatics 2011, 27(11):1513-1520.