
Centroid based clustering of high throughput sequencing reads based on n-mer counts

Abstract

Background

Many problems in computational biology require alignment-free sequence comparisons. One of the common tasks involving sequence comparison is sequence clustering. Here we apply methods of alignment-free comparison (in particular, comparison using sequence composition) to the challenge of sequence clustering.

Results

We study several centroid based algorithms for clustering sequences based on word counts. A study of their performance shows that the k-means algorithm, with or without data whitening, is computationally efficient. A higher clustering accuracy can be achieved using the soft expectation maximization method, whereby each sequence is attributed to each cluster with a specific probability. We implement an open source tool for alignment-free clustering. It is publicly available from GitHub: https://github.com/luscinius/afcluster.

Conclusions

We show the utility of alignment-free sequence clustering for high throughput sequencing analysis despite its limitations. In particular, it allows one to perform assembly with reduced resources and a minimal loss of quality. The major factor affecting performance of alignment-free read clustering is the length of the read.

Background

The most common technique for establishing a homology between sequences is sequence alignment (e. g., [1]). Numerous algorithms have been developed for aligning sequences. These include exhaustive dynamic programming algorithms as well as faster heuristic algorithms (e. g., BLAST [2]). In these algorithms each alignment is evaluated using a score matrix, wherein the score matrix depends on the expected similarity between the aligned sequences. However, in some cases one needs to compare related sequences that are divergent, or related sequences that are not homologous at all. One example of biologically related sequences that do not share a common ancestor is two genes with different evolutionary histories found on the same genome. Another example comes from sequencing, where non-overlapping reads originate from the same gene (“contig”). These cases do not allow for a direct sequence alignment. They may, however, be identified as related using alignment-free sequence comparison.

Alignment-free methods are less accurate than direct sequence alignments. Thus, they are only used as a last resort when direct alignment is either impossible or computationally prohibitive. A common method for alignment-free sequence comparison is comparison via the word (n-mer) counts [3-5]. In this approach a nucleotide sequence of arbitrary length L is represented by the counts of the 4^n different n-mers. It is no surprise that such comparisons are less accurate than sequence alignment, as replacing the sequence with the vector of its word counts results in a loss of information.

High throughput sequencing data analysis is a relatively novel area of computational biology where application of alignment-free sequence comparison is especially desirable. Indeed, high throughput sequencing runs generate a vast number of relatively short (typically 30-500 bp) reads. These factors make direct comparison with a reference complex if not impossible. One can think of different applications of alignment-free methods to sequencing data. In particular, an assembly free comparison of genomes using reads generated from them is possible [6]. In the present work we focus on a different application: clustering of reads coming from different genes and possibly different species based on n-mer counts. One case where such clustering is desirable is a sample so large that the number of reads makes direct assembly computationally prohibitive. In this case clustering the reads, rather than randomly splitting the sample, is preferable.

In the present study we focus on small values of n, such that L ≫ 4^n. In other words, we assume that the compared sequences are sufficiently long to avoid the situation where L ≤ 4^n and almost all counts are either zero or one. In particular, for 30 bp reads the appropriate value of n is 1, possibly 2; for 400 bp reads the appropriate values of n are 1, 2 or 3.

We implement a word count based clustering algorithm for short nucleotide sequences (“reads”)1. In this approach each sequence is represented by the vector of its n-mer counts (or n-mer frequencies). We focus on centroid based clustering because of its linear space and time complexity. In the framework of centroid based clustering each centroid is characterized by its n-mer frequencies. In particular, we study the expectation maximization (EM) algorithm, which is a generalization of the k-means algorithm. Each individual sequence is attributed to a centroid based on the likelihood of obtaining the observed word counts within a given model. In this context the Kullback-Leibler (KL) divergence naturally arises as the log likelihood function. We also study centroid based algorithms involving other distances. We evaluate the performance of the different algorithms using simulated read data. Finally, we apply our clustering methods to a real sequencing run.

Results and discussion

Comparison of centroid based algorithms

We study several centroid based clustering algorithms in the context of alignment-free sequence comparison. From the point of view of these algorithms each sequence is represented by the vector of its word (n-mer) counts. We restrict ourselves to relatively short sequences, with lengths typical of sequencer reads. We implement the expectation maximization algorithm using the Kullback-Leibler divergence as the log likelihood function. We also consider centroid based clustering algorithms using the L2 distance (the regular distance in Euclidean space) and the d2 distance [6], as well as the k-means algorithm, which is the L2 algorithm with preliminary whitening of the data. All these algorithms have an established convergence property. We additionally implement centroid based clustering algorithms using some other distance functions: the symmetrized KL divergence, d2* [6] and the χ2 statistic (see Methods section for a detailed description). The latter algorithms do not possess a known convergence property.

We compared the performance of these algorithms using 50 randomly chosen subsets of human reference mRNA sequences. Each subset consisted of 1000 sequences. For each of the 50 subsets we generated several sets of simulated reads, different sets containing simulated reads of different length. For each dataset we performed clustering using the different methods. For k-means clustering we used the implementation available in the scipy package. We evaluated the classification performance (recall) for a sequence as the maximal fraction of reads from this sequence which fall into the same cluster. We compared the distribution of the recall rates obtained for each sequence in each of the datasets. The results are presented in Tables 1, 2, 3 and 4. In this simulation the EM algorithm showed a higher performance for the word size n = 1 with 4 or 5 clusters, n = 2 with 2 clusters and n = 3 with 2 clusters. The L2 algorithm showed a higher performance for n = 2 with 4 or 5 clusters and for n = 3 with 4 or 5 clusters. Note that the L2 and d2 algorithms operate with the word frequencies normalized for each read individually, while the k-means and the EM methods use the normalization related to several reads (cf. Equation (5)). Methods from the first group (L2 and d2) generally exhibit a better performance for a larger number of clusters.

Table 1 Recall rates for simulated human reads of different length, n = 1
Table 2 Recall rates for simulated human reads of different length, n = 2
Table 3 Recall rates for simulated human reads of different length, n = 3
Table 4 Recall rates for simulated human reads of different length, various distance functions, n = 2

Even though these differences can be considered statistically significant, their magnitude is rather small for practical purposes. Based on this fact we recommend using the L2 or k-means algorithm for computational efficiency. Indeed, the EM algorithm involves the computationally expensive evaluation of natural logarithms, while the L2 and k-means algorithms involve only arithmetic operations. This can make the run time of the EM algorithm exceed that of the L2 and k-means algorithms by more than a factor of 10.

We evaluated the performance of the algorithms involving the symmetrized KL divergence, d2* and the χ2 distance on the same datasets for the word length n = 2. The d2* algorithm failed to converge in 21 out of 900 runs. The results are shown in Table 4. None of the three mentioned algorithms exhibits a performance better than that of the EM or k-means algorithm. Taking into account the fact that the convergence of these algorithms is not established (and the numerical experiment in fact proves the lack of guaranteed convergence for the d2* algorithm), our data exhibit no benefit of using these methods.

Tables 1, 2, 3 and 4 show that the recall rate increases with increasing read length, the number of reads being held constant. This conforms to our intuition that with increasing read length the word counts increase, resulting in a smaller effect of statistical fluctuations. Tables 1, 2 and 3 show that the recall rate has almost no dependence on the word size n for n = 1, 2, 3.

We performed a set of simulations with different numbers of reads generated from the same source sequences in order to study the dependence of the recall rate on the number of reads. The results are shown in Table 5. For a smaller number of reads the recall rate is lower; it gradually increases and stabilizes as the number of reads grows. Our explanation for this fact is that for a small number of reads some of the source sequences have only a few reads, and the recall rate is significantly influenced by the pseudocounts.

Table 5 Recall rates for simulated human reads, different number of reads, n = 2

We performed a series of simulations for different sequencing error rates. The results are shown in Tables 6, 7 and 8. As expected, the recall rate decreases monotonically when the sequencing error rate increases for all clustering methods.

Table 6 Recall rates for simulated human reads, n = 1, different error rates
Table 7 Recall rates for simulated human reads, n = 2, different error rates
Table 8 Recall rates for simulated human reads, n = 3, different error rates

Soft EM clustering

We implemented the soft EM clustering algorithm using the KL divergence as the log likelihood function. We performed soft EM clustering of simulated viral reads in a human background using single stranded and double stranded DNA and RNA viruses as well as retroviruses. We generated 50 datasets for each read length and built the ROC curve for each dataset. The area under the curve (AUC) averaged over the 50 datasets is shown in Figures 1, 2, 3 and 4.

Figure 1

AUC for selected viral sequences, 2 clusters. Viral reads clustered with background human reads, AUC is calculated for each virus.

Figure 2

AUC for selected viral sequences, 3 clusters. Viral reads clustered with background human reads, AUC is calculated for each virus.

Figure 3

AUC for selected viral sequences, 4 clusters. Viral reads clustered with background human reads, AUC is calculated for each virus.

Figure 4

AUC for selected viral sequences, 5 clusters. Viral reads clustered with background human reads, AUC is calculated for each virus.

The AUC and its dependence on the read length are determined by the interplay of several factors. These include the choice of the likelihood function in the EM algorithm, the uniformity of the sequence composition of the studied viral sequences, as well as the choice of the background reads. Our results indicate that double stranded viruses as well as single stranded RNA viruses generally show a higher AUC than single stranded DNA viruses and retroviruses. Note that the lower AUC is a consequence of the change of the nucleotide composition across the sequence (Figure 5).

Figure 5

Assignment of reads depending on position. The fraction of reads assigned to the dominant (TPR) and other than the dominant (FPR) cluster as a function of the position in the genome of the Hepatitis B virus. The data are smoothed by averaging over the window of length 50. Different regions of the genome cluster differently, forming consistent patterns as a consequence of the changing nucleotide composition across the genome.

Application to real data

A real world scenario where alignment-free sequence clustering is desirable is the assembly of an HTS run containing a large number of reads. It can be the case that the available computational resources (in particular, memory) are not sufficient to perform a direct assembly. In such instances one may need to split the reads into several sets and assemble each set individually, merging the contigs afterwards. We explore the random splitting and the educated splitting (using alignment-free clustering) of an Illumina run containing 22 million cDNA reads from a nasal swab. The results are shown in Table 9. It turns out that the educated splitting results in a better assembly (more reads mapping back onto contigs). The difference between the hard EM and the k-means partitioning is rather small, and these two partitionings improve the assembly compared to the random splitting. The soft EM leads to a better assembly than both the hard EM and the k-means partitioning. The reason for this is that the soft EM algorithm allows a single read to be assigned to multiple clusters simultaneously. This provides more possibilities for the reads from the same contig to be clustered together and consequently assembled. In fact, for a small value of the Velvet hash length (namely, 21) the soft EM partitioning results in more reads mapping back to contigs than assembling the run as a whole. We speculate that the explanation for this observation is that the small value of the hash length results in a larger number of contigs at the cost of specificity. Partitioning the reads in an educated way makes the assembly of each subset more specific.

Table 9 Assembly statistics for clustering and random splitting of a real sample

afcluster software

We implemented the afcluster software for centroid based alignment-free clustering of nucleotide sequences based on word (n-mer) counts. Word counts can be computed using overlapping or non-overlapping n-mers, optionally concatenating the sequence with its reverse complement. When no reading frame is defined we recommend using overlapping n-mer counts and/or stacking with the reverse complement. Non-overlapping n-mer counts can be used to compare the codon usage of coding sequences.
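For illustration, a minimal Python sketch of the two counting modes described above is given here. It is not the afcluster C++ implementation; the function name and defaults are ours.

    from itertools import product

    def nmer_counts(seq, n, overlapping=True, add_reverse_complement=False):
        """Count the n-mers of a nucleotide sequence over the 4**n possible words."""
        comp = str.maketrans("ACGT", "TGCA")
        if add_reverse_complement:
            # stack the sequence with its reverse complement before counting
            seq = seq + seq.translate(comp)[::-1]
        counts = {"".join(w): 0 for w in product("ACGT", repeat=n)}
        step = 1 if overlapping else n      # sliding window vs. consecutive, codon-like words
        for i in range(0, len(seq) - n + 1, step):
            word = seq[i:i + n]
            if word in counts:              # words with ambiguous bases are skipped
                counts[word] += 1
        return counts

    print(nmer_counts("ACGTACGTAC", 2))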

Implemented clustering algorithms include the expectation maximization algorithm, the k-means algorithm, as well as centroid algorithms using different distance types: the L2 distance, the d2 distance, the d2* distance, the symmetrized KL divergence and the χ2 distance. One can also perform consensus clustering. In this case regular clustering is performed a specified number of times, and the consensus partitioning is built based on the patterns of individual samples clustering together. Consensus clustering mitigates the dependence of the resulting partitioning on the random initialization inherent to centroid based methods. However, this is achieved at the cost of O(N² log N) time complexity and O(N²) space complexity for an input consisting of N sequences.

The software also allows soft EM clustering, in which case each sequence is only assigned to each cluster with some probability. This method gives some estimate of the clustering accuracy without the overhead of the consensus clustering. The ability to simultaneously assign the same sequence to several clusters is also useful when splitting a sample before performing assembly.

afcluster software is implemented in C++. It has been compiled and tested using GNU GCC. The tool is open source under the GNU GPL license. It is publicly available from https://github.com/luscinius/afcluster as a source code together with the documentation. It is also available as Additional file 1.

Conclusions

Alignment-free sequence clustering is a useful tool for sequence analysis. However, it has the limitations found with other clustering algorithms based on word counts. A major potential confound is the assumption that for any given gene or organism there is a consistent frequency distribution of the counted words. However, there are examples where word frequencies change across the same genome [7-9]. Also, viral genomes are systematically affected by the interaction with the host, which leads to host mimicry [10, 11]. A separate study would be required to address these issues.

Centroid based clustering offers linear time and space complexity, which is critical for large datasets, in particular HTS reads. Even though the hard expectation maximization algorithm using the Kullback-Leibler divergence as the log likelihood function shows superior performance in some cases, it is computationally advantageous to use the k-means algorithm, as the time gain outweighs the possible accuracy loss. It also turns out that it is sufficient to use short word sizes (n = 1 or n = 2). Soft expectation maximization clustering is more effective than the hard expectation maximization, as it allows a read to be assigned to more than one cluster simultaneously. Application to a real dataset shows that the soft EM algorithm is especially effective in the context of HTS read assembly.

Methods

Kullback-Leibler divergence is the log likelihood for the word counts

The Kullback-Leibler (KL) divergence has a natural probability theory interpretation as a limiting case of the multinomial distribution. In particular, it was used in this way in the context of alignment-free sequence comparison in the work [12].

Under this model one assumes that the counted n-mers are drawn from a pool q, with the frequency of each n-mer being q_i, the index i numbering all possible n-mers and running from 1 to 4^n. In other words, the model assumes that the words in a sequence are independent, and the probability of appearance of a particular word at a given position is position independent. The probability of appearance of the word i at a given position is q_i, i = 1,…,4^n. Under these assumptions the probability of obtaining the n-mer count vector c is given by the multinomial distribution2:

P(c\,|\,q) = \frac{L!}{c_1! \cdots c_{4^n}!} \prod_{i=1}^{4^n} q_i^{c_i}.
(1)

We denote ∑_i c_i = L, the total number of words in a sequence. For sufficiently large counts one can use Stirling's approximation,

c! \simeq \sqrt{2\pi c}\,\left(\frac{c}{e}\right)^{c}, \qquad c \gg 1;

which yields

P(c\,|\,q) \simeq \sqrt{2\pi L}\,\prod_{i=1}^{4^n} \frac{1}{\sqrt{2\pi c_i}} \left(\frac{L q_i}{c_i}\right)^{c_i}.
(2)

Denote the normalized counts as

p_i = \frac{c_i}{L};

consequently, the log of the probability is

\log P(p\,|\,q) \simeq L \sum_{i=1}^{4^n} p_i \log\frac{q_i}{p_i} = -L\, D_{\mathrm{KL}}(p\,|\,q).
(3)

The KL divergence between the frequency distributions p and q is:

D_{\mathrm{KL}}(p\,|\,q) = \sum_{i=1}^{4^n} p_i \log\frac{p_i}{q_i}.
(4)

When the difference between p_i and q_i is small, this probability distribution reduces to the multivariate normal distribution3,

\log P(c\,|\,q) = L \sum_{i=1}^{4^n} p_i \log\left(1 + \frac{q_i - p_i}{p_i}\right) \simeq L \sum_{i=1}^{4^n} p_i \left[\frac{q_i - p_i}{p_i} - \frac{1}{2}\,\frac{(q_i - p_i)^2}{p_i^2}\right] \simeq -\sum_{i=1}^{4^n} \frac{(c_i - L q_i)^2}{2 L q_i}.
(5)

We have used the Taylor expansion for the natural logarithm:

\log(1+x) = x - \frac{1}{2}x^2 + O(x^3),
(6)

dropping the cubic and higher terms.

The interpretation of formula (3) in the context of clustering is as follows. When we have several candidate pools q^α (“centroids”), the KL divergence D_KL(p|q^α) determines, up to the factor -L, the log likelihood (and hence the log odds ratios between centroids) for a sequence with the count vector c to be attributed to centroid α. Therefore the maximum likelihood assignment to a centroid is obtained by minimizing the KL divergence. We employ this relation within the framework of expectation maximization clustering.
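A minimal sketch of this assignment rule follows: a read is attributed to the centroid q that minimizes D_KL(p|q), i.e. maximizes the log likelihood -L·D_KL(p|q) of Equation (3). The function names and the small pseudocount used to avoid log(0) are our own choices, not prescribed by the text.

    import numpy as np

    def kl_divergence(p, q):
        mask = p > 0                                  # terms with p_i = 0 contribute nothing
        return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

    def assign_to_centroid(counts, centroids, pseudocount=1e-9):
        L = counts.sum()
        p = counts / L                                # normalized word frequencies of the read
        scores = [kl_divergence(p, q + pseudocount) for q in centroids]
        return int(np.argmin(scores))                 # index of the maximum likelihood centroid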

Expectation maximization clustering

The problem of clustering is the challenge of partitioning a dataset of N points into K subsets (clusters) such that the similarity between the points within each subset is maximized, and the similarity between different subsets is minimized. The measure of similarity can vary depending on the data. Generally the clustering problem is computationally hard. However, there exist heuristic clustering algorithms that run in polynomial time. Most common clustering approaches are hierarchical clustering, k-means type (centroid based) clustering [13] and density based clustering [14]. Each of these approaches possesses its own advantages and disadvantages.

Hierarchical clustering does not require one to specify the number of clusters a priori. Instead it produces the linkage tree, and the structure of the tree (in particular, branch lengths) determines the optimal number of clusters. However, the complexity of hierarchical clustering is at least O(N²). Density based algorithms (e. g., DBSCAN [14]) can find clusters of arbitrary shape as well as identify outliers. They do not require the prior specification of the number of clusters. The run time is O(N log N). Centroid based approaches (k-means, expectation maximization) have a linear run time. Prior specification of the number of clusters is required, and results depend on the initialization of the algorithm.

In the present work we focus on the centroid based technique. Our rationale for this is as follows. First, there exists a natural likelihood function for the word counts, which allows one to perform EM clustering. Also, the space of word counts possesses a natural notion of a centroid: for a set of sequences which belong to the same cluster one adds up all their word counts, and the resulting frequencies yield the cluster centroid. Second, linear run time is critical for large datasets (in particular, HTS data).

EM is a generalization of the k-means algorithm. The number of clusters K needs to be specified in advance. For the execution of the algorithm on N sequences one needs the following variables: centroids q^α, α = 1,…,K, and assignments (“latent data”) z_a, a = 1,…,N. The algorithm consists of two steps repeated iteratively until it converges.

  1. Expectation step: given the current centroids q^α, compute the new values of z_a so that the log likelihood L is maximized.

  2. Maximization step: given the current assignments z_a, compute the new values of q^α so that the log likelihood L is maximized.

This procedure guarantees that the log likelihood is non-decreasing at each step. Note that Equation (3) implies that the log likelihood is bounded from above by zero. These two facts imply that the algorithm converges4. In terms of the variables q^α and z_a the log likelihood is

\mathcal{L} = -\sum_{a=1}^{N} L_a\, D_{\mathrm{KL}}(p^a\,|\,q^{z_a}) = -\sum_{a=1}^{N} L_a \sum_{i=1}^{4^n} p_i^a \log\frac{p_i^a}{q_i^{z_a}}.
(7)

We denote the total number of words in the a'th sequence as L_a. Consequently, the expectation step reassigns each point to its closest centroid:

z_a = \arg\min_{\alpha} D_{\mathrm{KL}}(p^a\,|\,q^{\alpha}).
(8)

Centroids are updated during the maximization step as follows:

q_i^{\alpha} = \frac{\sum_{a=1}^{N} \delta_{z_a,\alpha}\, c_i^a}{\sum_{a=1}^{N} \sum_{j=1}^{4^n} \delta_{z_a,\alpha}\, c_j^a}.
(9)

Here we have introduced the Kronecker delta symbol:

\delta_{\alpha\beta} = \begin{cases} 1, & \alpha = \beta \\ 0, & \alpha \neq \beta \end{cases}
(10)

This prescription exactly corresponds to the natural notion of a centroid: one adds up all the word counts within a cluster to obtain the total count vector and normalizes this vector. A detailed derivation of Equation (9) is presented in Appendix 1.

The EM algorithm depends on initialization. In other words, depending on the initial centroid assignment the algorithm may converge to a partitioning that is only locally optimal. One of the ways to minimize the impact of the random initialization is to perform clustering several times using different initializations. This results in several partitionings, and then the one which maximizes the likelihood function is chosen. In the framework of k-means clustering selecting the partitioning with the minimal distortion leads to such maximization. Distortion is the sum of the intra-cluster variances for all the clusters. Using KL divergence as a likelihood function, one arrives at the modified definition of distortion:

D = \sum_{a} L_a\, D_{\mathrm{KL}}(p^a\,|\,q^{z_a}).
(11)

Note that in the limit where the likelihood function reduces to the Gaussian one, our EM algorithm reduces to the Gaussian mixture EM. In this case, in light of formula (5), our definition of distortion reduces to the regular one.
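A compact sketch of the hard EM iteration of Equations (8), (9) and (11) is given below for a count matrix C of shape (N, 4^n). The pseudocount (which gives an empty cluster a uniform centroid rather than failing), the random initialization and the convergence test are our own choices.

    import numpy as np

    def hard_em(C, K, n_iter=100, seed=0, pseudocount=1e-9):
        rng = np.random.default_rng(seed)
        N = C.shape[0]
        L = C.sum(axis=1)                          # L_a, total word count of each sequence
        P = C / L[:, None]                         # normalized frequencies p_i^a
        Psafe = np.where(P > 0, P, 1.0)            # avoids log(0); zero-count terms contribute 0
        z = rng.integers(0, K, size=N)             # random initial assignments
        kl = np.zeros((N, K))
        for _ in range(n_iter):
            # Maximization step, Equation (9): centroid = normalized summed counts of its members
            Q = np.vstack([C[z == k].sum(axis=0) + pseudocount for k in range(K)])
            Q = Q / Q.sum(axis=1, keepdims=True)
            # Expectation step, Equation (8): reassign each read to the closest centroid in the KL sense
            kl = (P[:, None, :] * np.log(Psafe[:, None, :] / Q[None, :, :])).sum(axis=2)
            z_new = kl.argmin(axis=1)
            if np.array_equal(z_new, z):
                break
            z = z_new
        distortion = float((L * kl[np.arange(N), z]).sum())   # Equation (11)
        return z, Q, distortion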

Alternative distance (pseudo-likelihood) functions

We also explore some other distance functions, such as d2 and d2* [6, 15, 16]. We are not aware of their direct probabilistic interpretation as a likelihood function. Nevertheless, they represent distances; i. e., they can serve as some measure of the degree of dissimilarity between two sequences. One can operate in terms of a distortion function as a measure of the quality of the partitioning. In the case of the EM or k-means clustering, the distortion equals the negative log likelihood. If one can prove that the analogs of both the expectation and the maximization steps lead to a decrease of the distortion, this provides the basis for the convergence of the clustering algorithm.

d2 distance

The d2 distance between two vectors is defined as 1 − cos θ, where θ is the angle between these vectors:

d_2(c, q) = 1 - \frac{c \cdot q}{\|c\|\,\|q\|}.
(12)

Here ‖v‖ denotes the norm of the vector v:

\|v\| = \sqrt{\sum_{i=1}^{4^n} v_i^2},
(13)

and the dot denotes the dot product:

c \cdot q = \sum_{i=1}^{4^n} c_i q_i.
(14)

One can define the distortion function as

D = \sum_{a=1}^{N} d_2(c^a, q^{z_a}) = N - \sum_{a=1}^{N} \frac{c^a \cdot q^{z_a}}{\|c^a\|\,\|q^{z_a}\|}.
(15)

In the context of the d2 distance it is natural to normalize the word counts for centroids and individual sequences so that they have a unit norm: ‖p‖ = ‖q‖ = 1.

EM algorithm can be generalized to use the d2 distance as follows. During the expectation step one assigns each sequence to the closest (in terms of the d2 distance) centroid. During the maximization step one updates the centroids as follows:

q^{\alpha} = \frac{\sum_{a=1}^{N} \delta_{z_a\alpha}\, p^a}{\left\|\sum_{a=1}^{N} \delta_{z_a\alpha}\, p^a\right\|}.
(16)

We assume that the word counts for individual sequences are normalized so that ‖p^a‖ = 1. Equation (16) is derived in Appendix 1. This procedure ensures that at each step the distortion is non-increasing. The distortion is bounded from below by zero. These two facts ensure the convergence of the algorithm. Equations (12) and (16) imply that the value of the d2 distance and the updated positions of the centroids depend only on the normalized word counts. Consequently, the algorithm makes no distinction between short and long sequences.
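The following is a sketch of one pass of the d2 based centroid algorithm of Equations (12) and (16), assuming the count vectors have already been scaled to unit norm; the guard against empty clusters is our own addition.

    import numpy as np

    def d2_step(P_unit, Q_unit):
        """One reassignment/update pass; rows of P_unit and Q_unit have unit L2 norm."""
        cos = P_unit @ Q_unit.T                      # cosine similarities; Equation (12) is 1 - cos
        z = cos.argmax(axis=1)                       # smallest d2 distance = largest cosine
        K = Q_unit.shape[0]
        Q_new = np.vstack([P_unit[z == k].sum(axis=0) for k in range(K)])
        norms = np.linalg.norm(Q_new, axis=1, keepdims=True)
        Q_new = Q_new / np.where(norms > 0, norms, 1.0)   # Equation (16); empty clusters left at zero
        return z, Q_new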

d2* distance

The D2 distance was introduced in works [15, 16]. Its modification with a suitable normalization for comparing short sequences was introduced in work [6] and called d2*. This distance involves computation of the expected word frequencies using the zero order Markov model and standardization of the observed word counts. In the context of centroid based clustering it can be formulated as follows.

  1. For a given cluster, count the frequencies of single nucleotides (1-mers) within the union of all sequences within the cluster.

  2. Compute the vector Q of expected n-mer frequencies using the zero order Markov model. Under this prescription the expected frequency of an n-mer is the product of the frequencies of its individual characters.

  3. For a vector of raw counts x define the corresponding standardized vector x̃ as

     \tilde{x}_i = \frac{x_i - Q_i \sum_{j=1}^{4^n} x_j}{\sqrt{Q_i \sum_{j=1}^{4^n} x_j}}.
     (17)

  4. Denote the word count vector of all sequences within a cluster as x; then the distance between the centroid of this cluster and a sequence with the word count vector c is

     d_2^{*}(c, x) = \frac{1}{2}\left(1 - \frac{\tilde{c} \cdot \tilde{x}}{\|\tilde{c}\|\,\|\tilde{x}\|}\right).
     (18)

Update of the sequences' assignment to clusters is the analog of the expectation step. Update of the expected frequencies is the analog of the maximization step. A priori it is not obvious how to define the distortion so that both steps lead to a guaranteed decrease in distortion. We leave this question, as well as the proof of convergence, beyond the scope of the current work.
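A sketch of the d2* computation of Equations (17) and (18), following steps 1-4 above, is given here; expected_frequencies is our own helper implementing the zero order Markov model, and nucleotide_freqs is assumed to be a dictionary of 1-mer frequencies.

    import numpy as np
    from itertools import product

    def expected_frequencies(nucleotide_freqs, n):
        """Q_i: expected n-mer frequency as the product of single nucleotide frequencies."""
        words = ["".join(w) for w in product("ACGT", repeat=n)]
        return np.array([np.prod([nucleotide_freqs[ch] for ch in w]) for w in words])

    def standardize(x, Q):
        total = x.sum()
        return (x - Q * total) / np.sqrt(Q * total)          # Equation (17)

    def d2_star(c, x, Q):
        c_t, x_t = standardize(c, Q), standardize(x, Q)
        cos = (c_t @ x_t) / (np.linalg.norm(c_t) * np.linalg.norm(x_t))
        return 0.5 * (1.0 - cos)                             # Equation (18)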

χ2 distance

The standardization procedure defined in Equation (17) is inspired by the Poisson distribution, where the mean equals the variance. Following a similar logic, we introduce the χ2 distance:

\chi^2(c, Q) = \sum_{i=1}^{4^n} \frac{(c_i - Q_i L)^2}{Q_i L}, \qquad L = \sum_{i=1}^{4^n} c_i.
(19)

Despite the apparent similarity of this definition with Equation (5), the frequency vector Q is the expected vector computed from the zero order Markov model (in the same way as it was computed in the calculation of the d2* distance).
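For completeness, a direct transcription of Equation (19), with Q as the zero order Markov expectation used above:

    import numpy as np

    def chi2_distance(c, Q):
        L = c.sum()
        return float(np.sum((c - Q * L) ** 2 / (Q * L)))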

Symmetrized Kullback-Leibler divergence

This distance is the symmetrized Kullback-Leibler divergence:

D_{\mathrm{KL}}^{S}(p\,|\,q) = \sum_{i=1}^{4^n} (p_i - q_i) \log\frac{p_i}{q_i}.
(20)

It assumes that p and q are normalized frequency vectors:

\sum_{i=1}^{4^n} p_i = \sum_{i=1}^{4^n} q_i = 1.
(21)

Consensus clustering

Centroid based and hierarchical clustering techniques can be combined in consensus clustering. In this approach centroid based clustering is performed a number of times, each time randomly selecting a fraction of samples into the bootstrap dataset. After that the distance matrix is formed as

D_{ij} = 1 - \frac{\#(\text{times } i \text{ and } j \text{ were clustered together})}{\#(\text{times } i \text{ and } j \text{ were in the same dataset})}.
(22)

Hierarchical clustering is then performed with the distance matrix D_ij. This approach is computationally expensive, as the complexity of the distance matrix construction is O(N²), and the complexity of hierarchical clustering using average linkage is O(N² log N) for an input of N sequences.
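A sketch of this consensus procedure follows: repeated clustering of bootstrap subsets, the co-clustering distance matrix of Equation (22), then average linkage. The helper cluster_once stands for any of the centroid based methods and is an assumption, not part of afcluster; the subsample fraction and number of rounds are illustrative.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import squareform

    def consensus_cluster(C, K, cluster_once, n_rounds=50, subsample=0.8, seed=0):
        rng = np.random.default_rng(seed)
        N = C.shape[0]
        together = np.zeros((N, N))
        same_set = np.zeros((N, N))
        for _ in range(n_rounds):
            idx = rng.choice(N, size=int(subsample * N), replace=False)
            z = np.asarray(cluster_once(C[idx], K))      # labels for the bootstrap subset
            same_set[np.ix_(idx, idx)] += 1
            together[np.ix_(idx, idx)] += (z[:, None] == z[None, :]).astype(float)
        with np.errstate(invalid="ignore"):
            D = 1.0 - together / same_set                # Equation (22)
        D = np.nan_to_num(D, nan=1.0)                    # pairs never sampled together
        np.fill_diagonal(D, 0.0)
        Z = linkage(squareform(D, checks=False), method="average")
        return fcluster(Z, K, criterion="maxclust")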

Recall rate

Consider a set of HTS reads originating from several genes (contigs). Grouping together reads originating from the same gene provides a natural partitioning of the read set. The recall rate is a measure of how well the clustering agrees with this natural partitioning. In other words, the recall rate provides a measure of how well the reads from the same contig cluster together. It is defined as follows. Consider reads originating from some gene G. For example, if the number of clusters is K = 4 and 40% of reads from G are assigned to cluster 1, 20% to cluster 2, 10% to cluster 3 and 30% to cluster 4, the recall rate is R_G = 40%.

Generally, assume that there are K clusters, and consider reads originating from some gene G. Denote by f_k the fraction of all reads originating from G which are assigned to cluster k. The recall rate for gene G is

R_G = \max(f_1, \ldots, f_K).
(23)

The recall rate provides a measure of how much clustering interferes with assembly. In particular, when the recall rate is R_G = 1, all reads from gene G get assigned to the same cluster, and the contig for G can be assembled from just one cluster with no losses.
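The recall rate of Equation (23) reduces to a few lines; the function name is ours.

    from collections import Counter

    def recall_rate(cluster_labels_of_gene_reads):
        counts = Counter(cluster_labels_of_gene_reads)
        return max(counts.values()) / len(cluster_labels_of_gene_reads)

    # Example from the text: 40%, 20%, 10% and 30% of the reads in clusters 1-4 gives 0.4.
    print(recall_rate([1] * 40 + [2] * 20 + [3] * 10 + [4] * 30))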

We performed a numerical experiment to estimate the dependence of the recall rate on the read length and the clustering method. We generated 50 sets of human RNA sequences, each set containing 1000 sequences randomly chosen from the set of the reference sequences. We required that the length of each sequence is at least 500 bp and at most 10000 bp. After that we simulated reads of length 30, 50, 75, 100, 150, 200, 250, 300 and 400 bp from each of these 50 sets using Mason [17]. Each read set contained 100000 reads. Mason options used were illumina -N 100000 -n READLENGTH -pi 0 -pd 0 -pmm 0 -pmmb 0 -pmme 0. This way we obtained a total of 450 simulated read sets: one set for each of the 50 gene sets and 9 values of the read length. To study the dependence of the recall rate on the sequencing error rate, for each of the 50 gene sets we generated 100000 reads of length 200 with error rates 0.001, 0.005, 0.01, 0.02, 0.03, 0.04 and 0.05. Mason options used were illumina -N 100000 -n 200 -pi 0 -pd 0 -pmm ERRORRATE -pmmb ERRORRATE -pmme ERRORRATE. This way we obtained 350 simulated read sets: one set for each of the 50 gene sets and 7 values of the error rate. To study the dependence of the recall rate on the depth of coverage (total number of reads) we simulated read sets with 200000, 150000, 100000, 75000, 50000, 30000, 20000, 10000 and 5000 reads. Mason options used were mason illumina -N NUMREADS -n 200 -pi 0 -pd 0 -pmm 0 -pmmb 0 -pmme 0. This way we obtained 450 simulated read sets: one read set for each of the 50 gene sets and 9 values of the number of reads.

We performed hard EM clustering, k-means clustering, L2 clustering and d2 clustering and computed the recall rate for each gene in each read set. The results show that the EM algorithm exhibits a higher recall rate than the k-means algorithm. For k-means clustering we used the implementation available in the scipy [18] package.
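As a minimal illustration of the scipy based k-means route (whitening followed by k-means), assuming a frequency matrix P with one row of n-mer frequencies per read; the random matrix here is only a stand-in:

    import numpy as np
    from scipy.cluster.vq import whiten, kmeans2

    P = np.random.dirichlet(np.ones(16), size=1000)   # stand-in for 2-mer frequencies of 1000 reads
    centroids, labels = kmeans2(whiten(P), 4, minit="points")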

Soft EM clustering

For the execution of the algorithm one needs the following variables: centroids q^α and probabilities Z_a^α for observation point a to be associated with cluster α. The EM algorithm iteratively updates the probabilities Z_a^α starting from the centroid locations, and then updates the centroid locations q^α using the updated probabilities Z_a^α. These steps are performed as follows.

Given a set of centroids qα and observations (count vectors) ca, the probability for observation a to be associated with centroid α is

Z_a^{\alpha} = \frac{P(p^a\,|\,q^{\alpha})}{\sum_{\beta} P(p^a\,|\,q^{\beta})},
(24)

as it follows from Bayes’ theorem. In the “soft” EM algorithm Z a α can take fractional values, calculated according to Equation (24)5.

Given the probabilities Z a α , one updates centroid locations by maximizing the log likelihood expectation

\mathcal{L}(q) = \mathbb{E}\left[\log P(p\,|\,q)\right].
(25)

When written explicitly it becomes

\mathcal{L} = -\sum_{\alpha=1}^{K} \sum_{a=1}^{N} Z_a^{\alpha}\, L_a\, D_{\mathrm{KL}}(p^a\,|\,q^{\alpha}).
(26)

Here we denote the number of clusters by K and the number of sequences by N. In our conventions Greek index α runs over the different clusters, Latin index a runs over different sequences, and Latin index i runs over different n-mers. As derived in Appendix 1, centroids are computed as follows:

q_i^{\alpha} = \frac{1}{\Lambda^{\alpha}} \sum_{a=1}^{N} Z_a^{\alpha}\, c_i^a,
(27)

where

\Lambda^{\alpha} = \sum_{i=1}^{4^n} \sum_{a=1}^{N} Z_a^{\alpha}\, c_i^a.
(28)

Note that Equation (27) conforms to the intuitive notion of centroid in terms of the word counts. Namely, word counts from all the sequences in the cluster are added up (with some weights in soft EM), and the resulting frequencies are calculated.

As explained, the soft EM algorithm assigns read a to cluster α with some probability Z_a^α. The choice of a confidence threshold ε produces a set of clusters: read a is a member of cluster α if Z_a^α ≥ ε. Note that the clusters are possibly overlapping; i. e., one read can be assigned to multiple clusters simultaneously.
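A sketch of one soft EM iteration following Equations (24), (27) and (28) is shown below. The responsibilities use P(p^a|q^α) proportional to exp(-L_a · D_KL(p^a|q^α)), as suggested by Equation (3); the log-sum-exp stabilization and the pseudocount are our own numerical choices.

    import numpy as np

    def soft_em_step(C, Q, pseudocount=1e-9):
        L = C.sum(axis=1)                              # L_a
        P = C / L[:, None]
        Psafe = np.where(P > 0, P, 1.0)
        kl = (P[:, None, :] * np.log(Psafe[:, None, :] / (Q[None, :, :] + pseudocount))).sum(axis=2)
        loglik = -L[:, None] * kl                      # log P(p^a | q^alpha) up to an a-dependent constant
        loglik -= loglik.max(axis=1, keepdims=True)    # stabilize before exponentiating
        Z = np.exp(loglik)
        Z /= Z.sum(axis=1, keepdims=True)              # Equation (24)
        Q_new = Z.T @ C                                # numerator of Equation (27)
        Q_new /= Q_new.sum(axis=1, keepdims=True)      # division by Lambda^alpha, Equation (28)
        return Z, Q_new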

ROC curve for soft EM clustering

Consider a group of reads coming from the same origin (e. g., the same gene G or the same organism). A perfect alignment-free classification would assign them to a single cluster (possibly containing some other reads). Let us assume that we know the origin of each read. A choice of some fixed value for the cutoff ε will assign each read to zero or more clusters. We consider the cluster which embraces the largest part of the reads from gene G to be the “correct” assignment for the reads originating from this gene. For example, assume that we have K = 4 (overlapping) clusters, containing 40%, 35%, 35% and 10% of the reads from G, respectively. Then the first cluster is the “correct” assignment that would be attributed to all the reads from gene G if the clustering algorithm were perfect.

The true positive rate (recall rate) is

\mathrm{TPR} = \frac{\#(\text{reads correctly assigned})}{\#(\text{reads})}.
(29)

We define the false positive rate as

\mathrm{FPR} = \frac{\#(\text{reads incorrectly assigned})}{\#(\text{reads})}.
(30)

A read is considered “incorrectly” assigned if it is assigned to at least one cluster different from the correct one. Note that for some values of the threshold ε the same read can be simultaneously assigned to a correct and an incorrect cluster, thus producing both a true and a false positive. In the limit ε → 0 each read is assigned to every cluster (FPR = TPR = 1). In the limit ε → 1 no read gets assigned to any cluster (FPR = TPR = 0).

The dependence of TPR on FPR as ε changes from 0 to 1 gives an analog of the ROC curve6. The performance of the algorithm is characterized by the area under the curve (AUC).
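A sketch of the threshold sweep described above follows: for each ε a read is a member of every cluster with Z ≥ ε, and TPR and FPR are computed per Equations (29) and (30). Here the “correct” cluster is approximated by the largest total responsibility, and AUC could then be obtained by trapezoidal integration; the function name and threshold grid are our own choices.

    import numpy as np

    def roc_points(Z_gene, thresholds=np.linspace(0.0, 1.0, 101)):
        """Z_gene: soft assignment matrix (reads of one gene x clusters)."""
        correct = Z_gene.sum(axis=0).argmax()          # dominant cluster for this gene (approximation)
        points = []
        for eps in thresholds:
            member = Z_gene >= eps
            tpr = member[:, correct].mean()                       # Equation (29)
            others = np.delete(member, correct, axis=1)
            fpr = others.any(axis=1).mean()                       # Equation (30)
            points.append((fpr, tpr))
        return points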

Assembly of real data

Reads from an Illumina run on cDNA of a nasal swab were taken. After filtering out the low quality and the low complexity reads, 21,568,249 100 bp single end reads were left. Velvet [19] assembly was performed with the default settings. The velveth command line was velveth Assem HASHLENGTH -short -fasta INPUTFILE. The velvetg command line was velvetg Assem. The values of the hash length were 21, 31 and 41. Assembly was performed on the complete read set as well as on subsets obtained as a result of alignment-free clustering of the reads. Hard clustering was performed 5 times, and the partitioning with the minimal distortion was chosen. Soft clustering was performed once. The confidence cutoff for the soft clustering was ε = 0.05. For every splitting of the read set all the contigs generated from the individual parts were merged together. After that the original reads were mapped back onto the contigs using bwa [20], allowing up to 2 mismatches. The number of reads which map back onto the contigs is a measure of the quality of the assembly. It takes care of the possible duplicate contigs which may be formed when assembling separate parts of the sample.

Sequence data

Reference sequences for human mRNA genes were obtained from the NCBI RefSeq ftp site, http://ftp.ncbi.nih.gov/refseq/H_sapiens/mRNA_Prot/. Data were downloaded from NCBI on Apr 09 2013. Sequences for the bacterial recA, dnaA, rpsA and 16S rRNA genes used in the simulation were extracted from the Streptococcus pneumoniae genome, [GenBank:NC_003028]. Viral sequences used in the simulation are [GenBank:NC_001477, NC_001943, NC_000883, NC_015783, NC_001806, NC_003977, NC_001802]. We concatenate all the segments of segmented viruses (rotavirus [GenBank:NC_011500, NC_011506, NC_011507, NC_011508, NC_011510, NC_011501, NC_011502, NC_011503, NC_011504, NC_011509, NC_011505], Lujo virus [GenBank:FJ952385, FJ952384] and influenza virus). For influenza virus we use the sequence of the vaccine strain, A/California/7/2009.

Endnotes

1 Having in mind application of the clustering methods to high throughput sequencing data, we use the words “read” and “sequence” interchangeably throughout the paper.

2 Our notations are as follows. We use boldface letters to denote the vectors in the 4^n-dimensional space of the word counts (e. g., p, q), and we use regular letters to denote the individual components of such vectors (e. g., p_i, q_i). We denote the vector of raw word counts by c; its components are integers. We denote the coordinates of a centroid by q, and the normalized word counts by p. Normalization of p and q means that

\sum_{i=1}^{4^n} p_i = \sum_{i=1}^{4^n} q_i = 1

unless otherwise specified. Also, P(c|q) and P(p|q) denote the likelihood of obtaining raw counts c or normalized counts p under our model if the sequence is assigned to the centroid with coordinates q. Note that P(c|q) and P(p|q) denote the same quantity. Either notation is used depending on the context.

3 Note the constraint ∑_i c_i = L.

4 Note that an empty cluster can be formed at one of the steps. In this case the algorithm fails.

5 Recall that in the “hard” EM algorithm Z_a^α can only be 0 or 1 (each point has to be assigned to exactly one cluster). In this case one finds the maximum likelihood estimate α(a) for each a and sets Z_a^β = δ_{β,α(a)}. Note that this might lead to the formation of an empty cluster after one of the iterations.

6 This dependence is not a ROC curve in the sense of the standard definition since the clustering does not generally produce a binary classification.

Appendix 1

Evaluation of centroids during the maximization step

Prescription for updating the centroids q^α during the maximization step can be derived as follows. One needs to maximize the log likelihood as defined in Equation (7) w. r. t. the variables q_i^α under the constraint ∑_i q_i^α = 1 for all α. The log likelihood function is a sum of log likelihood functions for the different clusters. One therefore can independently maximize it w. r. t. each centroid. Without a loss of generality one can assume that the sequences assigned to a given cluster are numbered 1,…,M. Maximizing the likelihood can be done with the help of introducing a Lagrange multiplier Λ and maximizing the new function:

\tilde{\mathcal{L}} = -\sum_{a=1}^{M} L_a \sum_{i=1}^{4^n} p_i^a \log\frac{p_i^a}{q_i} - \Lambda\left(\sum_{i=1}^{4^n} q_i - 1\right).
(31)

We have dropped the superscript α since we only consider one cluster and one centroid. Differentiating w. r. t. q_i (i = 1,…,4^n) and equating to zero yields a set of equations:

\frac{\partial \tilde{\mathcal{L}}}{\partial q_i} = \sum_{a=1}^{M} \frac{L_a\, p_i^a}{q_i} - \Lambda = 0.
(32)

These equations imply

q_i = \frac{\sum_{a=1}^{M} L_a\, p_i^a}{\Lambda}.
(33)

Normalization of q implies that

\Lambda = \sum_{a=1}^{M} \sum_{j=1}^{4^n} L_a\, p_j^a.
(34)

Substituting for Λ yields

q_i = \frac{\sum_{a=1}^{M} L_a\, p_i^a}{\sum_{b=1}^{M} \sum_{j=1}^{4^n} L_b\, p_j^b}.
(35)

Raw word counts c_i^a are related to the frequencies p_i^a as c_i^a = L_a p_i^a. This proves Equation (9).

Evaluation of centroids for the d2 distance

This derivation follows that for the EM clustering very closely. Distortion as defined by Equation (15) is a sum of distortions of different clusters. We can minimize the distortion in each cluster separately. Assuming that the word count vectors for each sequence have a unit norm, we can write the distortion within a cluster as

D = M - \sum_{a=1}^{M} p^a \cdot q, \qquad \|q\| = 1.
(36)

Again, without a loss of generality we assume that the sequences within a cluster are numbered from 1 to M. We need to minimize D under the constraint that ‖q‖² = 1. This can be achieved by minimizing the auxiliary function

\tilde{D} = M - \sum_{a=1}^{M} p^a \cdot q + \Lambda\left(\|q\|^2 - 1\right) = M - \sum_{a=1}^{M} \sum_{i=1}^{4^n} p_i^a q_i + \Lambda\left(\sum_{i=1}^{4^n} q_i^2 - 1\right).
(37)

Differentiating D̃ w. r. t. q_i and equating to zero yields

\frac{\partial \tilde{D}}{\partial q_i} = -\sum_{a=1}^{M} p_i^a + 2\Lambda q_i = 0.
(38)

This solves for q as7

q = \frac{1}{2\Lambda} \sum_{a=1}^{M} p^a, \qquad \Lambda = \frac{1}{2}\left\|\sum_{a=1}^{M} p^a\right\|.
(39)

This proves Equation (16).

Evaluation of centroids in the soft clustering

We use the same techniques as those used for the hard EM clustering. The difference is that now we cannot consider clusters independently as there is no “hard” assignment of each data point to a single cluster. We add Lagrange multipliers Λ^α to Equation (26) to account for the constraints ∑_i q_i^α = 1:

\tilde{\mathcal{L}} = -\sum_{\alpha=1}^{K} \sum_{a=1}^{N} Z_a^{\alpha} L_a \sum_{i=1}^{4^n} p_i^a \log\frac{p_i^a}{q_i^{\alpha}} - \sum_{\alpha=1}^{K} \Lambda^{\alpha}\left(\sum_{i=1}^{4^n} q_i^{\alpha} - 1\right).
(40)

Differentiating L̃ w. r. t. q_i^α and equating to zero yields the following set of equations:

\frac{\partial \tilde{\mathcal{L}}}{\partial q_i^{\alpha}} = \frac{1}{q_i^{\alpha}} \sum_{a=1}^{N} Z_a^{\alpha} L_a\, p_i^a - \Lambda^{\alpha} = 0.
(41)

These equations imply

q_i^{\alpha} = \frac{1}{\Lambda^{\alpha}} \sum_{a=1}^{N} Z_a^{\alpha} L_a\, p_i^a.
(42)

Imposing normalization constraints on q i α yields

\Lambda^{\alpha} = \sum_{a=1}^{N} \sum_{i=1}^{4^n} Z_a^{\alpha} L_a\, p_i^a.
(43)

This proves Equations (27) and (28).

Abbreviations

AUC:

Area under the curve

bp:

base pairs

EM:

Expectation maximization

FPR:

False positive rate

HTS:

High throughput sequencing

KL divergence:

Kullback-Leibler divergence

ROC:

Receiver operating characteristic

TPR:

True positive rate.

References

  1. Durbin R, Eddy S, Krogh A, Mitchison G: Biological Sequence Analysis: Probabilistic Models of Proteins and Nucleic Acids. 2006, Cambridge: Cambridge University Press

  2. Altschul SF, Gish W, Miller W, Myers EW, Lipman DJ: Basic local alignment search tool. J Mol Biol. 1990, 215 (3): 403-410. http://www.sciencedirect.com/science/article/pii/S0022283605803602

  3. Karlin S, Burge C: Dinucleotide relative abundance extremes: a genomic signature. Trends Genet. 1995, 11 (7): 283-290. doi:10.1016/S0168-9525(00)89076-9

  4. Karlin S, Mrázek J: Compositional differences within and between eukaryotic genomes. Proc Natl Acad Sci USA. 1997, 94 (19): 10227-10232. doi:10.1073/pnas.94.19.10227

  5. Vinga S, Almeida J: Alignment-free sequence comparison - a review. Bioinformatics. 2003, 19 (4): 513-523. doi:10.1093/bioinformatics/btg005

  6. Song K, Ren J, Zhai Z, Liu X, Deng M, Sun F: Alignment-free sequence comparison based on next-generation sequencing reads. J Comput Biol. 2013, 20 (2): 64-79. doi:10.1089/cmb.2012.0228

  7. Fofanov Y, Luo Y, Katili C, Wang J, Belosludtsev Y, Powdrill T, Belapurkar C, Fofanov V, Li T, Chumakov S, Pettitt B: How independent are the appearances of n-mers in different genomes? Bioinformatics. 2004, 20 (15): 2421-2428. doi:10.1093/bioinformatics/bth266

  8. Chor B, Horn D, Goldman N, Levy Y, Massingham T: Genomic DNA k-mer spectra: models and modalities. Genome Biol. 2009, 10 (10): R108. doi:10.1186/gb-2009-10-10-r108

  9. Sims G, Jun S, Wu G, Kim S: Alignment-free genome comparison with feature frequency profiles (FFP) and optimal resolutions. Proc Natl Acad Sci USA. 2009, 106 (8): 2677-2682. doi:10.1073/pnas.0813249106

  10. Greenbaum B, Levine A, Bhanot G, Rabadan R: Patterns of evolution and host gene mimicry in influenza and other RNA viruses. PLoS Pathog. 2008, 4 (6): e1000079. doi:10.1371/journal.ppat.1000079

  11. Rabadan R, Levine A, Robins H: Comparison of avian and human influenza A viruses reveals a mutational bias on the viral genomes. J Virol. 2006, 80 (23): 11887-11891. doi:10.1128/JVI.01414-06

  12. Trifonov V, Rabadan R: Frequency analysis techniques for identification of viral genetic data. MBio. 2010, 1 (3): e00156-10.

  13. Manning CD, Raghavan P, Schütze H: Introduction to Information Retrieval. 2008, New York: Cambridge University Press

  14. Ester M, Kriegel HP, Sander J, Xu X: A density-based algorithm for discovering clusters in large spatial databases with noise. Proceedings of the Second International Conference on Knowledge Discovery and Data Mining (KDD-96). Edited by: Simoudis E, Han J, Fayyad UM. 1996, Palo Alto: AAAI Press, 226-231.

  15. Reinert G, Chew D, Sun F, Waterman MS: Alignment-free sequence comparison (I): statistics and power. J Comput Biol. 2009, 16 (12): 1615-1634. doi:10.1089/cmb.2009.0198

  16. Wan L, Reinert G, Sun F, Waterman MS: Alignment-free sequence comparison (II): theoretical power of comparison statistics. J Comput Biol. 2010, 17 (11): 1467-1490. doi:10.1089/cmb.2010.0056

  17. Holtgrewe M: Mason - a read simulator for second generation sequencing data. Tech Rep TR-B-10-06, Institut für Mathematik und Informatik, Freie Universität Berlin. 2010, http://publications.mi.fu-berlin.de/962/2/mason.201009.pdf

  18. Jones E, Oliphant T, Peterson P, et al: SciPy: Open source scientific tools for Python. http://www.scipy.org/

  19. Zerbino DR, Birney E: Velvet: algorithms for de novo short read assembly using de Bruijn graphs. Genome Res. 2008, 18 (5): 821-829. doi:10.1101/gr.074492.107

  20. Li H, Durbin R: Fast and accurate short read alignment with Burrows-Wheeler transform. Bioinformatics. 2009, 25 (14): 1754-1760. doi:10.1093/bioinformatics/btp324


Acknowledgements

This work was supported by National Institutes of Health award AI57158 (Northeast Biodefence Center-Lipkin) and the Defense Threat Reduction Agency.

Author information

Corresponding author

Correspondence to Alexander Solovyov.

Additional information

Competing interests

Both authors declare that they have no competing interests.

Authors’ contributions

AS wrote the algorithms and performed the analyses. Both AS and WIL reviewed the data and wrote the manuscript. Both authors read and approved the final manuscript.

Electronic supplementary material

12859_2013_6074_MOESM1_ESM.gz

Additional file 1: Source code and documentation for the afcluster software. Also available from https://github.com/luscinius/afcluster. (GZ 17 KB)


Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Solovyov, A., Lipkin, W.I. Centroid based clustering of high throughput sequencing reads based on n-mer counts. BMC Bioinformatics 14, 268 (2013). https://doi.org/10.1186/1471-2105-14-268
