
Bayesian semiparametric regression models to characterize molecular evolution

Abstract

Background

Statistical models and methods that associate changes in the physicochemical properties of amino acids with natural selection at the molecular level typically do not take into account the correlations between such properties. We propose a Bayesian hierarchical regression model with a generalization of the Dirichlet process prior on the distribution of the regression coefficients that describes the relationship between the changes in amino acid distances and natural selection in protein-coding DNA sequence alignments.

Results

The Bayesian semiparametric approach is illustrated with simulated data and the abalone lysin sperm data. Our method identifies groups of properties which, for this particular dataset, have a similar effect on evolution. The model also provides nonparametric site-specific estimates for the strength of conservation of these properties.

Conclusions

The model described here is distinguished by its ability to handle a large number of amino acid properties simultaneously, while taking into account that such data can be correlated. The multi-level clustering ability of the model allows for appealing interpretations of the results in terms of properties that are roughly equivalent from the standpoint of molecular evolution.

Background

The structural and functional role of a codon in a gene determines its ability to change freely. For example, nonsynonymous (amino acid altering) substitutions may not be tolerated at certain codon sites due to strong negative selection, while at other sites some nonsynonymous substitutions may be allowed if they do not affect key physicochemical properties associated with protein function [1]. Thus, at such preferentially changing sites, substitutions occur more frequently between physicochemically similar amino acids (or the codons which lead to those amino acids) than between dissimilar ones [2–4]. Methods that use changes in physicochemical amino acid properties have thus been proposed in the study of evolution. For example, [5–7] use distances to calculate deviations from neutrality for a particular amino acid property. Alternative approaches model the evolution of protein coding sequences as continuous-time Markov chains with rate matrices that distinguish between property-altering and property-conserving mutations, as in [8] and [9]. More recently, [10] proposed a Bayesian hierarchical regression model that compares the observed amino acid distances to the expected distances under neutrality for a given set of amino acid properties and incorporates mixture priors for variable selection. The hierarchical mixture priors enable the model in [10] to identify neutral, conserved and radically changing sites, while automatically adjusting for multiple comparisons and borrowing information across properties and sites.

A common feature of all the methods listed above is the implicit assumption that properties are independent of each other in terms of their effect on evolution. A review of the amino acid index database (available, for example, at http://www.genome.jp/dbget/aaindex.html), which lists more than 500 amino acid properties, shows that a large number of them are highly correlated. Although the correlations we observe in the data can differ from those computed from the raw amino acid scores due to the influence of factors such as codon bias, by ignoring these correlations we are also ignoring the fact that correlated properties may affect a particular site in similar ways. Hence, approaches that do not take into account the correlations in the rates of mutations on different codons do not make use of key information about the relative importance of different physicochemical properties on molecular evolution.

A natural way to account for correlations in the data is by considering a factor structure; see, for example, [11]. However, selecting the number and order of the factors can be a difficult task in this type of factor model. In addition, the particular structure of the model in [11] makes it difficult to incorporate the effect of the factors on regions that are very strongly conserved. This paper extends the Bayesian hierarchical regression model in [10] by placing a nonparametric prior on the distribution of the regression coefficients describing the effect of properties on molecular evolution. The prior is an extension of the well known Dirichlet process prior [12, 13] to model separately exchangeable arrays [14, 15]. As in [10], the main goal of the model described in this paper is to identify sites that are either strongly conserved or radically changing. In order to account for correlations across properties, our model clusters properties with similar effects on evolution and, within each such group, clusters sites with similar regression coefficients and nonparametrically estimates their distribution. In addition to accounting for correlations across properties, this structure allows us to dramatically reduce the number of parameters in the model and generate interpretable insights about molecular evolution at the codon level.

Although the clusters of properties can in principle be considered nuisance parameters that are of no direct interest, in practice posterior inference on the clustering structure can provide interesting insights about the molecular evolution process of a given gene. Indeed, as will become clear in the following sections, our approach incorporates the effect of amino acid usage bias. Hence, any significant difference between the cluster structure estimated from the observed protein-coding sequence alignment and the correlation structure derived from the raw distances between the properties in such a cluster can be interpreted as a signal of extreme amino acid usage bias in that particular region of the genome.

The rest of the paper is organized as follows. A brief review of DP mixture models, along with the details of our model, is provided in the Methods section. This section also includes a review of some currently available methods for characterizing molecular evolution that take into account changes in amino acid properties. The model is then evaluated via simulation studies and illustrated through a real data example. The simulated and real data analyses, as well as comparisons between the proposed semiparametric regression approach and other methods, are presented in Results and discussion. Finally, the Conclusions section provides our concluding remarks.

Methods

Dirichlet process mixture models

The Dirichlet process (DP) was formally introduced by [12] as a prior probability model for random distributions G. A DP(ρ, G_0) prior for G is characterized by two parameters: a positive scalar ρ and a parametric base distribution (or centering distribution) G_0. The parameter ρ can be interpreted as a precision parameter, with larger values of ρ resulting in realizations of G that are closer to the base distribution G_0.

One of the most commonly used definitions of the DP is its constructive definition [13], which characterizes DP realizations as countable mixtures of point masses. Specifically, a random distribution G generated from a DP(ρ, G_0) is almost surely of the form

G(\cdot) = \sum_{l=1}^{\infty} w_l \, \delta_{\phi_l}(\cdot),

where δ_{ϕ_l}(·) denotes a point mass at ϕ_l. The locations ϕ_l are i.i.d. draws from G_0, while the corresponding weights w_l are generated using the following “stick-breaking” mechanism. Let w_1 = v_1 and define w_l = v_l ∏_{r=1}^{l−1}(1 − v_r) for l = 2, 3, …, where {v_l : l = 1, 2, …} are i.i.d. draws from a Beta(1, ρ) distribution. Defining the weights in this way ensures ∑_{l=1}^{∞} w_l = 1. Furthermore, the sequences {v_l : l = 1, 2, …} and {ϕ_l : l = 1, 2, …} are independent.
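To make the construction concrete, the following minimal R sketch (not part of the paper's original code; the truncation level, the value of ρ and the standard normal base distribution are illustrative assumptions) draws a truncated stick-breaking approximation to a realization from DP(ρ, G_0):

## Truncated stick-breaking draw from DP(rho, G0), with G0 = N(0, 1).
## The truncation level L_trunc and rho = 1 are illustrative choices.
rho     <- 1
L_trunc <- 50
v <- rbeta(L_trunc, 1, rho)                 # stick-breaking fractions v_l ~ Beta(1, rho)
v[L_trunc] <- 1                             # forces the truncated weights to sum to one
w <- v * cumprod(c(1, 1 - v[-L_trunc]))     # w_l = v_l * prod_{r < l} (1 - v_r)
phi <- rnorm(L_trunc)                       # atom locations drawn i.i.d. from G0
## A sample from the resulting discrete G picks atoms with probability w_l.
theta <- sample(phi, size = 10, replace = TRUE, prob = w)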

The DP is most often used to model the distribution of random effects in hierarchical models. In the simplest case where no covariates are present, these models reduce to nonparametric mixture models (e.g., [16–18]). Assume that we have an independent sample of observations y_1, y_2, …, y_n such that y_i | θ_i ~ k(·; θ_i), independently, where k(·; θ_i) is a parametric density. Then, the DP mixture model places a DP prior on the θ_i as

\theta_i \mid G \overset{\text{i.i.d.}}{\sim} G, \quad i = 1, \ldots, n, \qquad G \mid \rho \sim \mathrm{DP}(\rho, G_0).

The almost sure discreteness of realizations of G from the DP prior allows ties among the θ_i, making DP mixture models appealing in applications where clustering is expected. The clustering nature is easier to see from the Pólya urn characterization of the DP [19], which gives the induced joint distribution of the θ_i by marginalizing G over its DP prior. Under that representation, we can write θ_i = θ*_{ξ_i}, where θ*_1, θ*_2, … is an independent and identically distributed sample from G_0 and the indicators ξ_1, …, ξ_n are sequentially generated with ξ_1 = 1 and

\Pr(\xi_{i+1} = k \mid \rho, \xi_i, \ldots, \xi_1) =
\begin{cases}
\dfrac{r_k^i}{i + \rho} & \text{if } k \le \max_{j \le i} \{\xi_j\}, \\[1.5ex]
\dfrac{\rho}{i + \rho} & \text{if } k = \max_{j \le i} \{\xi_j\} + 1,
\end{cases}

where r_k^i = \sum_{j=1}^{i} I(\xi_j = k) and I(\xi_j = k) = 1 if \xi_j = k and 0 otherwise.

One advantage of DP mixture models over other approaches to clustering and classification is that they allow us to automatically estimate the number of components in the mixture. Indeed, from the Pólya urn representation of the process it should be clear that, although the number of potential mixture components is infinite, the model implicitly places a prior on the number of components that, for moderate values of ρ, favors the data being generated by an effective number of components K = max_{i ≤ n} {ξ_i} < n.
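The following minimal R sketch (an illustration under the assumed values n = 20 and ρ = 1, with a hypothetical helper name rpolya_urn) generates indicators sequentially from the Pólya urn scheme above and reports the effective number of clusters:

## Sequential Polya urn generation of cluster indicators xi_1, ..., xi_n.
rpolya_urn <- function(n, rho) {
  xi <- integer(n)
  xi[1] <- 1
  for (i in 1:(n - 1)) {
    k_max  <- max(xi[1:i])
    counts <- tabulate(xi[1:i], nbins = k_max)   # r_k^i, the size of each existing cluster
    probs  <- c(counts, rho) / (i + rho)         # existing clusters vs. a new cluster
    xi[i + 1] <- sample(k_max + 1, size = 1, prob = probs)
  }
  xi
}
xi <- rpolya_urn(n = 20, rho = 1)
K  <- max(xi)   # effective number of clusters, typically much smaller than n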

The model

Our data consist of observed and expected amino acid distances derived from a DNA sequence alignment, a specific phylogeny, a stochastic model of sequence evolution, and a predetermined set of physicochemical amino acid properties. In the analyses presented here, we disregard uncertainty at the alignment/phylogeny/ancestral sequence level, since our main focus is the development and implementation of models that allow us to make inferences on the latent effects that several amino acid properties may have on molecular evolution for a given phylogeny and an underlying model of sequence evolution. Extensions of these analyses that take into account these uncertainties are briefly described in Conclusions. For further discussion on this issue, see also [10].

In order to calculate the observed distances, we first infer the ancestral sequences under a specific substitution model and a given phylogeny. In our applications, we use PAML version 3.15 [20] and the codon substitution model of [21], which accounts for the possibility of multiple substitutions at a given site. Nonsynonymous substitutions are then counted by comparing DNA sequences between two neighboring nodes in the phylogeny. The observed mean distance, denoted by y_{i,j} for site i and property j, is obtained as the mean absolute difference in the property scores over all nonsynonymous substitutions at site i. Only those sites with at least one nonsynonymous change inferred from the ancestral reconstruction are retained for further analysis.

To compute the expected distances, note that each codon can mutate to one of at most nine alternative codons through a single nucleotide substitution [5], only some of which are nonsynonymous (changes to stop codons are ignored). Let N_k be the number of nonsynonymous mutations possible through a single nucleotide change for a particular codon k (k = 1, …, 61). Let D^{i,j}_{k,l}, for l = 1, …, N_k, be the absolute difference in property j between nonsynonymous codon pairs at site i differing at one codon position. The frequency of codon k at a particular site i in the DNA sequence under study is denoted by F^i_k. Then, the expected mean distance for a particular site i and a given property j is given by

x_{i,j} \equiv DE_{i,j} = \frac{\sum_{k=1}^{61} F_k^i \sum_{l=1}^{N_k} D_{k,l}^{i,j}}{\sum_{k=1}^{61} F_k^i N_k}.
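As an illustration of this calculation, the R sketch below computes x_{i,j} for a single site and property from codon frequencies and the corresponding nonsynonymous property differences; the function name expected_distance and the data layout are our own illustrative assumptions, not part of the paper's code:

## freq:  vector of length 61 with the codon frequencies F_k^i at site i
## dists: list of length 61; dists[[k]] contains the absolute property
##        differences D_{k,l}^{i,j} for the N_k nonsynonymous single-nucleotide
##        neighbours of codon k (an empty vector if N_k = 0)
expected_distance <- function(freq, dists) {
  n_k   <- vapply(dists, length, integer(1))          # N_k for each codon
  numer <- sum(freq * vapply(dists, sum, numeric(1))) # sum_k F_k^i * sum_l D_{k,l}^{i,j}
  denom <- sum(freq * n_k)                            # sum_k F_k^i * N_k
  numer / denom
}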

We consider a hierarchical regression model that relates x_{i,j} to y_{i,j} and allows us to compare the expected and observed distances at the codon level for several properties simultaneously, with the following rationale: if a given site i is neutral with respect to property j, then y_{i,j} ≈ x_{i,j}; if property j is conserved at site i, then y_{i,j} ≪ x_{i,j}; and, finally, if property j is radically changing at site i, then y_{i,j} ≫ x_{i,j}.

To construct our model, we first standardize the distances x_{i,j} and y_{i,j} by dividing them by the maximum possible distance for each property. This enables us to use priors with the same scale for all the regression coefficients. Our regression model for the standardized distances y_{i,j} and x_{i,j}, for sites i = 1, …, I and properties j = 1, …, J, can be written as

y_{i,j} \mid \beta_{i,j}, \sigma^2_{i,j} \sim
\begin{cases}
N(\beta_{i,j} x_{i,j}, \; \sigma^2_{i,j}) & \text{if } \beta_{i,j} = 0, \\
N(\beta_{i,j} x_{i,j}, \; \sigma^2_{i,j}/n_i^O) & \text{if } \beta_{i,j} \neq 0,
\end{cases}
(1)

where n_i^O is the observed number of nonsynonymous changes at a particular site i, and β_{i,j} and σ²_{i,j} are the regression coefficient and variance parameter associated with site i and property j. The mixture model accounts for the fact that some of the y_{i,j} can be equal to zero, since some nonsynonymous changes do not alter the value of the property being measured (e.g., asparagine, aspartic acid, glutamine and glutamic acid all have the same hydropathy score).
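A minimal R sketch of this likelihood for a single (site, property) pair is given below; all numeric values are illustrative assumptions chosen only to show the two variance regimes in equation (1):

## Simulate y_{i,j} given beta_{i,j}, sigma2_{i,j} and the observed number of
## nonsynonymous changes n_i^O, following equation (1).
r_yij <- function(x_ij, beta_ij, sigma2_ij, n_obs) {
  if (beta_ij == 0) {
    rnorm(1, mean = 0, sd = sqrt(sigma2_ij))                       # conserved: mean zero, full variance
  } else {
    rnorm(1, mean = beta_ij * x_ij, sd = sqrt(sigma2_ij / n_obs))  # variance scaled by n_i^O
  }
}
y <- r_yij(x_ij = 0.4, beta_ij = 1, sigma2_ij = 0.01, n_obs = 3)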

To complete the model, we need to describe a model for the matrix of regression coefficients β_{i,j}. There are a number of possible models for this type of data which utilize Bayesian nonparametric methods; some recent examples include the infinite relational model (IRM) [22, 23], the matrix stick-breaking process (MSBP) [24], and the nested infinite relational model (NIRM) [14, 15].

In this paper we focus on the NIRM, which is constructed by partitioning the original matrix into groups of entries with similar behavior. This is done by generating partitions in one of the dimensions of the matrix (say, rows) that are nested within clusters of the other dimension (columns). This structure allows us to identify groups of (typically correlated) properties with similar patterns and then, within each such group, identify clusters of sites with similar values of β_{i,j} (Figure 1 provides a graphical representation of this idea). In our setting, we take θ_{i,j} = (β_{i,j}, σ²_{i,j}) and employ the NIRM to generate a prior for the θ_{i,j}.

Figure 1

Stylized representation of our model. Each sub-table at the second level of clustering shares a common value for the regression coefficient β_{i,j}. Rows correspond to properties, while columns correspond to sites.

More specifically, we denote by θ_j = (θ_{1,j}, …, θ_{I,j}) the vector of regression coefficients and associated variances corresponding to property (column) j. To obtain clusters of properties, we assume that θ_j ~ F, where

F = \sum_{k=1}^{\infty} \Pi_k \, \delta_{\theta^*_k}
(2)

is a random distribution such that Π_k = v_k ∏_{s<k}(1 − v_s), with v_k ~ Beta(1, ρ) and θ*_k ~ H_k. Indeed, the discrete nature of F ensures that ties among the θ_j occur with non-zero probability.

To obtain cluster-specific partitions of the sites (rows), H_k (the joint distribution associated with all sites for a given cluster of properties) has to be chosen carefully. In particular, we write θ*_k = (θ*_{1,k}, …, θ*_{I,k}) for any specific cluster of properties k and let

\theta^*_{i,k} \sim \sum_{l=1}^{\infty} w_{l,k} \, \delta_{\varphi_{l,k}},
(3)

with w_{l,k} = u_{l,k} ∏_{r<l}(1 − u_{r,k}), u_{l,k} ~ Beta(1, γ_k) for every k, and the φ_{l,k} drawn independently from the baseline measure G_{0,l,k}.
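The nested structure can be illustrated with the following minimal R sketch, which draws a matrix of regression coefficients with the two-level clustering described above; the truncation levels, hyperparameter values and the simple normal stand-in for the baseline measure are illustrative assumptions rather than the settings used in our analyses (the actual baseline measure is described below):

## Two-level (nested) clustering: columns (properties) are partitioned first,
## then rows (sites) are partitioned separately within each column cluster.
stick <- function(n, a) {               # truncated stick-breaking weights
  v <- rbeta(n, 1, a); v[n] <- 1
  v * cumprod(c(1, 1 - v[-n]))
}
I <- 94; J <- 32; K <- 25; L <- 25
rho <- 1; gamma_k <- rep(1, K)

Pi   <- stick(K, rho)
zeta <- sample(K, J, replace = TRUE, prob = Pi)          # column (property) cluster labels
xi   <- matrix(NA_integer_, I, K)
beta <- matrix(NA_real_, I, J)
for (k in unique(zeta)) {
  w_k     <- stick(L, gamma_k[k])
  xi[, k] <- sample(L, I, replace = TRUE, prob = w_k)    # row (site) cluster labels within cluster k
  phi_k   <- rnorm(L, mean = 1, sd = 0.5)                # simple stand-in for the baseline measure
  beta[, zeta == k] <- phi_k[xi[, k]]                    # cells in the same nested cluster share a value
}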

The baseline measure G_{0,l,k} is chosen to accommodate the fact that some of the y_{i,j} can be zero, since some nonsynonymous changes leave the value of the property being measured unchanged. Thus, G_{0,l,k} is a mixture of a point mass at zero and a continuous density. To allow for a more flexible model, we assume that different prior variances are associated with the y_{i,j} that are zero and those that are different from zero, with the specific form of G_{0,l,k} given below:

\varphi_{l,k} = (\phi_{l,k}, \vartheta^2_{l,k}) \mid G_{0,l,k} \sim G_{0,l,k}
(4)

with

G_{0,l,k} = \lambda \, \mathbf{1}\{\phi_{l,k} = 0\} \, p_1(\vartheta^2_{l,k}) + (1 - \lambda) \, p(\phi_{l,k} \mid \vartheta^2_{l,k}) \, p_2(\vartheta^2_{l,k}),

where p_1(ϑ²_{l,k}) ≡ Inv-Ga(a_κ, b_κ), p(ϕ_{l,k} | ϑ²_{l,k}) ≡ N(α_k, ϑ²_{l,k}/V_0) and p_2(ϑ²_{l,k}) ≡ Inv-Ga(a_σ, b_σ). Here ϕ_{l,k} and ϑ²_{l,k} respectively denote the unique values that β_{i,j} and σ²_{i,j} can take, whereas λ is the prior probability that ϕ_{l,k} equals zero (i.e., that the properties associated with this cluster are strongly conserved at this cluster of sites).
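A single draw from G_{0,l,k} can be sketched in R as follows; the inverse gamma is parameterized here as the reciprocal of a Gamma(shape, rate) variable, and all hyperparameter values are illustrative assumptions rather than the paper's settings:

## One draw (phi_{l,k}, vartheta2_{l,k}) from the spike-and-slab base measure.
r_base <- function(lambda, a_kappa, b_kappa, a_sigma, b_sigma, alpha_k, V0) {
  if (runif(1) < lambda) {
    vartheta2 <- 1 / rgamma(1, shape = a_kappa, rate = b_kappa)   # variance under the point mass
    phi <- 0                                                      # strongly conserved: coefficient exactly zero
  } else {
    vartheta2 <- 1 / rgamma(1, shape = a_sigma, rate = b_sigma)
    phi <- rnorm(1, mean = alpha_k, sd = sqrt(vartheta2 / V0))    # nonzero coefficient
  }
  c(phi = phi, vartheta2 = vartheta2)
}
draw <- r_base(lambda = 0.2, a_kappa = 2, b_kappa = 1,
               a_sigma = 2, b_sigma = 1, alpha_k = 1, V0 = 10)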

Note that our model implies that both sites and properties are exchangeable a priori. If no additional prior information is available, this type of assumption seems reasonable. However, a posteriori, it is possible to have sites behave differently in different clusters.

To complete the model, we place hyperpriors on all remaining parameters. Conjugate priors are chosen for ease of computation. α_k denotes the mean of the nonzero ϕ_{l,k} belonging to a specific cluster of properties k and is assumed to have a N(m_α, C_α) prior for all k. The DP concentration parameters ρ and γ_k are assumed to follow Ga(a_ρ, b_ρ), with mean a_ρ/b_ρ, and Ga(a_γ, b_γ), with mean a_γ/b_γ for all k, respectively. λ, the prior probability of the point mass at 0 in G_{0,l,k}, follows a Beta(a_λ, b_λ). The specific choice of hyperparameters is discussed later as part of each data analysis. In general, we use Ga(1,1) priors for the DP concentration parameters and a N(1, C_α) prior for α_k to correspond to our a priori assumption of neutrality for the properties.

Related work

We compare results from our proposed method with results from a few currently available methods that aim to characterize molecular evolution while also taking into account changes in amino acid properties, namely, the regression model in [10], TreeSAAP [25], and EvoRadical [9].

In [10], the first level of the model is the regression equation on y_{i,j} in equation (1) but, unlike our current model, it implicitly assumes independence among properties and among sites. The model in [10] is suitable when a few mostly independent amino acid properties are being analyzed, whereas the new semiparametric model is better suited to the analysis of a large number of possibly correlated properties.

TreeSAAP uses the methods of [6] to classify nonsynonymous substitutions into one of M categories, with higher numbered categories corresponding to sites showing radical changes and lower numbered categories corresponding to sites showing conserved changes for a given property. For the analysis considered here, we used 8 categories, where categories 6, 7, and 8 corresponded to sites showing radical changes and categories 1 and 2 to sites showing conserved changes. Nonsynonymous changes are inferred from the ancestral reconstruction obtained using the nucleotide substitution models in baseml, implemented in PAML. We used a Bonferroni correction to adjust for multiple comparisons.

EvoRadical implements the models of [9], which use partitions of amino acids to parameterize the rates of property-conserving and property-altering codon substitutions in a maximum likelihood framework. The model considers three types of substitutions (synonymous, property-conserving nonsynonymous and property-altering nonsynonymous), a slight improvement over [8]. For analyses with multiple properties, one has to create different partitions for the different properties and run EvoRadical separately for each property.

Posterior simulation

Various algorithms exist for posterior inference in DP mixtures. Some of the most popular ones use (i) the Pólya urn characterization to marginalize out the unknown distribution(s) [26, 27], (ii) a truncation approximation to the stick-breaking representation of the process, which paves the way for the use of methods employed in finite mixture models [28, 29], or (iii) reversible jump MCMC or split-merge methods [30, 31]. Other recent approaches have also used variational methods [32] and slice samplers [33].

We use an extension of the finite mixture approximation discussed in [28] for its ease of implementation. Truncating F at a sufficiently large K, we write F^{(K)} = ∑_{k=1}^{K} Π_k δ_{θ*_k}, with the weights Π_k and locations θ*_k generated as described earlier in this section. Next, we introduce configuration variables {ζ_j} such that, for k = 1, …, K, ζ_j = k if and only if θ_j = θ*_k. Similarly, for G_k we truncate at a sufficiently large level L and introduce another set of configuration variables {ξ_{i,k}}, where ξ_{i,k} = l, with l = 1, …, L, if and only if θ*_{i,k} = φ_{l,k}. Additional details about the algorithm are provided in the Appendix.

To determine the truncation levels K and L, we follow [29]. In particular, note that, conditional on ρ (the DP concentration parameter), the tail probability ∑_{k=K}^{∞} Π_k has expectation {ρ/(1 + ρ)}^{K−1}. Using prior guesses for ρ and an acceptable tolerance level for the tail probability, one can then solve for the truncation level K. In our analyses, we used K and L in the range of 25 to 35. These values are in line with those used in other applications (see, for example, [34]).
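For instance, under this rule the truncation level can be obtained with a one-line R calculation; the values ρ = 1 and tolerance 1e-6 are illustrative:

## Smallest K with expected tail probability {rho/(1+rho)}^(K-1) below eps.
trunc_level <- function(rho, eps) ceiling(1 + log(eps) / log(rho / (1 + rho)))
trunc_level(rho = 1, eps = 1e-6)   # returns 21, consistent with the 25-35 range used here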

Results and discussion

Empirical exploration via simulation studies

We present two simulation studies to check the performance of the model under different scenarios. Additional simulation scenarios that may be of interest are available in Additional file 1.

Simulation study 1

The setup for the first simulation is as follows. We generate values for the distinct regression coefficients ϕ_{l,k} from a N(1, 0.25) distribution. The number of distinct regression coefficients depends on the particular clustering structure for the corresponding simulation. Once we obtain the regression coefficients, we generate observations y_{i,j} from N(ϕ_{l,k} x_{i,j}, σ² = 0.001). The x_{i,j} are obtained from the lysin data set described below, analyzed for 32 properties, which implies J = 32 and I = 94.

We fitted the model in The model subsection to the y_{i,j} and x_{i,j}, with the following modifications: (i) the NIRM is imposed on β_{i,j} only, so that φ_{l,k} = ϕ_{l,k}, and (ii) ϕ_{l,k} ~ G_0, where G_0 ≡ N(α, τ²). We used K = 25 and L = 25 for the simulations. The MCMC algorithm was run with the following hyperpriors: ρ ~ Ga(1,1), γ_k ~ Ga(1,1) for all k, and α ~ N(1, 0.25). The priors σ² ~ Inv-Ga(100, 10) and τ² ~ Inv-Ga(2, 4) were chosen such that the prior means corresponded to the true values of these hyperparameters. Results are based on 15,000 iterations, with the first 5,000 discarded as burn-in. Convergence was assessed by running two chains, each initialized by randomly assigning the β_{i,j} to different partitions. Posterior summaries based on the two chains were consistent with each other.

In this scenario, we had four clusters for the columns, each with a differing number of groups, leading to twelve distinct cluster combinations for the entire matrix of β_{i,j} (Figure 2, left panel). Figure 3 shows the marginal probability that any two columns (properties) belong to the same cluster. The model correctly identifies that there are 4 clusters for the columns and assigns each set of columns to its corresponding cluster with no uncertainty.

Figure 2

Image plots of the true β_{i,j} values (left panel) and the posterior means β̂_{i,j} (right panel).

Figure 3

Marginal posterior probabilities of each pair of columns belonging to the same cluster.

Similar graphical summaries obtained for the structure of rows within each cluster of columns show that the correct clustering structures for the rows are inferred (see Figure 4). At this level, however, there is some uncertainty about the cluster membership of a few rows. See, for example, the right panel of Figure 4: some rows in cluster 1 (lower left) are sometimes assigned to cluster 3 (top right). The distinct values of ϕ used for these two clusters were 0.73 and 0.98; therefore, it does not seem unreasonable to see some uncertainty in the cluster assignments. The posterior means β̂_{i,j} agree closely with the true values, as shown in Figure 2.

Figure 4

Marginal posterior probabilities of each pair of rows belonging to the same cluster for two different clusters of columns.

This scenario corresponds to the type of situation we expect in most real datasets: properties will cluster into groups and, within each group of properties, clusters of sites with similar responses can be clearly identified. Our results suggest that, as expected, the model is capable of identifying these multiple clusters with high accuracy and, therefore, of accurately estimating the values of the regression coefficients. Other scenarios, including extreme cases where all properties belong to a common cluster while sites belong to one of several clusters, and cases where each property has a different effect on amino acid rates, are available in Additional file 1.

To investigate the effect of the truncation levels and the priors on our model, we performed a sensitivity analysis by varying the truncation levels as well as the different hyperparameters. Increasing the truncation level to 35 did not affect the results, and the estimated posterior means of the βs showed close agreement with the true values. The analysis was also fairly robust to the choice of priors, since varying the hyperparameters had almost no effect on the results. Decreasing the prior variance of τ² makes the results marginally better, i.e., the posterior means β̂_{i,j} are slightly closer to the true values.

Simulation study 2 - data simulated from a biological model

In our second simulation study, the model is evaluated in the context of biological sequences generated from an evolutionary model. In particular, a Markov model was used to generate 20 sequences of 90 codons each. For the first one-third of the sites (sites 1-30), we used transition probabilities obtained from the codon-substitution model of [21] with equal equilibrium probabilities for all 61 codons. For the second one-third of the sites (sites 31-60), we modified the transition probability matrix from the previous step by increasing the probabilities of transitions between codons that have small distances for volume and decreasing the probabilities of transitions between codons that have large distances for volume; this was done to encourage only those changes that conserve volume in this part of the sequences. Finally, for the last one-third of the sites (sites 61-90), we modified the original transition probability matrix to encourage radical changes in hydropathy. Thus, we increased some transition probabilities between codons that have very different hydropathy scores and decreased a few of those between codons that have similar hydropathy scores. Note that, since the equilibrium probabilities are either uniform or roughly uniform across all sites, the correlation structure across properties is retained in the expected distances, which simplifies the interpretation of the results.

Once we obtained the sequences, we generated ancestral sequences using PAML version 3.15 [20] and calculated observed and expected distances y_{i,j} and x_{i,j} for five properties, namely, hydropathy (h), volume (M_v), polarity (p), isoelectric point (p_{Hi}) and partial specific volume (V0). Of these, h and p are correlated, and so are M_v and V0.

Our model was fitted with K = 25 and L = 25 as truncation levels. The prior distributions were the same as those used in the previous simulation. Results are based on 15,000 iterations, of which the first 5,000 were discarded as burn-in. Convergence, assessed by visual inspection of trace plots for some of the parameters, showed no obvious problems.

The analysis found three clusters of properties: the first cluster contains properties h and p, the second comprises properties M_v and V0, and the third contains only property p_{Hi}, as shown in Figure 5. Figure 6 shows the posterior means of the β_{i,j} for representative properties of the three clusters in Figure 5. Sites 24, 65, 67, 71, 81, 82, and 89 have large posterior means β̂_{i,j} for cluster 1 (h and p). These are also the sites that show up in the small cluster at the top right of Figure 7. Specifically, Figure 7 shows how often any two sites in cluster 1 are grouped together. The sites in the lower left (16, 28, 46, 51) have small posterior means β̂_{i,j} for these properties (h and p) and are grouped together more often. The big group of sites in the middle mostly has posterior means β̂_{i,j} around 1, while sites 81, 89, 71, and 65 have the largest β̂_{i,j} values and very large probabilities of being clustered together in cluster 1. Thus, the model successfully identifies sites that have similar β_{i,j} values in a specific cluster and groups them together. Groups of sites that change a property can also be identified for clusters 2 and 3 in Figure 5. In particular, for cluster 2 (M_v and V0), there is a big group of sites which conserve these properties. Most of these sites are in the central one-third portion (i.e., the portion that includes sites 31-60), which was simulated under a transition probability matrix that favors transitions that conserve volume. Finally, for cluster 3 (p_{Hi}) there is one large group of sites which conserve the property and one group, comprising sites 39 and 80, which change the property greatly.

Figure 5

Marginal posterior probabilities of any two properties being in the same cluster for the data simulated under a biological model.

Figure 6

Posterior means of the β_{i,j} for the three clusters in Figure 5 for the data simulated under a biological model. The sites are sorted according to increasing posterior mean.

Figure 7

Marginal posterior probabilities of any two sites being grouped together in the first cluster in Figure 5 for the simulated data. The sites are sorted according to increasing posterior mean of β_{i,j}.

To better understand the performance of our method, we also analyzed the sequences generated above with the parametric regression model in [10], TreeSAAP [25], and EvoRadical [9]. Table 1 lists the thirty sites with the largest posterior means β̂_{i,j} for h, and the thirty sites with the smallest posterior means β̂_{i,j} for M_v, for the regression model of [10] and for our new semiparametric approach. Many of the same sites are identified by both methods; however, our new method performs slightly better than the regression model in [10]. In particular, the new method identifies two additional sites in the 61-90 region as sites that change h.

Table 1 Comparison of results between the model in [10] and the new semiparametric model, for the data simulated under a biological model

Table 2 lists the sites that TreeSAAP finds significant for the different properties. All of the sites that TreeSAAP finds significant are also identified by our method. However, once we correct for multiple comparisons in the TreeSAAP results, only one site (74) remains significant. We note that the hierarchical specification of the priors in our models automatically accounts for multiple comparisons, so no corrections are needed (see [10] for more discussion on this).

Table 2 Sites identified as significant by TreeSAAP for the different properties for the simulation study based on a biological model

Finally, we analyzed the sequences generated previously with EvoRadical using two different partitions [8], one for p and the other for M_v. We chose to run EvoRadical with p instead of h, since a partition of the amino acids for polarity was already available in [8]. Additionally, given that h and p are correlated, we expect to see somewhat similar results for these two properties.

Table 3 lists site-specific results from EvoRadical. The sites listed have high posterior probabilities (>0.95) of being in the different site classes, which was the criterion used to identify significant sites in [9]. The results presented here correspond to Model A1 in [9], which uses ω for the nonsynonymous to synonymous substitution rate ratio between codons encoding amino acids with properties in the same partition, and γ for the nonsynonymous to synonymous substitution rate ratio between codons encoding amino acids in different partitions. While the sites listed for p somewhat match results from the other methods, the results for M_v are not in agreement. This is probably due to the fact that partitions are not always directly comparable with the amino acid distances. For example, under the volume partition of [8], both glycine and valine are small and glutamine is large, while in terms of the volume scores glycine is very different from both valine and glutamine. Thus, our models would consider a change from glycine to valine as radical, whereas for the partition-based method of [9] there would be no change. The fact that the user has to define a property-specific partition in advance, as opposed to directly working with the physicochemical distances, is one of the disadvantages of partition-based methods.

Table 3 Sites that have high posterior probabilities (>0.95) of belonging to each site class for the different partitions for EvoRadical for the simulated data

Illustration with Lysin data

Our proposed model was applied to the sperm lysin data set, which consists of cDNA from 25 abalone species with 135 codons in each sequence [35]. Sites with alignment gaps were removed from all sequences, resulting in 122 codons for the analysis presented here. The phylogeny of [35] and the codon substitution model M8 in PAML version 3.15 [20] were used to generate the ancestral sequences. Model M8 uses a discretized beta distribution to model ω values between zero and one with probability p_0, and allows for an additional positive selection category with ω > 1 and probability p_1.

The lysin data were analyzed with the model in The model subsection using the 32 amino acid properties listed in Table 4. A few of the properties were chosen because of their functional importance; some of the others have previously been used in analyses by [25]. Only sites which showed at least one nonsynonymous change were retained for the final analysis, which led to a data set with 94 sites. We used K = 25 and L = 35 as truncation levels for these data. The prior distributions with the following hyperparameters were used in the analysis. The DP concentration parameters ρ and γ_k were assumed to follow a Ga(1,1). λ, the prior probability of ϕ_{l,k} being 0, was assumed to follow a Beta(2,8), which implies that about 20% of the unique β_{i,j} were expected to be 0 a priori. The hyperparameters a_κ and b_κ for the prior of ϑ²_{l,k} when ϕ_{l,k} is 0 were chosen as 2 and 100, implying a prior mean of 0.01. When ϕ_{l,k} is different from zero, a_σ = 2 and b_σ = 10 control the prior for ϑ²_{l,k}. V_0, the scale factor for ϑ²_{l,k}, was fixed at the ratio of the prior means of σ² and τ_i² (the variance terms in the regression model in [10], for which we had used prior means of 0.1 and 0.01, respectively). Finally, the α_k were assumed to follow a N(1, 0.25) to conform to our prior assumption of neutrality for the properties. Results are based on 20,000 iterations, of which the first 10,000 were discarded as burn-in. Convergence was assessed by visual inspection of trace plots for some of the parameters, and there did not seem to be any obvious problems.

Table 4 List of 32 amino acid properties used in the analysis

Figure 8 shows the marginal posterior probabilities of any two properties being assigned to the same cluster. There appear to be four mostly distinct clusters among the properties in our list. The biggest cluster consists of 20 properties related to polarity and hydropathy; all 20 are assigned to this cluster with very high probability. The next cluster comprises the properties B_l and c. There is also a fairly big cluster whose members are related to volume (M_v, V0, M_w, C_α, μ). p_zim, which is correlated with p to some extent, is clustered with p_{Hi}, with which it shows a large correlation (about 0.9). There is some uncertainty regarding the membership of K0 and E_sm, since both of them are assigned to the largest cluster about 50% of the time, while E_sm is clustered with the properties related to volume to a lesser extent. p_K1 is the only property that is almost never clustered with other properties.

Figure 8

Marginal posterior probabilities of any two properties being in the same cluster for the lysin data.

Site-specific results based on the posterior means β̂_{i,j}, for one representative property from each of the four clusters in Figure 8, are shown in Figure 9. The sites are sorted according to increasing posterior mean β̂_{i,j} in each image. Sites on the far right radically change the properties in each group. For example, most of the sites that appear on the far right for cluster 1 (represented by h), such as sites 15, 16, 21, 75, 82, 99 and 126, have β̂_{i,j} values of 1.2-1.4. There seem to be more sites radically changing properties in cluster 1 than in clusters 2 (represented by c) or 3 (represented by M_v). The first three clusters also have a fairly large number of sites with posterior mean β̂_{i,j} between 0 and 1. This is different from what we see for cluster 4 (represented by p_zim), which corresponds to properties p_zim and p_{Hi}. A large number of sites in cluster 4 strongly conserve the properties (e.g., sites 35, 43, 49, 51, 64, 114, 117, 121), as is evident from the very small posterior means β̂_{i,j} for the sites on the far left, unlike in the other clusters.

Figure 9

Posterior means β̂_{i,j} for the four clusters (denoted by representative properties) in Figure 8 for the lysin data. The sites are sorted according to increasing posterior mean.

Figure 10 shows posterior summaries of the nonzero β_{i,j} for sites 82, 99, 120 and 127, for properties belonging to different clusters. Of these, sites 120 and 127 were found to be under positive selection by PAML, while sites 82, 99 and 127 were identified as radically changing some of the properties by the regression model in [10]. The sites show different behavior for the different properties; for example, site 82 shows radical changes for h, while it conserves M_v. We can also see similarities in the posterior summaries across sites. For example, for property p_K1, sites 82, 120 and 127 have similar values of β_{i,j}. One of the advantages of the semiparametric approach is that we can identify groups of sites that either conserve or radically change a set of similar amino acid properties. For example, sites 122 and 127 both seem to be altering the amino acid properties in the first large cluster of properties related to p and h. However, sites 122 and 127 behave very differently in cluster 4, related to p_zim: site 122 strongly conserves the properties in this cluster, while site 127 radically changes them.

Figure 10

Posterior summaries of the nonzero β_{i,j} for sites 82, 99, 120 and 127 in the lysin data. The first four properties on the x-axis belong to four different clusters, while the next two are not consistently assigned to any specific cluster. The vertical lines are 90% posterior intervals of the β_{i,j} that are different from 0; the medians (filled circles) and the 25th and 75th percentiles (stars) are highlighted.

Table 5 lists sites that are highly conserved, with posterior means β̂_{i,j} less than 0.4, for the different clusters. The largest number of highly conserved sites appears in cluster 4, which includes properties p_zim and p_{Hi}, in agreement with Figure 9. Some of these sites, like 35, 51, 111 and 117, also conserve properties in clusters 2 and 3. A number of them, such as sites 28, 35, 58, 66, 94, 104, 117, and 128, are also identified as sites under negative selection by methods that take into account the relative nonsynonymous to synonymous rate ratio, such as those implemented in PAML [20]. In order to determine which sites are under positive and negative selection by PAML, we follow an approach similar to that used by [35] in the analysis of the lysin data. In particular, [35] found that PAML model M8, which supports positive selection, is the model that best fits the lysin data. Therefore, we classified sites as negatively selected if the estimated ω was smaller than 0.3 and Pr(ω > 1 | data) < 0.5 under PAML model M8. Results comparing sites conserving or radically changing a small group of properties with sites inferred to be under positive or negative selection by PAML were also presented in [10].

Table 5 Strongly conserved sites (β̂_{i,j} < 0.4) in the lysin data for the different clusters

The results are fairly robust to the choice of hyperparameter values. Note that the scale factor for ϑ²_{l,k} ultimately affects the variation in the β_{i,j} values, and it is advisable to choose it so that the prior variance of the unique β_{i,j} is not too large.

Conclusions

In this paper, we present a Bayesian hierarchical regression model with a nested infinite relational model on the regression coefficients. The model is capable of identifying sites which show radical or conserved amino acid changes. The (almost sure) discreteness of the DP realizations induces clustering at the level of properties, which is analogous to the factor model in [11], with the advantage that the nonparametric method automatically determines the appropriate number of clusters. The multi-level clustering ability of the NIRM also induces clustering at the level of sites and allows us to capture skewness and heterogeneity in the random effects distribution associated with each cluster of properties.

The main advantage of the models we have described is their ability to simultaneously handle multiple properties with potentially correlated effects on molecular evolution. Our simulations suggest that our models are flexible yet robust, being capable of dealing with a range of situations, including those where properties are perfectly correlated as well as those where all properties are uncorrelated. Our semiparametric regression models also work well, particularly in comparison with the regression model in [10], TreeSAAP and EvoRadical, when applied to DNA sequence data generated from an evolutionary model. In addition, the analysis of the lysin data suggests that the model leads to reasonable results.

The NIRM that is the basis of our model defines a separately exchangeable prior on matrices. This means that the prior is invariant to the order in which properties and sites are included. This is due to the fact that the rows as well as the columns of the parameter of interest are independent draws from a DP. From the point of view of modeling multiple properties, this is a highly desirable property. However, assuming that DNA sites are exchangeable can be questionable. Although this is a potential limitation of our model, we should note that the assumption of independence across sites (which is a stronger assumption than exchangeability) underlies all the methods discussed in the Background section. If information about the 3-dimensional structure of the encoded protein or other sequence specific information that can guide the construction of the dependence model is available, our model could be easily extended to account for this feature. In the absence of such information, exchangeability across DNA sites seems to be a reasonable prior assumption. Indeed, in contrast to the most common independence assumption, our exchangeability assumption allows us to explain correlations at the level of sites.

In our applications, we have used codon substitution models for reconstructing ancestral sequences as we wished to compare our methods to other methods for detecting selective sites that also use codon substitution models, such as those implemented in PAML and EvoRadical. However, it is possible to perform the proposed Bayesian semiparametric analyses using amino acid substitution models instead of codon substitution models. Note that the substitution model is only used in the calculation of the observed distances. First, we infer the ancestral sequences under a specific substitution model and a given phylogeny. We then compute the observed distances for a given property and a given site as the mean absolute difference in property scores due to all nonsynonymous substitutions at that site, where the nonsynonymous substitutions are counted by comparing the DNA sequences between two neighboring nodes in the phylogeny. The reconstructed ancestral sequences, and therefore the observed distances in our model, may differ under different substitution models, but the method can be implemented under any substitution model, including amino acid substitution models. The gain in execution time from using amino acid substitution models instead of codon-based ones could potentially be significant if the uncertainty in the alignment/phylogeny/ancestral level is taken into account.

Finally, it is important to note that the “observed” distances are not really directly observed, but are instead constructed from ancestral sequences and, therefore, subject to error. A simple way to account for this additional level of uncertainty is to modify the computation of the expected distances by incorporating the ideas of [37]. This approach was previously employed in [10], with little impact on the final results.

Appendix: details about the Gibbs sampler

The truncations and the introduction of the configuration variables imply that (2) and (3) can be written as

\zeta_j \mid \{\Pi_k\} \sim \sum_{k=1}^{K} \Pi_k \, \delta_{\theta^*_k}, \qquad
\xi_{i,k} \mid \{w_{l,k}\} \sim \sum_{l=1}^{L} w_{l,k} \, \delta_{\varphi_{l,k}},
(5)

with φ_{l,k} ~ G_{0,l,k}, and Π_k and w_{l,k} the appropriate stick-breaking weights. Writing the model as in (5) helps in obtaining the forms of the full conditionals given below.

The column indicators ζ_j, for j = 1, …, J, are sampled from a multinomial distribution with probabilities

P(\zeta_j = k \mid \cdots) = q_{jk} \propto \Pi_k \prod_{l=1}^{L} \prod_{\{i : \xi_{i,k} = l\}} N\!\left(y_{i,j} \mid \phi_{l,k}\, x_{i,j}, \, \vartheta^{2}_{l,k}\right),

where the variance ϑ²_{l,k} in the normal density is replaced by ϑ²_{l,k}/n_i^O whenever ϕ_{l,k} is different from zero. The weights Π_k are sampled in two parts: first, v_k is generated from a Beta(1 + m_k, ρ + ∑_{s=k+1}^{K} m_s) for k = 1, …, K−1, with v_K = 1, where m_k is the number of columns assigned to cluster k; then Π_k = v_k ∏_{s=1}^{k−1}(1 − v_s).
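As an illustration, this weight update can be written in a few lines of R; the function name update_Pi and the toy indicator vector are our own illustrative assumptions, not part of the paper's code:

## Blocked Gibbs update of the weights Pi_k given the column indicators zeta.
update_Pi <- function(zeta, K, rho) {
  m <- tabulate(zeta, nbins = K)                        # cluster sizes m_k
  v <- rbeta(K, 1 + m, rho + rev(cumsum(rev(m))) - m)   # Beta(1 + m_k, rho + sum_{s > k} m_s)
  v[K] <- 1
  v * cumprod(c(1, 1 - v[-K]))                          # Pi_k = v_k * prod_{s < k} (1 - v_s)
}
Pi_new <- update_Pi(zeta = c(1, 1, 3, 2, 1), K = 25, rho = 1)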

For i = 1, …, I and k = 1, …, K, the indicators ξ_{i,k} are also sampled from a multinomial, with probabilities of the form

P(\xi_{i,k} = l \mid \cdots) = p_{i,kl} \propto w_{l,k} \prod_{\{j : \zeta_j = k\}} N\!\left(y_{i,j} \mid \phi_{l,k}\, x_{i,j}, \, \vartheta^{2}_{l,k}\right).

The weights w_{l,k} are updated in a manner similar to the Π_k: the u_{l,k} are generated from a Beta(1 + n_{l,k}, γ_k + ∑_{r=l+1}^{L} n_{r,k}) for l = 1, …, L−1, with u_{L,k} = 1, where n_{l,k} is the number of β_{i,j} assigned to atom l of cluster k; then w_{l,k} = u_{l,k} ∏_{r=1}^{l−1}(1 − u_{r,k}).

Following [18], the DP concentration parameters ρ and γ_k are sampled in two steps by introducing auxiliary variables η_1 and η_2. First, sample η_1 from

p(\eta_1 \mid \rho, \cdots) = \mathrm{Beta}(\rho + 1, J)

and then ρ from

p(\rho \mid \eta_1, \cdots) = \frac{a_\rho + n_\zeta - 1}{a_\rho + n_\zeta - 1 + J\,(b_\rho - \log \eta_1)} \, \mathrm{Ga}\!\left(a_\rho + n_\zeta, \; b_\rho - \log \eta_1\right)
+ \frac{J\,(b_\rho - \log \eta_1)}{a_\rho + n_\zeta - 1 + J\,(b_\rho - \log \eta_1)} \, \mathrm{Ga}\!\left(a_\rho + n_\zeta - 1, \; b_\rho - \log \eta_1\right),

where n_ζ is the number of unique column indicators ζ_j. Similarly, for each k = 1, …, K,

p(\eta_2 \mid \gamma_k, \cdots) = \mathrm{Beta}(\gamma_k + 1, I),

p(\gamma_k \mid \eta_2, \cdots) = \frac{a_\gamma + m_{\xi,k} - 1}{a_\gamma + m_{\xi,k} - 1 + I\,(b_\gamma - \log \eta_2)} \, \mathrm{Ga}\!\left(a_\gamma + m_{\xi,k}, \; b_\gamma - \log \eta_2\right)
+ \frac{I\,(b_\gamma - \log \eta_2)}{a_\gamma + m_{\xi,k} - 1 + I\,(b_\gamma - \log \eta_2)} \, \mathrm{Ga}\!\left(a_\gamma + m_{\xi,k} - 1, \; b_\gamma - \log \eta_2\right),

where m_{ξ,k} is the number of unique row indicators ξ_{i,k} for a specific cluster of columns k.

To sample the unique φ_{l,k} = (ϕ_{l,k}, ϑ²_{l,k}) given in (4), we introduce a set of indicator variables ψ_{l,k} which take the value 1 when ϕ_{l,k} is different from zero. For l = 1, …, L and k = 1, …, K, ψ_{l,k}, ϑ²_{l,k} and ϕ_{l,k} are jointly sampled as follows: ψ_{l,k} is sampled by integrating ϕ_{l,k} and ϑ²_{l,k} out of its full conditional, ϑ²_{l,k} is sampled conditional on ψ_{l,k}, and ϕ_{l,k} is sampled conditional on both the corresponding ψ_{l,k} and ϑ²_{l,k}, i.e.,

p(\psi_{l,k}, \vartheta^2_{l,k}, \phi_{l,k} \mid \cdots) = p(\psi_{l,k} \mid \cdots)\; p(\vartheta^2_{l,k} \mid \psi_{l,k}, \cdots)\; p(\phi_{l,k} \mid \psi_{l,k}, \vartheta^2_{l,k}, \cdots),

with the individual expressions obtained as follows.

First, let \Omega_{l,k} = \{(i,j) : \xi_{i,\zeta_j} = l, \; \zeta_j = k\}. Then,

\Pr(\psi_{l,k} = 0 \mid \cdots) \propto \lambda \int \prod_{(i,j) \in \Omega_{l,k}} N\!\left(y_{i,j} \mid 0, \vartheta^2_{l,k}\right) \mathrm{IG}\!\left(\vartheta^2_{l,k} \mid a_\kappa, b_\kappa\right) d\vartheta^2_{l,k},

\Pr(\psi_{l,k} = 1 \mid \cdots) \propto (1 - \lambda) \int\!\!\int \prod_{(i,j) \in \Omega_{l,k}} N\!\left(y_{i,j} \mid \phi_{l,k}\, x_{i,j}, \vartheta^2_{l,k}/n_i^O\right) N\!\left(\phi_{l,k} \mid \alpha_k, \vartheta^2_{l,k}/V_0\right) \mathrm{IG}\!\left(\vartheta^2_{l,k} \mid a_\sigma, b_\sigma\right) d\phi_{l,k}\, d\vartheta^2_{l,k}.

Next,

p(\vartheta^2_{l,k} \mid \psi_{l,k}, \cdots) =
\begin{cases}
\mathrm{IG}\!\left( \dfrac{I_J}{2} + a_\kappa, \; \left(\dfrac{1}{b_\kappa} + \sigma_{1,\text{scale}}\right)^{-1} \right) & \text{if } \psi_{l,k} = 0, \\[2.5ex]
\mathrm{IG}\!\left( \dfrac{I_J}{2} + a_\sigma, \; \left(\dfrac{1}{b_\sigma} + \sigma_{2,\text{scale}}\right)^{-1} \right) & \text{if } \psi_{l,k} = 1,
\end{cases}

where I_J = \sum_{i,j} \mathbf{1}\{\xi_{i,\zeta_j} = l, \; \zeta_j = k\} is the number of elements of \Omega_{l,k}, and the update terms are

\sigma_{1,\text{scale}} = \frac{\sum_{(i,j) \in \Omega_{l,k}} y_{i,j}^2}{2}, \qquad
\sigma_{2,\text{scale}} = \frac{\alpha_k^2 V_0}{2} + \frac{\sum_{(i,j) \in \Omega_{l,k}} n_i^O y_{i,j}^2}{2} - \frac{\left(\alpha_k V_0 + \sum_{(i,j) \in \Omega_{l,k}} n_i^O y_{i,j} x_{i,j}\right)^2}{2 \left(V_0 + \sum_{(i,j) \in \Omega_{l,k}} n_i^O x_{i,j}^2\right)}.

Finally,

p(\phi_{l,k} \mid \psi_{l,k}, \vartheta^2_{l,k}, \cdots) =
\begin{cases}
\delta_0 & \text{if } \psi_{l,k} = 0, \\
N(m_\phi, C_\phi) & \text{if } \psi_{l,k} = 1,
\end{cases}

where m_\phi = \dfrac{\alpha_k V_0 + \sum_{(i,j) \in \Omega_{l,k}} n_i^O y_{i,j} x_{i,j}}{V_0 + \sum_{(i,j) \in \Omega_{l,k}} n_i^O x_{i,j}^2} and C_\phi = \dfrac{\vartheta^2_{l,k}}{V_0 + \sum_{(i,j) \in \Omega_{l,k}} n_i^O x_{i,j}^2}.

The full conditional of λ is given by

p(\lambda \mid \cdots) = \mathrm{Beta}\!\left(a_\lambda + \sum_{l,k} \mathbf{1}\{\psi_{l,k} = 0\}, \; b_\lambda + \sum_{l,k} \mathbf{1}\{\psi_{l,k} = 1\}\right).

Finally, for k = 1, …, K, the full conditional of α_k is given by

p(\alpha_k \mid \cdots) = N(m^*_\alpha, C^*_\alpha),

where

C^*_\alpha = \left( \frac{1}{C_\alpha} + \sum_{\{l : \psi_{l,k} = 1\}} \frac{V_0}{\vartheta^2_{l,k}} \right)^{-1}
\qquad \text{and} \qquad
m^*_\alpha = C^*_\alpha \left( \frac{m_\alpha}{C_\alpha} + \sum_{\{l : \psi_{l,k} = 1\}} \frac{V_0 \, \phi_{l,k}}{\vartheta^2_{l,k}} \right).

Software availability

The R code implementing the models in the paper is freely available at http://www.ams.ucsc.edu/~raquel/software/.

References

1. Pakula AA, Sauer RT: Genetic analysis of protein stability and function. Annu Rev Genet 1989, 23: 289–310. 10.1146/annurev.ge.23.120189.001445

2. Zuckerkandl E, Pauling L: Evolutionary divergence and convergence in proteins. In Evolving Genes and Proteins. New York: Academic Press; 1965:97–166.

3. Sneath PHA: Relations between chemical structure and biology. J Theor Biol 1966, 12: 157–195. 10.1016/0022-5193(66)90112-3

4. Miyata T, Miyazawa S, Yasunaga T: Two types of amino acid substitutions in protein evolution. J Mol Evol 1979, 12(3): 219–236. 10.1007/BF01732340

5. Xia X, Li WH: What amino acid properties affect protein evolution? J Mol Evol 1998, 47: 557–564. 10.1007/PL00006412

6. McClellan DA, McCracken KG: Estimating the influence of selection on the variable amino acid sites of the cytochrome b protein functional domains. Mol Biol Evol 2001, 18: 917–925. 10.1093/oxfordjournals.molbev.a003892

7. McClellan D, Palfreyman E, Smith M, Moss J, Christensen R, Sailsbery J: Physicochemical evolution and molecular adaptation of the cetacean and artiodactyl cytochrome b proteins. Mol Biol Evol 2005, 22: 437–455.

8. Sainudiin R, Wong WSW, Yogeeswaran K, Nasrallah JB, Yang Z, Nielsen R: Detecting site-specific physicochemical selective pressures: applications to the class I HLA of the human major histocompatibility complex and the SRK of the plant sporophytic self-incompatibility system. J Mol Evol 2005, 60: 315–326. 10.1007/s00239-004-0153-1

9. Wong WSW, Sainudiin R, Nielsen R: Identification of physicochemical selective pressure on protein encoding nucleotide sequences. BMC Bioinf 2006, 7: 148–157. 10.1186/1471-2105-7-148

10. Datta S, Prado R, Rodriguez A, Escalante AA: Characterizing molecular evolution: a hierarchical approach to assess selective influence of amino acid properties. Bioinformatics 2010, 26: 2818–2825. 10.1093/bioinformatics/btq532

11. Datta S, Prado R, Rodriguez A: Bayesian factor models in characterizing molecular adaptation. Tech. rep., University of California, Santa Cruz; 2012.

12. Ferguson T: A Bayesian analysis of some nonparametric problems. Ann Stat 1973, 1: 209–230. 10.1214/aos/1176342360

13. Sethuraman J: A constructive definition of Dirichlet priors. Statistica Sinica 1994, 4: 639–650.

14. Shafto P, Kemp C, Mansinghka V, Gordon M, Tenenbaum JB: Learning cross-cutting systems of categories. In Proceedings of the 28th Annual Conference of the Cognitive Science Society. Erlbaum; 2006:2146–2151.

15. Rodriguez A, Ghosh K: Nested partition models. Tech. rep., University of California, Santa Cruz; 2009.

16. Lo AY: On a class of Bayesian nonparametric estimates: I. Density estimates. Ann Stat 1984, 12: 351–357. 10.1214/aos/1176346412

17. Escobar MD: Estimating normal means with a Dirichlet process prior. J Am Stat Assoc 1994, 89: 268–277. 10.1080/01621459.1994.10476468

18. Escobar MD, West M: Bayesian density estimation and inference using mixtures. J Am Stat Assoc 1995, 90: 577–588. 10.1080/01621459.1995.10476550

19. Blackwell D, MacQueen JB: Ferguson distributions via Pólya urn schemes. Ann Stat 1973, 1: 353–355. 10.1214/aos/1176342372

20. Yang Z: Phylogenetic analysis using parsimony and likelihood methods. J Mol Evol 1997, 42: 294–307.

21. Nielsen R, Yang Z: Likelihood models for detecting positively selected amino acid sites and applications to the HIV-1 envelope gene. Genetics 1998, 148: 929–936.

22. Kemp C, Tenenbaum JB, Griffiths TL, Yamada T, Ueda N: Learning systems of concepts with an infinite relational model. In Proceedings of the 21st National Conference on Artificial Intelligence - Volume 1. AAAI Press; 2006:381–388.

23. Xu Z, Tresp V, Yu K, Kriegel HP: Infinite hidden relational models. In Proceedings of the 22nd Annual Conference on Uncertainty in Artificial Intelligence. AUAI Press; 2006:544–551.

24. Dunson DB, Xue Y, Carin L: The matrix stick-breaking process: flexible Bayes meta-analysis. J Am Stat Assoc 2008, 103: 317–327. 10.1198/016214507000001364

25. Woolley S, Johnson J, Smith MJ, Crandall KA, McClellan DA: TreeSAAP: Selection on Amino Acid Properties using phylogenetic trees. Bioinformatics 2003, 19: 671–672. 10.1093/bioinformatics/btg043

26. MacEachern SN: Estimating normal means with a conjugate style Dirichlet process prior. Commun Stat, Part B - Simul Comput 1994, 23: 727–741. 10.1080/03610919408813196

27. MacEachern SN, Muller P: Estimating mixture of Dirichlet process models. J Comput Graphical Stat 1998, 7: 223–238.

28. Ishwaran H, James LF: Gibbs sampling methods for stick-breaking priors. J Am Stat Assoc 2001, 96: 161–173. 10.1198/016214501750332758

29. Ishwaran H, Zarepour M: Dirichlet process sieves in finite normal mixtures. Statistica Sinica 2002, 12: 941–963.

30. Green PJ, Richardson S: Modelling heterogeneity with and without the Dirichlet process. Scand J Stat 2001, 28: 355–375. 10.1111/1467-9469.00242

31. Jain S, Neal RM: A split-merge Markov chain Monte Carlo procedure for the Dirichlet process mixture model. J Comput Graphical Stat 2004, 13: 158–182. 10.1198/1061860043001

32. Blei DM, Jordan MI: Variational inference for Dirichlet process mixtures. Bayesian Anal 2006, 1: 121–144. 10.1214/06-BA104

33. Walker SG: Sampling the Dirichlet mixture model with slices. Commun Stat - Simul Comput 2007, 36: 45. 10.1080/03610910601096262

34. Rodriguez A, Dunson DB, Gelfand AE: The nested Dirichlet process. J Am Stat Assoc 2008, 103: 534–546. 10.1198/016214507000000554

35. Yang Z, Swanson W, Vacquier V: Maximum-likelihood analysis of molecular adaptation in abalone sperm lysin reveals variable selective pressures among lineages and sites. Mol Biol Evol 2000, 17: 1446–1455. 10.1093/oxfordjournals.molbev.a026245

36. Gromiha MM, Oobatake M, Sarai A: Important amino acid properties for enhanced thermostability from mesophilic to thermophilic proteins. Biophys Chem 1999, 82: 51–67. 10.1016/S0301-4622(99)00103-9

37. Minin VN, Suchard MA: Counting labeled transitions in continuous-time Markov models of evolution. J Math Biol 2008, 56: 391–412.


Acknowledgements

RP and SD were supported by the NIH/NIGMS grant R01GM072003-02. AR was supported by the NIH/NIGMS grant R01GM090201-01.

Author information

Corresponding author

Correspondence to Saheli Datta.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

SD, AR and RP formulated the model. SD performed the analyses and drafted the manuscript. AR and RP revised the manuscript draft. All authors read and approved the final version of the manuscript.

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

About this article

Cite this article

Datta, S., Rodriguez, A. & Prado, R. Bayesian semiparametric regression models to characterize molecular evolution. BMC Bioinformatics 13, 278 (2012). https://doi.org/10.1186/1471-2105-13-278
