Abstract
Background
To understand cellular systems and biological networks, it is important to analyze the functions and interactions of proteins and domains. Many methods for predicting protein-protein interactions have been developed. It is known that the mutual information between residues at interacting sites can be higher than that at non-interacting sites. This is based on the idea that amino acid residues at interacting sites have coevolved with the corresponding residues in the partner proteins. Several studies have shown that such mutual information is useful for identifying contact residues in interacting proteins.
Results
We propose novel methods using conditional random fields for predicting protein-protein interactions. We focus on the mutual information between residues and combine it with conditional random fields. In the methods, protein-protein interactions are modeled using domain-domain interactions. We perform computational experiments using protein-protein interaction datasets for several organisms and calculate AUC (Area Under ROC Curve) scores. The results suggest that our proposed methods, with and without mutual information, outperform the EM (Expectation Maximization) method proposed by Deng et al., which is one of the best predictors based on domain-domain interactions.
Conclusions
We propose novel methods using conditional random fields with and without mutual information between domains. Our methods based on domain-domain interactions are useful for predicting protein-protein interactions.
Background
Understanding protein functions and protein-protein interactions is one of the important topics in the fields of molecular biology and bioinformatics. Recently, many researchers have focused on the investigation of amino acid residues of proteins to reveal interactions and contacts between residues [1-4]. If residues at sites important for interactions between proteins are substituted in one protein, the corresponding residues in interacting partner proteins are expected to be substituted as well due to selection pressure. Otherwise, such mutated proteins may lose the interactions. Fraser et al. confirmed that interacting proteins evolve at similar evolutionary rates by comparing putatively orthologous protein sequences between S. cerevisiae and C. elegans [5]. This means that substitutions of contact residues occur in both interacting proteins as long as the proteins keep interacting with each other. Therefore, mutual information (MI) between residues is useful for predicting protein-protein interactions for proteins of unknown function. MI is calculated from multiple sequence alignments of homologous protein sequences. Weigt et al. identified direct residue contacts between sensor kinase and response regulator proteins by message passing, which is an improvement of MI [4]. Burger and van Nimwegen used a dependence tree in which a node corresponds to a position in the amino acid sequences, and predicted interactions using a Bayesian network method [2]. Markov random field and conditional random field models have been well studied in the field of natural language processing [6,7]. In bioinformatics as well, protein function prediction methods based on protein-protein interaction networks and other biological networks have been developed using Markov random fields [8,9]. On the other hand, several prediction methods have been developed based on domain-domain interactions. Deng et al. proposed a domain-based probabilistic model of protein-protein interactions and developed the EM (Expectation Maximization) method [10]. Based on this probabilistic model, LP (Linear Programming)-based methods were developed [11], and Chen et al. improved the accuracy of interaction strength prediction with the APM (Association Probabilistic Method) [12]. In this paper, we propose prediction methods based on domain-domain interactions using conditional random fields with and without mutual information. Furthermore, we perform computational experiments on several protein-protein interaction datasets, compare our methods with the EM method proposed by Deng et al. [10], which is one of the best predictors based on domain-domain interactions, and the association method proposed by Sprinzak and Margalit [13] (the APM method applied to binary interaction data is equivalent to the association method), and show that our methods outperform both the EM method and the association method.
Mutual information between domains
In order to investigate the relationship between two positions in proteins, the MI of the distributions of amino acids at those positions is used. Such distributions can be obtained from multiple alignments of protein sequences and domain sequences. In this section, we briefly review MI for distributions of amino acids and explain MI between domains.
We assume that multiple sequence alignments are available for domains D_{m} and D_{n} (see Figure 1). In order to calculate MI, we need joint appearance frequencies. However, we cannot tell which sequence in the multiple alignment of domain D_{m} corresponds to a specified sequence in that of D_{n}. Therefore, we assume that sequences from the same organism can be paired. In the example of Figure 1, the second sequence of D_{m} is paired with the first one of D_{n}, the third one of D_{m} is paired with the second one of D_{n}, and so on. The first sequence of D_{m} is not counted in the appearance frequencies because it is not paired with any sequence of D_{n}, although it may be paired with sequences of domains other than D_{n}.
Figure 1. Illustration of the calculation of mutual information from multiple alignments of domains. Domains D_{m} and D_{n} each have multiple alignments of sequences from several organisms. Mutual information is calculated for each pair of positions i and j.
Let A be the set of amino acids, f_{i}(A) be the appearance frequency of amino acid A at position i, and f_{ij}(A, B) be the joint appearance frequency of a pair of amino acids A at position i in D_{m} and B at position j in D_{n}, where each frequency is divided by the number M of paired sequences in the multiple alignments so that ∑_{A∈A}f_{i}(A) = ∑_{A,B∈A}f_{ij}(A,B) = 1.
Multiple alignments often include gaps. Weigt et al. counted the frequencies of gaps as well as amino acids [4]. Therefore, we also treat the gap as an additional kind of amino acid, so that the number of distinct amino acids is |A| = 21. Then, the mutual information for positions i in D_{m} and j in D_{n} is defined as the Kullback-Leibler divergence between the product of the appearance frequencies, f_{i}(A)f_{j}(B), and the joint appearance frequencies, f_{ij}(A,B), as follows.

MI_{ij} = ∑_{A,B∈A} f_{ij}(A,B) log ( f_{ij}(A,B) / (f_{i}(A)f_{j}(B)) )
If the frequency distributions of amino acids at positions i and j are independent of each other, f_{ij}(A,B) ≈ f_{i}(A)f_{j}(B), and MI_{ij} approaches zero. This means that the two positions are not related to each other in the evolutionary process. If domains D_{m} and D_{n} interact at the positions, MI_{ij} is expected to be high because the positions have coevolved through the evolutionary process in order to keep the interaction. It should be noted that two positions i and j do not always directly interact even if MI_{ij} is high [4]. However, proteins with high values of MI may directly interact with each other at other positions.
However, we need to correct MI_{ij} because it can be unnecessarily high depending on the distributions f_{i}(A) and f_{j}(B). For that purpose, we make use of the shuffled mutual information, that is, MI_{ij} computed from the joint frequency f_{ij}(A, B) obtained by randomly shuffling the pairing of sequences in the multiple alignments. In this paper, we repeat this procedure 400 times, following [4], and take the average. For practical use of MI, f_{i}(A), f_{j}(B), and f_{ij}(A,B) should be positive; otherwise, MI_{ij} cannot be computed numerically. Therefore, we use the following pseudocount as in [4],
where η is a constant; in this paper, we use η = 1. It should be noted that the pseudocount preserves the normalization over all amino acids, that is, ∑_{A∈A}f_{i}(A) = ∑_{A,B∈A}f_{ij}(A,B) = 1 still holds.
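As an illustration, the column-wise MI computation with pseudocounts described above can be sketched as follows. This is a minimal Python sketch; the function and variable names are our own, and the exact pseudocount form used in [4] may differ in detail.

```python
import math
from collections import Counter

ALPHABET = "ACDEFGHIKLMNPQRSTVWY-"  # 20 amino acids plus the gap, |A| = 21

def column_mi(col_i, col_j, eta=1.0):
    """Mutual information between two alignment columns of paired sequences.

    col_i, col_j: equal-length strings, one character per paired sequence.
    The pseudocount eta keeps every frequency strictly positive while
    preserving the normalizations sum_A f_i(A) = sum_{A,B} f_ij(A,B) = 1.
    """
    assert len(col_i) == len(col_j)
    m = len(col_i)                       # number of paired sequences M
    q = len(ALPHABET)
    ci, cj = Counter(col_i), Counter(col_j)
    cij = Counter(zip(col_i, col_j))
    # single-site and joint frequencies with pseudocounts
    fi = {a: (ci[a] + eta / q) / (m + eta) for a in ALPHABET}
    fj = {b: (cj[b] + eta / q) / (m + eta) for b in ALPHABET}
    fij = {(a, b): (cij[(a, b)] + eta / q ** 2) / (m + eta)
           for a in ALPHABET for b in ALPHABET}
    # Kullback-Leibler divergence between f_ij and f_i * f_j
    return sum(fij[(a, b)] * math.log(fij[(a, b)] / (fi[a] * fj[b]))
               for a in ALPHABET for b in ALPHABET)
```

Because the marginals of the pseudocounted joint frequencies coincide with the single-site frequencies, the value is a true mutual information and therefore nonnegative; perfectly correlated columns score higher than a constant column paired with a varying one.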
In order to investigate interactions between proteins, we need MI between domains included in the proteins. Thus, we define MI between domains D_{m} and D_{n}, M_{mn}, to be the maximum of MI over all positions i and j as follows.
where 〈v〉 denotes the average of v, and i and j are positions in D_{m} and D_{n}, respectively. Since MI_{ij} tends to be high for positions i and j that include many gaps, we exclude positions that include more than 20% gaps, as in [14].
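Given a matrix of position-pair MIs already corrected by the shuffled-alignment average, the domain-level MI M_{mn} can be sketched as below. This is an illustrative sketch with hypothetical names; gap fractions are assumed to be precomputed per column.

```python
def domain_mi(mi, gap_frac_i, gap_frac_j, max_gap=0.20):
    """M_mn: maximum corrected MI over position pairs (i, j),
    skipping columns whose gap fraction exceeds max_gap (20% as in [14]).

    mi[i][j]      : corrected MI for position i of D_m and j of D_n
    gap_frac_i[i] : fraction of gaps in column i of D_m
    gap_frac_j[j] : fraction of gaps in column j of D_n
    """
    best = float("-inf")
    for i, gi in enumerate(gap_frac_i):
        if gi > max_gap:
            continue                      # column too gappy, exclude
        for j, gj in enumerate(gap_frac_j):
            if gj > max_gap:
                continue
            best = max(best, mi[i][j])
    return best
```

For example, a column with 50% gaps is ignored even if its MI values are the largest in the matrix.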
Conditional random field model for PPI
In this section, we propose a probabilistic model for protein-protein and domain-domain interactions using conditional random fields [6,7], because two domains D_{m} and D_{n} do not always interact even if the mutual information M_{mn} is large. For example, Weigt et al. improved MI and proposed direct information (DI) because residues do not always contact each other even if the MI is large [4]. As is well known, most proteins contain domains. If two proteins do not interact with each other, no two domains contained in the proteins can interact with each other. In the left example of Figure 2, protein P_{i} consists of domains D_{1} and D_{2}, and protein P_{j} consists of domain D_{3}. If P_{i} and P_{j} do not interact, neither of the pairs (D_{1}, D_{3}) and (D_{2}, D_{3}) interacts. Deng et al. proposed a probabilistic model for a pair of proteins as follows [10]. By assuming that proteins P_{i} and P_{j} interact if and only if at least one pair of domains included in the proteins interacts, and that the domain interaction events are independent of each other, they defined
Figure 2. Markov random field model for proteinprotein interactions Left: Example of proteins P_{i} and P_{j}. P_{i} consists of domains D_{1} and D_{2}, and P_{j} consists of domain D_{3}, respectively. Right: Factor graph G(U,V,E). There exists an edge between P_{ij} ∈ U and D_{mn} ∈ V if and only if D_{mn} ∈ P_{ij}.
where P_{ij} = 1 means that proteins P_{i} and P_{j} interact, D_{mn} = 1 means that domains D_{m} and D_{n} interact, D_{mn} ∈ P_{ij} means that domain D_{m} is included in protein P_{i} and D_{n} is included in P_{j}, and the product on the right-hand side is taken over all domain pairs (D_{m}, D_{n}) included in the protein pair (P_{i}, P_{j}). By transforming equation (5), we have
where λ^{(mn)} = log(1 – Pr(D_{mn} = 1)).
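The Deng et al. model of equation (5) — proteins interact iff at least one included domain pair interacts, with independent domain events — can be sketched directly. The function name and input convention here are our own.

```python
def prob_ppi(lambdas):
    """Pr(P_ij = 1) = 1 - prod over domain pairs of (1 - Pr(D_mn = 1)).

    lambdas: the probabilities Pr(D_mn = 1) for every domain pair
    (D_m, D_n) with D_m in protein P_i and D_n in protein P_j.
    """
    p_no_interaction = 1.0
    for lam in lambdas:
        p_no_interaction *= (1.0 - lam)   # all domain pairs must fail
    return 1.0 - p_no_interaction
```

For instance, two domain pairs each interacting with probability 0.5 give Pr(P_{ij} = 1) = 1 − 0.25 = 0.75, and a single certain domain interaction forces the protein interaction.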
From this equation, we can consider the following Markov random field model for protein pair (P_{i}, P_{j}) (see Figure 2).
where p_{ij} ∈ {0, 1}, d denotes a set of events on domain-domain interactions, D_{mn} = d_{mn} (d_{mn} ∈ {0, 1}), each local feature has a corresponding weight parameter related to the joint probability Pr(P_{ij} = s, D_{mn} = t), and Z_{ij} denotes the normalization constant. For instance, equation (8) for p_{ij} = 0 is equivalent to equation (7) in the case that, for all protein pairs (P_{i}, P_{j}), the local feature takes the value 1 if s = t = 0 and 0 otherwise.
In Markov random fields, random variables have Markov properties represented by an undirected graph [15]. The factor graph for our model is a bipartite graph G(U, V, E) with a set of vertices U corresponding to protein-protein interactions P_{ij}, a set of vertices V corresponding to domain-domain interactions D_{mn}, and a set of edges E between U and V, as shown in the right panel of Figure 2. There exists an edge between P_{ij} ∈ U and D_{mn} ∈ V if and only if D_{mn} ∈ P_{ij}. In the left example of Figure 2, protein pair (P_{i}, P_{j}) includes domain pairs (D_{1}, D_{3}) and (D_{2}, D_{3}). Then, in the factor graph, the vertex of P_{ij} is connected with the vertices of D_{13} and D_{23}, respectively. Although the vertex of P_{ij} has no adjacent vertices other than those of D_{13} and D_{23}, the vertices of D_{13} and D_{23} can be connected with vertices other than that of P_{ij}.
Since Pr(P_{ij} = 0 | D_{mn} = t) = 1 – Pr(P_{ij} = 1 | D_{mn} = t), it is redundant to consider both s = 0 and s = 1, and it is sufficient to consider only s = 1. Therefore, in order to simplify the model, we restrict the local features and weight parameters to s = 1 for all protein pairs (P_{i}, P_{j}). Then, we have the following joint probability,
where p denotes a set of events on protein-protein interactions, P_{ij} = p_{ij}.
We here introduce mutual information between domains M = {M_{mn}} as given conditional data in order to combine it with the probabilistic model. Then, equation (9) can be written as
where
σ(x) = 1/(1 + e^{–x}) is an increasing function, and c is a positive constant. It should be noted that a negative value, –1, is given to the local feature because it is undesirable for a pair of domains to interact when the proteins containing the pair do not interact. In this way, the local feature correlates protein-protein interactions P_{ij} with domain-domain interactions D_{mn} (see Figure 2).
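One plausible reading of the model above can be sketched as follows, purely for illustration: the local feature rewards an interacting domain pair by σ(M_{mn} − c) when the proteins interact and contributes −1 when they do not, and Pr(P_{ij} = 1 | M) is obtained by summing over all domain-interaction patterns d. The exact functional form in the paper may differ; all names here are hypothetical, and the brute-force sum over 2^k patterns is only feasible for small numbers of domain pairs.

```python
import math
from itertools import product

def sigma(x):
    return 1.0 / (1.0 + math.exp(-x))

def feature(p_ij, d_mn, m_mn, c=0.8):
    """Hypothetical local feature: sigma(M_mn - c) if both the proteins
    and the domains interact, -1 if the domains interact but the
    proteins do not, and 0 if the domains do not interact."""
    if d_mn == 0:
        return 0.0
    return sigma(m_mn - c) if p_ij == 1 else -1.0

def crf_prob(weights, m_vals, c=0.8):
    """Pr(P_ij = 1 | M) by marginalizing over all domain patterns d.

    weights: one weight per domain pair; m_vals: M_mn per domain pair.
    """
    k = len(weights)
    score = {0: 0.0, 1: 0.0}              # unnormalized mass for p_ij = 0, 1
    for p in (0, 1):
        for d in product((0, 1), repeat=k):
            e = sum(w * feature(p, dm, m, c)
                    for w, dm, m in zip(weights, d, m_vals))
            score[p] += math.exp(e)
    return score[1] / (score[0] + score[1])
```

With all weights zero the model is uninformative (probability 0.5); a positive weight on a domain pair with MI well above c pushes the probability above 0.5.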
For a conditional random field model without MI, we use the following local feature instead.
Parameter estimation
In this section, we discuss how to estimate the weight parameters. We assume that protein-protein interaction data p = {p_{ij}} are given. Then, the likelihood function is represented by
where Z(M) = ∏_{pij∈p}Z_{ij}(M). By taking the logarithm, we have
We estimate the parameters by maximizing the log-likelihood function, l(λ). Since log(e^{x} + e^{y}) is a convex function of the variables x and y, l(λ) is a concave function, so we are able to obtain a global maximum. For maximizing such functions, various methods have been developed, such as the steepest descent method, Newton's method, and the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method [16]. Newton's method requires the inverse of the Hessian matrix of the objective function, whose computational cost is high. Therefore, quasi-Newton methods approximate this matrix efficiently using only the first derivatives, i.e., the gradient. In this paper, we use the BFGS method, which is one of the quasi-Newton methods. By differentiating equation (15) partially with respect to each parameter, we have
In the BFGS method, this equation is repeatedly applied for updating a solution.
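To illustrate the gradient-based maximization, the sketch below fits a simplified one-parameter logistic-style likelihood by plain gradient descent on the negative log-likelihood. It is a stand-in for the (L-)BFGS update actually used with libLBFGS, not the paper's full CRF objective; all names are our own.

```python
import math

def sigma(x):
    return 1.0 / (1.0 + math.exp(-x))

def neg_log_likelihood_and_grad(lam, xs, ys):
    """Toy concave log-likelihood: Pr(P_ij = 1) = sigma(lam * x_ij).

    Returns (-l(lam), -dl/dlam), the (value, gradient) pair a
    quasi-Newton routine such as libLBFGS expects at each iteration.
    """
    nll, grad = 0.0, 0.0
    for x, y in zip(xs, ys):
        q = sigma(lam * x)
        nll -= y * math.log(q) + (1 - y) * math.log(1 - q)
        grad -= (y - q) * x
    return nll, grad

def fit(xs, ys, lr=0.1, iters=500):
    """Plain gradient descent as a stand-in for the BFGS update rule."""
    lam = 0.0
    for _ in range(iters):
        _, g = neg_log_likelihood_and_grad(lam, xs, ys)
        lam -= lr * g
    return lam
```

Because the objective is concave in λ, any gradient-following method converges to the global maximum, which is why the BFGS update can be applied repeatedly without risk of spurious local optima.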
Computational experiments
Data and implementation
We used protein-protein interaction data of H. sapiens, D. melanogaster, and C. elegans from the DIP database [17] (file 'dip20091230.txt'). We used the UniProt Knowledgebase (version 15.4) [18] as protein-domain inclusion data. We deleted proteins that did not have any domain, and obtained 294 interacting protein pairs as positive data, including 300 distinct proteins and 320 domains, for H. sapiens; 449 interacting pairs, including 562 proteins and 449 domains, for D. melanogaster; and 250 interacting pairs, including 602 proteins and 476 domains, for C. elegans.
We used the Pfam database (version 24.0) [19] to obtain multiple sequence alignments for domains, and calculated the MI, M_{mn}, for each pair of domains. Figure 3 shows the distributions of the domain MI M_{mn} for H. sapiens, D. melanogaster, and C. elegans. We can see from the figure that most domain MIs lie below about 0.8 for all organisms. It is considered that domains D_{m} and D_{n} with M_{mn} less than 0.8 may not interact, whereas domains with M_{mn} greater than 0.8 are more likely to interact with each other. Therefore, we set the constant c in equation (12) to 0.8. Although we tried several values from 0.6 to 1.0 for c, the results were similar to the case of c = 0.8.
Figure 3. Distributions of domain MIs for H. sapiens, D. melanogaster, and C. elegans
We selected non-interacting protein pairs as negative data uniformly at random such that the negative data did not overlap with the positive data. The number of negative pairs was the same as that of positive pairs for each organism.
We used libLBFGS (version 1.9), a C implementation of the limited-memory BFGS method [20], with default parameters to estimate the weight parameters; it is available at http://www.chokkan.org/software/liblbfgs/.
Results
In order to evaluate our methods, we compared the proposed CRF method with MI and that without MI against the EM method by Deng et al. [10] and the association method proposed by Sprinzak and Margalit [13]. The association method estimates the probability λ_{mn} that domains D_{m} and D_{n} interact as λ_{mn} = I_{mn}/N_{mn}, where N_{mn} (I_{mn}) denotes the number of (interacting) protein pairs that include domain pair (D_{m}, D_{n}); the APM method [12] replaces the numerator with the sum of the interaction strengths ρ_{ij} of the protein pairs (P_{i}, P_{j}) that include (D_{m}, D_{n}), where 0 ≤ ρ_{ij} ≤ 1. However, our input interaction data are binary, that is, ρ_{ij} takes only 0 or 1. Then, the numerator of the APM method becomes I_{mn}, which means that the APM method for binary interaction data is equivalent to the association method. In the EM method, the probabilities λ_{mn} that domains D_{m} and D_{n} interact are estimated by a recursive formula, where o_{ij} = 1 denotes that proteins P_{i} and P_{j} were observed to interact with each other, and the false-negative rate is fn = 0.8. In this paper, the solution of the association method was given as the initial value of the EM method.
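The association estimate λ_{mn} = I_{mn}/N_{mn} is simple enough to sketch directly; the function name and input convention below are our own.

```python
from collections import defaultdict

def association_estimates(pairs):
    """Association method: lambda_mn = I_mn / N_mn.

    pairs: list of (domain_pairs, interacts) per protein pair, where
    domain_pairs lists the (D_m, D_n) tuples included in that protein
    pair and interacts is 0 or 1.
    """
    n = defaultdict(int)  # N_mn: protein pairs containing (D_m, D_n)
    i = defaultdict(int)  # I_mn: interacting protein pairs containing it
    for dom_pairs, interacts in pairs:
        for dp in dom_pairs:
            n[dp] += 1
            if interacts:
                i[dp] += 1
    return {dp: i[dp] / n[dp] for dp in n}
```

For binary interaction data this coincides with the APM estimate, as noted above.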
We performed five-fold cross-validation: we split the data into 5 datasets (4 for training and 1 for testing), estimated the parameters from the training datasets, and calculated Pr(P_{ij} = 1 | M) of equation (10) for each protein pair in the test dataset together with the AUC (Area Under ROC Curve) score, where only the test protein pairs that included at least one parameter estimated from the corresponding training dataset were used. We repeated this 5 times and took the average. Tables 1, 2, and 3 show the AUC results for training and test datasets by the CRF method with MI, that without MI, the EM method, and the association method for H. sapiens, D. melanogaster, and C. elegans, respectively. The AUC score is the area under an ROC (Receiver Operating Characteristic) curve and takes a value between 0 and 1. The ROC curve of a random classifier lies on the diagonal line, with an AUC score of 0.5. The ROC curve of a perfect classifier passes through the point (0 (false positive rate), 1 (true positive rate)), with an AUC score of 1. A classifier with an AUC score closer to 1 has better performance. We can see from these tables that the results by the CRF method with MI are better than those by the CRF method without MI, and that the results by the CRF method without MI are better than those by the EM method and the association method. It is also seen that the results by the EM method are almost the same as those by the association method. This might be because the parameters of the EM method were estimated starting from the solution of the association method, which may already have been at a local optimum. Figures 4, 5, and 6 show the average ROC curves for training and test datasets by the CRF method with MI, that without MI, the EM method, and the association method. For training datasets, the results by all of the methods were almost perfect. For test datasets, the CRF method with MI outperformed that without MI, the EM method, and the association method.
It should be noted that the ROC curves of the EM method are almost the same as those of the association method for the same reason discussed above.
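The AUC score used above has a convenient rank-based formulation: it equals the probability that a randomly chosen positive pair is scored above a randomly chosen negative pair, with ties counted as one half. A minimal sketch (names our own):

```python
def auc_score(scores, labels):
    """AUC = P(score of a random positive > score of a random negative),
    ties counted as 1/2; equivalent to the area under the ROC curve."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A perfect ranking gives 1.0, a fully inverted ranking gives 0.0, and uninformative constant scores give 0.5, matching the random-classifier baseline described above.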
Table 1. The AUC results for training and test datasets of H. sapiens by the CRF method with MI, that without MI, the EM method, and the association method
Table 2. The AUC results for training and test datasets of D. melanogaster by the CRF method with MI, that without MI, the EM method, and the association method
Table 3. The AUC results for training and test datasets of C. elegans by the CRF method with MI, that without MI, the EM method, and the association method
Figure 4. Average ROC curves for test datasets of H. sapiens by the CRF method with MI, that without MI, the EM method, and the association method
Figure 5. Average ROC curves for test datasets of D. melanogaster by the CRF method with MI, that without MI, the EM method, and the association method
Figure 6. Average ROC curves for test datasets of C. elegans by the CRF method with MI, that without MI, the EM method, and the association method
Conclusions
We proposed novel methods that combine conditional random fields with the domain-based model of protein-protein interactions. To improve performance, we introduced mutual information into the probabilistic model. In the improved model, mutual information between domains is given as conditional data, where the MI between domains is defined as the maximum of the MIs between residues in the domains. This method was developed based on the fact that amino acid residues at sites important for interactions have coevolved with each other, and MI has been used for identifying contact residues in interactions. We performed five-fold cross-validation experiments and calculated the AUC for the probabilities that two proteins interact. The results suggested that our proposed methods, especially the CRF method with mutual information, are useful. However, the AUC results for the training datasets implied that the estimated parameters overfit the training datasets. To avoid this problem, we can improve the methods, for instance, by adding regularization terms, such as the l_{1}-norm of the parameters, to the log-likelihood function. Since CRFs have the advantage of being able to incorporate a large number of features, it remains future work to improve the model itself to obtain better accuracy, for instance, by modifying the local feature and adding new features.
Authors' contributions
JS proposed the use of mutual information for predicting protein-protein interactions. The methods were developed and implemented by MH. MK and TA participated in the discussion during development of the methods. The manuscript was prepared by MH, JS, and TA.
Competing interests
The authors declare that they have no competing interests.
Acknowledgements
This work was partially supported by Grants-in-Aid #22240009 and #21700323 from MEXT, Japan. JS would like to thank the National Health and Medical Research Council of Australia (NHMRC) and the Chinese Academy of Sciences (CAS) for financially supporting this research via the NHMRC Peter Doherty Fellowship and the Hundred Talents Program of CAS.
This article has been published as part of BMC Systems Biology Volume 5 Supplement 1, 2011: Selected articles from the 4th International Conference on Computational Systems Biology (ISB 2010). The full contents of the supplement are available online at http://www.biomedcentral.com/1752-0509/5?issue=S1.
References
1. White RA, Szurmant H, Hoch JA, Hwa T: Features of protein-protein interactions in two-component signaling deduced from genomic libraries. Methods Enzymol 2007, 422:75-101.
2. Burger L, van Nimwegen E: Accurate prediction of protein-protein interactions from sequence alignments using a Bayesian method. Molecular Systems Biology 2008, 4:165.
3. Halabi N, Rivoire O, Leibler S, Ranganathan R: Protein sectors: evolutionary units of three-dimensional structure. Cell 2009, 138:774-786.
4. Weigt M, White RA, Szurmant H, Hoch JA, Hwa T: Identification of direct residue contacts in protein-protein interaction by message passing. Proc. Natl. Acad. Sci. USA 2009, 106:67-72.
5. Fraser HB, Hirsh AE, Steinmetz LM, Scharfe C, Feldman MW: Evolutionary rate in the protein interaction network. Science 2002, 296:750-752.
6. Sha F, Pereira F: Shallow parsing with conditional random fields.
7. Sutton C, McCallum A: An introduction to conditional random fields for relational learning. In Introduction to Statistical Relational Learning. MIT Press; 2006:93-128.
8. Deng M, Zhang K, Mehta S, Chen T, Sun F: Prediction of protein function using protein-protein interaction data. Journal of Computational Biology 2003, 10(6):947-960.
9. Deng M, Chen T, Sun F: An integrated probabilistic model for functional prediction of proteins. Journal of Computational Biology 2004, 11:463-475.
10. Deng M, Mehta S, Sun F, Chen T: Inferring domain-domain interactions from protein-protein interactions. Genome Research 2002, 12:1540-1548.
11. Hayashida M, Ueda N, Akutsu T: Inferring strengths of protein-protein interactions from experimental data using linear programming. Bioinformatics 2003, 19(suppl 2):ii58-ii65.
12. Chen L, Wu LY, Wang Y, Zhang XS: Inferring protein interactions from experimental data by association probabilistic method. Proteins 2006, 62(4):833-837.
13. Sprinzak E, Margalit H: Correlated sequence-signatures as markers of protein-protein interaction. Journal of Molecular Biology 2001, 311:681-692.
14. Little DY, Chen L: Identification of coevolving residues and coevolution potentials emphasizing structure, bond formation and catalytic coordination in protein evolution. PLoS One 2009, 4:e4762.
15. Moussouris J: Gibbs and Markov random systems with constraints. Journal of Statistical Physics 1974, 10:11-33.
16. Bertsekas DP: Nonlinear Programming. Athena Scientific; 1999.
17. Salwinski L, Miller CS, Smith AJ, Pettit FK, Bowie JU, Eisenberg D: The Database of Interacting Proteins: 2004 update. Nucleic Acids Research 2004, 32:D449-D451.
18. The UniProt Consortium: The Universal Protein Resource (UniProt) in 2010. Nucleic Acids Research 2010, 38:D142-D148.
19. Finn RD, Mistry J, Tate J, Coggill P, Heger A, Pollington JE, Gavin OL, Gunasekaran P, Ceric G, Forslund K, Holm L, Sonnhammer ELL, Eddy SR, Bateman A: The Pfam protein families database. Nucleic Acids Research 2010, 38:D211-D222.
20. Nocedal J: Updating quasi-Newton matrices with limited storage. Mathematics of Computation 1980, 35(151):773-782.