
A detailed error analysis of 13 kernel methods for protein-protein interaction extraction

Abstract

Background

Kernel-based classification is the current state-of-the-art for extracting pairs of interacting proteins (PPIs) from free text. Various proposals have been put forward, which differ especially in the specific kernel function, the type of input representation, and the feature sets. These proposals are regularly compared to each other regarding their overall performance on different gold standard corpora, but little is known about their respective performance at the instance level.

Results

We report on a detailed analysis of the shared characteristics and the differences between 13 current methods using five PPI corpora. We identified a large number of rather difficult (misclassified by most methods) and easy (correctly classified by most methods) PPIs. We show that kernels using the same input representation perform similarly on these pairs and that building ensembles using dissimilar kernels leads to significant performance gains. However, our analysis also reveals that difficult pairs share few characteristics, which lowers the hope that new methods, if built along the same lines as current ones, will deliver breakthroughs in extraction performance.

Conclusions

Our experiments show that current methods do not seem to do very well in capturing the shared characteristics of positive PPI pairs, which must also be attributed to the heterogeneity of the (still very few) available corpora. Our analysis suggests that performance improvements should be sought in novel feature sets rather than in novel kernel functions.

Background

Automatically extracting protein-protein interactions (PPIs) from free text is one of the major challenges in biomedical text mining [1-6]. Several methods, usually co-occurrence-based, pattern-based, or machine-learning-based [7], have been developed and compared using a slowly growing body of gold standard corpora [8]. However, progress has always been slow (if measured in terms of precision/recall values achieved on the different corpora) and seems to have slowed down further over the last years; furthermore, current results still do not match the performance that has been achieved in other areas of relationship extraction [9].

In this paper, we want to elucidate the reasons for this slow progress by performing a detailed, cross-method study of characteristics shared by PPI instances which many methods fail to classify correctly. We concentrate on a fairly recent class of PPI extraction algorithms, namely kernel methods [10, 11]. The reason for this choice is that these methods were the top performers in recent competitions [12, 13]. In a nutshell, they work as follows. First, they require a training corpus consisting of labeled sentences, some of which contain PPIs and/or non-interacting proteins, while others contain only one or no protein mentions. All sentences in the training corpus are transformed into structured representations that aim to best capture how the interaction is expressed (or, for negative examples, not expressed). The representations of protein pairs together with their gold standard PPI labels are analyzed by a kernel-based learner (mostly an SVM), which builds a predictive model. When analyzing a new sentence for PPIs, its candidate protein pairs are turned into the same representation and then classified by the kernel method. For the sake of brevity, we often use the term kernel to refer to the combination of an SVM learner and a kernel method.

Central to the learning and classification phases is a so-called kernel function. Simply speaking, a kernel function takes the representations of two instances (here, protein pairs) and computes their similarity. Kernel functions differ in (1) the underlying sentence representation (bag-of-words, token sequence with shallow linguistic features, syntax parse tree, dependency graph); (2) the substructures retrieved from the sentence representation to define interactions; and (3) the calculation of the similarity function.
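To make the notion concrete, here is a deliberately simple example of a kernel function in the above sense — a bag-of-words dot product between two instance representations. This toy kernel is ours for illustration and is not one of the 13 methods studied:

```python
from collections import Counter

def bow_kernel(tokens_a, tokens_b):
    """Similarity of two instances as the dot product of their
    bag-of-words vectors (counts of shared tokens multiplied)."""
    counts_a, counts_b = Counter(tokens_a), Counter(tokens_b)
    shared = counts_a.keys() & counts_b.keys()
    return sum(counts_a[t] * counts_b[t] for t in shared)

# Two entity-blinded sentences containing a candidate pair.
s1 = "PROT1 binds PROT2 in vitro".split()
s2 = "PROT1 directly binds PROT2".split()
print(bow_kernel(s1, s2))  # 3 shared tokens (PROT1, binds, PROT2) -> 3
```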

In our recent study [14], we analyzed nine kernel-based methods in a comprehensive benchmark and concluded that dependency graph and shallow linguistic feature representations are superior to syntax tree ones. Although we identified three kernels that outperformed the others (APG, SL, kBSPS; see details below), the study also revealed that none of them emerges as the single best approach, owing to the sensitivity of the methods to various factors such as parameter settings, evaluation scenario, and corpora. This leads to highly heterogeneous evaluation results, indicating that the methods are strongly prone to over-fitting the training corpus.

The focus of this paper is a cross-kernel error analysis at the instance level, with the goal of exploring possible ways to improve kernel-based PPI extraction. To this end, we determine difficulty classes of protein pairs and investigate the similarity of kernels in terms of their predictions. We show that kernels using the same input representation perform similarly on these pairs and that building ensembles using dissimilar kernels leads to significant performance gains. Additionally, we identify kernels that perform better on certain difficulty classes, paving the way to more complex ensembles. We also show that a generic feature set combined with linear classifiers achieves performance on par with most kernels. However, our main conclusion is pessimistic: our results indicate that significant progress in the field of PPI extraction can probably only be achieved if future methods leave the beaten track.

Methods

We recently performed a comprehensive benchmark of nine kernel-based approaches (hereinafter briefly referred to as kernels) [14]. In the meantime, we obtained another four kernels: three of them were originally proposed by Kim et al. [15] and one is a modification thereof described in [16]; we refer to them collectively as Kim’s kernels. In this work, we investigate similarities and differences between these 13 kernels.

Kernels

The shallow linguistic (SL) kernel [17] does not use deep parsing information. It is based solely on bag-of-words features (words occurring fore-between, between, and between-after relative to the investigated protein pair), surface features (capitalization, punctuation, numerals), and shallow linguistic features (POS-tag, lemma) generated from tokens to the left and right of the two proteins (more generally: entities) of the pair.

Subtree (ST; [18]), subset tree (SST; [19]), partial tree (PT; [20]) and spectrum tree (SpT; [21]) kernels exploit the syntax tree representation of sentences. They differ in the definition of the extracted substructures. The ST, SST, and PT kernels extract subtrees of the syntax parse tree that contain the analyzed protein pair. SpT uses vertex-walks, that is, sequences of edge-connected syntax tree nodes, as the unit of representation. When comparing two protein pairs, the number of identical substructures is calculated as the similarity score.
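The count-shared-substructures principle can be sketched with vertex-walks as the substructure, as in SpT. This is our simplified reading for illustration: matched walk counts are multiplied as in spectrum kernels, while the actual tree kernels additionally use decay factors and efficient dynamic programming:

```python
from collections import Counter

def vertex_walks(edges, labels, node, k):
    """All node-label sequences of length k starting at `node`,
    following parent->child edges (`edges` maps a node to its children)."""
    if k == 1:
        return [(labels[node],)]
    walks = []
    for child in edges.get(node, []):
        for w in vertex_walks(edges, labels, child, k - 1):
            walks.append((labels[node],) + w)
    return walks

def spectrum_similarity(t1, t2, k=2):
    """Similarity as the number of identical vertex-walks shared by two
    trees, with multiplicities multiplied as in a spectrum kernel."""
    c1 = Counter(w for n in t1["labels"] for w in vertex_walks(t1["edges"], t1["labels"], n, k))
    c2 = Counter(w for n in t2["labels"] for w in vertex_walks(t2["edges"], t2["labels"], n, k))
    return sum(c1[w] * c2[w] for w in c1.keys() & c2.keys())

# Two toy syntax trees: node 0 is the root with two children.
t1 = {"labels": {0: "S", 1: "NP", 2: "VP"}, "edges": {0: [1, 2]}}
t2 = {"labels": {0: "S", 1: "NP", 2: "NP"}, "edges": {0: [1, 2]}}
print(spectrum_similarity(t1, t2))  # walk (S,NP): 1x in t1, 2x in t2 -> 2
```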

The next group of kernels uses the dependency parse representation of sentences. The edit distance and cosine similarity kernels (edit, cosine; [22]), as well as the k-band shortest path spectrum (kBSPS; [14]) kernel, primarily use the shortest path between the entities; the latter optionally extends the representation by the k-band of this path. The most sophisticated kernel, all-paths graph (APG; [23]), builds on both the dependency graph and the token sequence representation of the entire sentence, and weighs connections within and outside the shortest path differently.
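For illustration, the core structure these kernels build on — the shortest dependency path between two entities — can be extracted as follows (a toy dependency graph using the networkx library; the Stanford-style edge labels are our example, not taken from the corpora):

```python
import networkx as nx

# Toy dependency parse of "PROT1 binds PROT2 in vitro" as (head, dependent, label).
edges = [("binds", "PROT1", "nsubj"),
         ("binds", "PROT2", "dobj"),
         ("binds", "vitro", "prep_in")]

g = nx.Graph()
for head, dep, label in edges:
    g.add_edge(head, dep, label=label)

# Shortest path between the two candidate entities.
path = nx.shortest_path(g, "PROT1", "PROT2")
print(path)  # ['PROT1', 'binds', 'PROT2']
```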

Kim’s kernels [15] also use the shortest path of the dependency parses. The four kernels differ in the information they use from the parses. The lexical kernel uses only the lexical information encoded in the dependency tree, that is, nodes are the lemmas of the sentence, connected by edges labeled with dependency relations. The shallow kernel retains only the POS-tag information in the nodes. Both kernels calculate the similarity score as the number of identical subgraphs of two shortest paths under the respective node labeling. The combined kernel is the sum of the former two variants. The syntactic kernel, defined in [16], uses only the structural information of the dependency tree, that is, only the edge labels are considered when calculating the similarity score.

Since Fayruzov’s implementation of Kim’s kernels does not automatically determine the threshold separating the positive and negative classes, it has to be specified for each model separately. Therefore, in addition to the parameter search described in [14] and re-used here, we also performed a coarse-grid threshold search in [0,1] with step 0.05. Assuming that the test corpus has similar characteristics to the training one—the usual assumption in the absence of further knowledge—we selected the threshold between positive and negative classes such that their ratio best approximated the ratio measured on the training set. Note that APG [23] applies a similar threshold search strategy but optimizes the threshold against the F-score on the training set.
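Our reading of this threshold-selection strategy as a minimal sketch; the scores are hypothetical and `select_threshold` is an illustrative helper, not the benchmark code:

```python
import numpy as np

def select_threshold(test_scores, train_pos_ratio, step=0.05):
    """Pick the threshold in [0, 1] whose induced positive ratio on the
    test scores best approximates the ratio observed on the training set."""
    best_t, best_diff = 0.0, float("inf")
    for t in np.arange(0.0, 1.0 + 1e-9, step):
        pos_ratio = float(np.mean(test_scores >= t))
        diff = abs(pos_ratio - train_pos_ratio)
        if diff < best_diff:
            best_t, best_diff = t, diff
    return best_t

scores = np.array([0.1, 0.4, 0.55, 0.8, 0.9])
# Training set had 25% positives -> threshold ~0.85 yields 1/5 = 0.2 positives.
print(select_threshold(scores, train_pos_ratio=0.25))
```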

Classifiers and parameters

Typically, kernel functions are integrated into SVM implementations. Several freely available and extensible implementations of SVMs exist, among which SVMlight [24] and LibSVM [25] are probably the most renowned. Both can be adapted by supplying a user-defined kernel function. In SVMlight, a kernel function can be defined as a real-valued function over a pair of instances in the corresponding representation. LibSVM, on the other hand, requires the user to pre-compute kernel values, i.e., to pass the SVM learner a matrix containing the pairwise similarities of all instances. Accordingly, most of the kernels we experimented with use the SVMlight implementation, except for the SL and Kim’s kernels, which use LibSVM, and APG, which internally uses a sparse regularized least squares (RLS) SVM.
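The precomputed-kernel workflow that LibSVM requires can be illustrated with scikit-learn's SVC, which offers an analogous interface (this is a stand-in for illustration, not one of the implementations used in the paper; the data is random):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))        # stand-in instance representations
y = rng.integers(0, 2, size=20)     # stand-in PPI labels

# Kernel matrix K[i, j] = k(x_i, x_j); here a plain linear kernel.
gram = X @ X.T
clf = SVC(kernel="precomputed").fit(gram, y)

# New instances are classified via their similarities to the training instances.
X_new = rng.normal(size=(3, 5))
gram_new = X_new @ X.T
print(clf.predict(gram_new))
```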

Corpora

We use the five freely available and widely used PPI-annotated resources also described in [8], i.e., AIMed [26], BioInfer [27], HPRD50 [28], IEPA [29], and LLL [30].

Evaluation method

We report the standard evaluation measures (precision (P), recall (R), F1-score (F)). As we showed in our previous study [14], the AUC measure (area under the receiver operating characteristic curve), which is often used in the recent literature to characterize classifiers and is independent of the distribution of positive and negative classes, depends very much on the learning algorithm of the classifier and only partially on the kernel. Therefore, in this study we stick to the above three measures, which give a better picture of the expected classification performance on new texts. Results are reported in two different evaluation settings: primarily, we use the document-level cross-validation scheme (CV), which still seems to be the de facto standard in PPI extraction. We also use the cross-learning (CL) evaluation strategy to identify pairs that behave similarly across evaluation methods.

In the CV setting, we train and test each kernel on the same corpus using document-level 10-fold cross-validation. We employ the document-level splits used by Airola and many others (e.g., [23, 31, 32]) to allow for direct comparison of results. The ultimate goal of PPI extraction is the identification of PPIs in biomedical texts with unknown characteristics. This task is better reflected in the CL setting, where training and test sets are drawn from different distributions: we train on an ensemble of four corpora and test on the fifth. The CL methodology is generally less biased than CV, where the training and test data sets have very similar corpus characteristics. Note that the differences in the distribution of positive/negative pairs in the five benchmark corpora (ranging from 20 to 100%) account for a substantial part of the diversity in the performance of approaches [8]. Differences between the corpora are not limited to class distribution; they also deviate in their annotation guidelines and in the definition of what constitutes a PPI. These differences largely persist in the standardized format [8], which was obtained by applying a transformation approach yielding the greatest common factor of the annotations.
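To make the CL protocol concrete, a minimal sketch of the leave-one-corpus-out scheme described above (corpus names as in the paper; the training/testing calls themselves are omitted):

```python
corpora = ["AIMed", "BioInfer", "HPRD50", "IEPA", "LLL"]

def cross_learning_splits(corpora):
    """Yield (train, test) splits: train on four corpora, test on the fifth."""
    for test in corpora:
        train = [c for c in corpora if c != test]
        yield train, test

for train, test in cross_learning_splits(corpora):
    print(f"train on {train}, test on {test}")
```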

Experimental setup

For the experimental setup we follow the procedure described in [14]. In a nutshell, we applied entity blinding, resolved entity-token mismatch problems and extended the learning format of the sentences with the missing parses. We applied a coarse-grained grid parameter search and selected the best average setting in terms of the averaged F-score measured across the five evaluation corpora as the default setting for each kernel.

Results and discussion

The main goal of our analysis was to better characterize kernel methods and understand their shortcomings in terms of PPI extraction. We started by characterizing protein pairs: we divided them into three classes based on their difficulty, where difficulty is defined by the observed classification success level of the kernels. We also manually scrutinized some of the pairs found to be the most difficult, suspecting that the reason for the kernels' failure is in fact incorrect annotation. We re-labeled a set of such suspicious annotations and re-evaluated whether kernels were able to benefit from these modifications. We further compare kernels based on their predictions by defining kernel similarity as prediction agreement at the instance level, and investigate how the kernels' input representations correlate with their similarity. Finally, to quantify the claimed advantage of kernels for PPI extraction, we compare them to simpler methods: linear, non-kernel-based classifiers using a surface feature set also found in the kernel methods.

Difficulty of individual protein pairs

In this experiment we determine the difficulty of protein pairs: the fewer kernel-based approaches are able to classify a pair correctly, the more difficult the pair is. Different kernels' predictions vary heavily, as we reported in [14]. Here, we show that there exist protein pairs that are inherently difficult to classify (across all 13 kernels), and we investigate whether kernels with generally higher performance classify difficult pairs with greater success.

We define the concept of success level as the number of kernels being able to classify a given pair correctly. For CV evaluation we performed experiments with all 13 kernels, and therefore have success levels: 0,…,13. For CL evaluation, we omitted the very slow PT kernel (0,…,12). Figures 1 and 2 show the distribution of PPI pairs in terms of success level for CV and CL evaluation aggregated across the 5 corpora, respectively. We also show the same statistics for each corpus separately (Tables 1 and 2). Figure 3 shows the correlation between success levels of CV and CL.
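A minimal sketch of the success-level computation under the definition above, using a toy 0/1 correctness matrix (not the paper's code):

```python
import numpy as np

# Rows = pairs, columns = kernels; 1 means the kernel classified the pair correctly.
correct = np.array([
    [1, 1, 0, 1],   # pair 0: 3 of 4 kernels correct
    [0, 0, 0, 0],   # pair 1: difficult for all
    [1, 1, 1, 1],   # pair 2: easy for all
])
success_level = correct.sum(axis=1)
print(success_level)                             # [3 0 4]
print(np.bincount(success_level, minlength=5))   # distribution over levels 0..4
```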

Figure 1

The distribution of pairs according to classification success level in the cross-validation setting. The distribution of pairs (total, positive, and negative) in terms of the number of kernels that classify them correctly (success level), aggregated across the 5 corpora in the cross-validation setting. Detailed data for each corpus can be found in Table 1. All 13 kernels are taken into consideration.

Figure 2

The distribution of pairs according to classification success level in the cross-learning setting. The distribution of pairs (total, positive, and negative) in terms of the number of kernels that classify them correctly (success level), aggregated across the 5 corpora in the cross-learning setting. Detailed data for each corpus can be found in Table 2. All kernels except for the very slow PT kernel are taken into consideration.

Table 1 The distribution of pairs for each corpus according to classification success level using cross-validation setting
Table 2 The distribution of pairs for each corpus according to classification success level using cross-learning setting
Figure 3

Heatmap of success level correlation in CV and CL evaluations. Cell counts range from 2 pairs (cyan) through 63 (white) to 1266 (magenta). Hues are on a logarithmic scale.

The 10-15 percentage point difference in F-score between the CV and CL settings reported in [14] is most evident in the slightly better performance of classifiers on difficult pairs in the CV setting. For example, pairs not classified correctly by any kernel in the CL setting (CL00) are most likely correctly classified by some CV classifiers (CV01-CV05), as shown in Figure 3. Not surprisingly, the pairs correctly classified by most classifiers correlate well across the CV and CL settings (see the upper right corner of Figure 3). The pairs that are difficult in both evaluation settings (D) are a reasonable target for further inspection, as improving kernels to perform better on them would benefit both scenarios; we attempt to characterize such pairs in a subsequent section.

To better identify pairs that are difficult or easy to classify correctly, for each corpus we took the most difficult and the easiest 10% of pairs. For this, we cut off the set of pairs at the success level for which the resulting subset comes closest to 10%. Ultimately, we define more universal difficulty classes as the intersection of the respective difficulty classes in the CV and CL settings, e.g., D = D_CV ∩ D_CL. When the ground truth can be considered known, we may further define the intuitive subclasses negative difficult (ND), positive difficult (PD), negative easy (NE), and positive easy (PE), respectively.
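A sketch of this construction on toy success levels; `difficult_set` is an illustrative helper implementing the 10% cutoff rule described above:

```python
import numpy as np

def difficult_set(success_levels, target=0.10):
    """Indices of the ~10% hardest pairs: choose the success-level cutoff
    whose resulting subset size comes closest to `target` of all pairs."""
    best_cut, best_diff = 0, float("inf")
    for cut in range(int(success_levels.max()) + 2):
        frac = np.mean(success_levels < cut)
        if abs(frac - target) < best_diff:
            best_cut, best_diff = cut, abs(frac - target)
    return set(np.flatnonzero(success_levels < best_cut))

# Toy success levels for 8 pairs under CV (0..13 kernels) and CL (0..12 kernels).
cv = np.array([0, 1, 5, 9, 13, 13, 2, 12])
cl = np.array([0, 2, 4, 10, 12, 12, 1, 11])
D = difficult_set(cv) & difficult_set(cl)   # D = D_CV ∩ D_CL
print(sorted(D))                            # pair 0 is difficult in both settings
```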

We investigated whether and to what extent these classes of pairs overlap depending on the evaluation setting (see Table 3). We used the χ2-test to check whether the overlap between the two sets is significantly higher than expected if pairs were drawn at random; a p-value lower than 0.001 is considered significant. There are only a few cases where the correlation is not significant; we discuss separately the cases (1) where the ground truth is known (e.g., PD for HPRD50), and (2) where the ground truth is unknown (e.g., D for LLL).
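A sketch of this test with scipy; the 2×2 contingency counts below are made up for illustration:

```python
from scipy.stats import chi2_contingency

# Rows: pair in / not in D_CV; columns: pair in / not in D_CL (hypothetical counts).
table = [[150, 350],
         [400, 9100]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.1f}, p={p:.2e}")  # p < 0.001 -> overlap larger than random
```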

Table 3 The overlap of the pairs that are the most difficult and the easiest to classify correctly by the collection of kernels using cross-validation (CV) and cross-learning (CL) settings

For case (1), the very few exceptions (PD and PE at HPRD50, and PE at LLL) account for a mere 1% of PD and 6% of PE pairs. We can also see that the larger a corpus, the better the CV and CL evaluations “agree” on the difficulty class of pairs: the strongest correlations can be observed for BioInfer and AIMed.

Considering case (2), for LLL the intersection of difficult pairs in CV and CL happens to be empty. It was shown in [8, 14] that kernels tend to preserve the distribution of positive/negative classes from training to test. LLL has a particularly high ratio of positive examples (50%, compared to the average of 25% in the other four corpora). Therefore, kernels predict positive pairs more readily for LLL in the CV evaluation, in contrast to CL: in the CV evaluation negative pairs are difficult, while in the CL evaluation positive ones are. These factors and the relatively small size of the LLL corpus (2% of all five corpora) likely explain the empty intersection.

We conclude that our method for identifying the difficult and easy pairs of each class finds meaningful subsets of pairs. We identified 521 ND (negative difficult), 190 PD (positive difficult), 1510 NE (negative easy) and 219 PE (positive easy) pairs.

How kernels perform on difficult and easy pairs

In Table 4 we show how the different kernels perform on the 521 ND pairs. We report the same results for the PD, NE, and PE pairs, as well as for all four experiments in the CL setting (Tables 5, 6, 7, 8, 9, 10 and 11).

Table 4 Classification results on the 521 ND pairs with CV evaluation
Table 5 Classification results on the 521 ND pairs with CL evaluation
Table 6 Classification results on the 190 PD pairs with CV evaluation
Table 7 Classification results on the 190 PD pairs with CL evaluation
Table 8 Classification results on the 1510 NE pairs with CV evaluation
Table 9 Classification results on the 1510 NE pairs with CL evaluation
Table 10 Classification results on the 219 PE pairs with CV evaluation
Table 11 Classification results on the 219 PE pairs with CL evaluation

On difficult pairs (ND and PD), the measured number of true negatives (TN) is smaller than expected based on the class distribution of the kernels' predictions. This phenomenon can be attributed to the difficulty of the pairs. The same tendency can be observed for easy pairs (PE and NE) in the opposite direction.

The difference in performance between the CV and CL settings reported in [14] cannot be observed on ND pairs: kernels tend to create more general models in the CL setting and identify ND pairs with greater success on average. For PD pairs, kernels produce equally low results in both settings. On the other hand, kernels perform far better on easy pairs (both PE and NE) in the CV than in the CL setting. This shows that the more general CL models do not work as well on easy pairs as the rather corpus-specific CV models; that is, the smaller variability in training examples is also reflected in the performance of the learnt model.

As for individual kernels, the edit kernel shows the best performance on ND pairs, both in terms of TNs and relative to its expected performance. This can be attributed to the low probability of the positive class in edit's predictions, which is also manifested in its below-average performance on positive pairs (PD and PE) and its very good results on NE pairs. SpT, which exhibits by far the highest positive class ratio, performs relatively well on PD pairs both in terms of FNs and in expected relative performance (especially at CV); this kernel shows an analogous performance pattern on PD and NE pairs. As for the top-performing kernels (APG, SL, kBSPS; [14]): APG performs equally well on all pair subsets (above average or among the best), except on positive pairs at CL; SL is always above average, except on NDs at CV; kBSPS, however, works particularly well on easy pairs and rather badly on difficult ones (especially on NDs).

We observed that for difficult (D) pairs, some kernels perform better regardless of the class label: SST (CL and CV) and ST (CL only). However, this advantage cannot easily be exploited unless difficult pairs are identified in advance. Therefore, we next investigate whether difficulty classes can be predicted by observing only obvious surface features.

Relation between sentence length, entity distance and pair difficulty

In Figure 4 we show the characteristics of pair difficulty in terms of the average length of the sentence, the average distance between the entities, and the length of the shortest path in the parse tree. It can be observed that positive pairs are more difficult to classify in longer sentences, and negative pairs in shorter ones. This correlates with the average length of sentences containing positive and negative pairs being 27.6 and 37.2 words, respectively; these numbers coincide with the average sentence lengths of the neutral classes. This is also in accordance with the distribution of positive and negative pairs in terms of sentence length. Positive pairs occur more often in shorter sentences with fewer proteins (see Figures 5 and 6), and most of the analyzed classifiers fail to capture the characteristics of the rare positive pairs in longer sentences. Long sentences typically have a more complicated structure, so deep parsers are also prone to produce more erroneous parses, which makes the PPI extraction task especially difficult.

Figure 4

Characteristics of pairs by difficulty class (average sentence length in words, average word distance between entities, average distance on the dependency graph (DG) and syntax tree (ST) shortest path). ND - negative difficult, NN - negative neutral, NE - negative easy, PD - positive difficult, PN - positive neutral, PE - positive easy.

Figure 5

The number of positive and negative pairs vs. the length of the sentence containing the pair.

Figure 6

The positive ground truth rate vs. the length of the sentence containing the pair.

The distance in words between the entities of a pair seems to be largely independent of the pair's difficulty (see Figure 6). The entities of NE pairs are closer to each other than those of neutral or more difficult ones, while for positive pairs no such tendency can be observed: the distance in both PE and PD pairs is shorter than in neutral ones. On the other hand, one can also observe at this level that the entities of negative pairs are farther apart (9.67 words) than those of positive ones (7.15). At the dependency tree level, the difference is smaller: 4.56 (negative) versus 4.15 (positive).

We conclude that, according to all three distance measures (word, dependency tree, and syntax tree distance), the farther apart the entities of a negative pair are, the more difficult it is to classify. We also found that the entities of positive pairs are typically closer together than those of negative pairs.

Note that similar characteristics were observed in the BioNLP'09 event extraction task regarding the size of the minimal subgraph of the dependency graph that includes all triggers and arguments. It was shown in [33] that the size of this subgraph correlates with the class of the event: positive instances typically occur in smaller subgraphs. For the same dataset, [34] shows that the distance between a trigger and its potential arguments is much smaller for positive than for negative instances.

Next we looked into the relationship between pair difficulty and the number of entities in a sentence. In general, long sentences have more protein mentions, and the number of pairs increases quadratically with the number of mentions. We investigated the class distribution of pairs depending on the number of proteins in the sentence (see Figure 7). The more protein mentions a sentence contains, the lower the ratio of positive pairs. This is consistent with the previous experiment on PD pairs: long sentences contain only a few positive pairs, and those are difficult to classify.

Figure 7

Class distribution of pairs depending on the number of proteins in the sentence.

Finally, to predict the difficulty class of pairs based on their surface features, we applied a decision tree classifier, with results shown in Table 12. We found that predicting the difficult (D) class is particularly hard, with a recall of 20.8 and an F-score of 28.2, indicating that difficult pairs share very few characteristics.
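As an illustration of this experiment (not the authors' exact setup, which used the surface features of Table 15), a decision tree trained and evaluated with cross-validation on stand-in features:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import classification_report

rng = np.random.default_rng(1)
# Stand-in surface features, e.g., sentence length, word distance, path length.
X = rng.normal(size=(200, 3))
y = rng.integers(0, 2, size=200)   # 1 = difficult (D), 0 = other (toy labels)

pred = cross_val_predict(DecisionTreeClassifier(max_depth=5, random_state=0),
                         X, y, cv=10)
# On random toy data this reports chance-level precision/recall, mirroring
# how weakly the real surface features predict the D class.
print(classification_report(y, pred, target_names=["other", "difficult"]))
```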

Table 12 Classification of difficulty classes based on pair surface features by decision tree

Still, we found a number of correlations between pair difficulty and simple surface features that cannot be exploited by the kernels, as they use different feature sets. Later on, we show that such features already suffice to build a classifier that is almost on par with the state of the art, without using any sophisticated (and costly to compute) kernels.

Semantic errors in annotation

For some of the very hardest pairs (60 PD and 60 ND), we manually investigated whether their difficulty is actually caused by annotation errors. We identified 23 PD and 28 ND pairs that we considered incorrectly annotated (for the list of pair identifiers, see Table 13). Since the selection was drawn from the most difficult pairs, the relatively large number of incorrect annotations does not necessarily invalidate the entire experimentation, though it raises the question of whether re-annotation is necessary (see also [35]).

Table 13 Incorrectly annotated protein pairs selected from the very hardest positive and negative pairs

We investigated whether kernels (we only used APG and SL) could benefit from re-annotation by resetting the ground truth (GT) value of the above 51 pairs and re-running the experiments. Recall that a mere 0.3% of the GT values were changed, most of them in the BioInfer (36) and AIMed (12) corpora. We analyzed the performance change using both the original and the re-trained model on the re-annotated corpora (see Table 14). We observed a slight performance improvement using the original model (F-score gain 0.2-0.6). With the re-trained model, the performance of APG and SL could be further improved on both corpora (F-score gain 0.25-1.0). This shows that re-annotating corpora yields a performance gain even if only a small fraction of pairs is concerned.

Table 14 The effect on F-score when changing the ground truth of incorrectly annotated pairs with APG and SL kernels

Similarity of kernel methods

Classifier similarity is a key factor when constructing ensemble classifiers. We define the similarity of two kernels as the number of identically classified pairs divided by the total number of pairs. Performing hierarchical clustering with this similarity measure reveals that kernels using the same parsing information group together almost perfectly, i.e., they classify pairs much more similarly to each other than to kernels using different parsing information (see Figure 8). Syntax tree based kernels form a clear, separated cluster. Kim's kernels form a proper sub-cluster within the dependency-based kernels. The only kernel that uses neither dependency nor syntax data, SL, is grouped in the cluster of dependency-based kernels; interestingly, the outlier in this cluster is kBSPS and not SL. The best two kernels according to [14], APG and SL, are the most similar to each other, agreeing on 81% of the benchmark pairs.

Figure 8

Similarity of kernels as dendrogram and heat map. Colors below the dendrogram indicate the parsing information used by a kernel. Similarity of kernel outputs ranges from full agreement (red) to 33% disagreement (yellow) on the five benchmark corpora. Clustering is performed with R's hclust (http://stat.ethz.ch/R-manual/R-devel/library/stats/html/hclust.html).

Clearly, such characteristics can be exploited in building ensembles, as they allow a rational choice of base classifiers; we report on using such a strategy below.

Feature analysis

To assess the importance of the aforementioned features, we constructed a feature space representation of all pairs. We derived surface features from sentences and pairs (see Table 15), including tokens on the dependency graph (the same holds for dependency trees) and syntax tree shortest paths, thereby also incorporating parsing information. We then performed feature selection by information gain, using each difficulty class as the label. The ten most relevant features of the difficult (D) and easy (E) classes, according to an independent feature analysis, are tabulated in Table 16. Indicative features of the D class correlate negatively with the class label: sentence length, the entropy of POS labels along the syntax tree shortest path, the number of dependency labels of type dep (dependent: the fall-back dependency label assigned by the Stanford Parser when no specific label could be retrieved), and the number of proteins in the sentence. The importance of the dep feature suggests that pairs in sentences having more specific dependency labels are more difficult to predict correctly. For the E class, the entropy of edge labels in the entire syntax tree and dependency graph and the sentence length correlate positively, while the frequencies of nn, appos, conj_and, dep, det, etc. correlate negatively.

Table 15 Surface and parsing features generated from sentence text used for training non-kernel based classifiers
Table 16 The ten most important features related to difficult (D) and easy (E) classes measured by information gain

This experiment confirms that pairs in longer sentences tend to be more distant and more likely to be negative, and thus easier to predict. Several dependency labels correlate with positive pairs; their absence thus renders a pair easier to classify (as negative).
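A sketch of the feature-ranking step: information gain is the mutual information between a feature and the class label, for which scikit-learn's mutual_info_classif serves as a stand-in here (the feature names and data are hypothetical):

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 4))
# Toy label driven mostly by the first feature, so it should rank highest.
y = (X[:, 0] + 0.1 * rng.normal(size=300) > 0).astype(int)

gain = mutual_info_classif(X, y, random_state=0)
names = ["sent_len", "word_dist", "dep_path_len", "n_proteins"]  # hypothetical
for name, g in sorted(zip(names, gain), key=lambda t: -t[1]):
    print(f"{name}: {g:.3f}")
```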

Non-kernel based classifiers

We also compared kernel-based classifiers with some linear, non-kernel-based classifiers as implemented in Weka [36], using the surface feature space created for the feature analysis (see Table 15). We ran experiments with nine different methods: decision trees (J48, LADTree, RandomForest), k-NN (KStar), rule learners (JRip, PART), Bayesian methods (NaiveBayes, BayesNet), and regression methods (Logistic). We found that the best surface-based classifier, BayesNet, is on par with or better than all kernel-based classifiers except APG, SL, and kBSPS (see Figure 9). On the larger corpora, BayesNet attains an F-score of 43.4 on AIMed and 54.6 on BioInfer, which is surpassed only by the above three kernels. On the smaller corpora, which are easier to classify as they contain more positive examples, the advantage of kernel-based approaches shrinks further.

Figure 9

Comparison of some non-kernel based and kernel based classifiers in terms of F-score (CV evaluation). The first nine are non-kernel-based classifiers, the last four are kernel-based.

Conclusions

In this paper we performed a thorough instance-level comparison of kernel based approaches for binary relation (PPI) extraction on benchmark corpora.

First, we proposed a method for identifying different difficulty classes of protein pairs independently of the evaluation setting. Protein interactions are expressed linguistically in diverse ways, and the complexity of these expressions influences the ability of automated methods to classify the pairs correctly. We hypothesized that the linguistic complexity of expressing an interaction correlates with classification performance in general, that is, that there are protein pairs on which kernels tend to err independently of the applied evaluation setting (CV or CL). Difficulty classes of pairs were defined based on the success level of kernels in classifying them. We showed that difficulty classes correlate with certain surface features of the pair and of the sentence containing it, especially word distance and the shortest path length between the two proteins in the dependency graph and in the syntax tree. Using these and other surface features, we built linear classifiers that yield results comparable to many of the much more sophisticated kernels. Similar vector space classifiers have been used previously for PPI extraction by [31], however without an in-depth comparison with existing kernels and in a different evaluation setting. These observations suggest that PPI extraction performance depends far more on the feature set than on the similarity function encoded in the kernel, and that future research in the field should focus on finding more expressive features rather than more complex kernel functions. It should be noted, however, that ever larger feature sets require considerably more computational resources, increasing the runtime especially for large-scale experiments. Since the size of the currently available training corpora does not keep up with the linguistic diversity, we see two alternatives as possible solutions. The first, computationally more economical strategy focuses on decreasing the linguistic variability using graph rewriting rules at the parse level (see, for instance, [37, 38]). The other alternative is to extend the available training corpora, e.g., by converting certain PPI-specific event-level annotations (e.g., regulation, phosphorylation) in event databases, such as the GENIA event data [39], to PPI annotations. As an existing example, BioInfer originally contained richer event information and was transformed into a PPI corpus using some simplifying rules [8].

Second, we built an ensemble by combining three kernels with a simple majority voting scheme. We chose kBSPS, SL, and APG, as these show above-average results across various evaluation settings but still exhibit considerable disagreement at the instance level (see Figure 8). Combining them leads to a performance improvement of more than 2 percentage points in F-score over the best member's performance (see Table 17). We also observed a performance increase when combining other kernels, but the results were not on par with those of the better-performing kernels, showing that a detailed comparison of kernels in terms of their false positives and false negatives is very helpful for choosing base classifiers for ensembles. Furthermore, we expect that an even higher performance gain can be achieved by employing more sophisticated ensemble construction methods, such as bagging or stacking [40, 41]. An alternative approach [42] was to build a meta-classifier: dependency trees were classified into five different classes depending on the relative position of the verb and the two proteins, and a separate classifier was learnt for each of these classes.
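A minimal sketch of the majority vote over three kernels' binary predictions (the arrays are toy values, not actual kernel outputs):

```python
import numpy as np

def majority_vote(*predictions):
    """Each argument is a 0/1 prediction array from one kernel; a pair is
    positive if more than half of the kernels predict positive."""
    votes = np.sum(predictions, axis=0)
    return (votes * 2 > len(predictions)).astype(int)

kbsps = np.array([1, 0, 1, 1])
sl    = np.array([1, 1, 0, 0])
apg   = np.array([0, 1, 1, 0])
print(majority_vote(kbsps, sl, apg))  # [1 1 1 0]
```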

Table 17 Results of some simple majority vote ensembles and comparison with best single methods in terms of F-score

Third, the identification of difficult protein pairs was found to be highly useful for spotting likely incorrect annotations in the benchmark corpora. We deemed 45% of the 120 manually checked difficult pairs to be incorrectly annotated. We also showed that even very few re-annotated pairs (below 1% of the total) influence the kernels' performance: the re-trained models could generalize the information beyond the affected pairs and showed a systematic performance gain over the original model. Since our method for finding incorrect annotations is fully automatic, it could be used to decrease the workload of curators during corpus revision.

Overall, we showed that 1-2% of PPI instances are misclassified by all 13 kernels we considered, independent of which evaluation setting (and hence which training set) was used. Vastly more, 19-30%, of PPI instances are misclassified by the majority of these kernels. We also showed that, although a number of features correlate with the “difficulty” of instances, simple combinations of these features are not able to tell apart true and false protein pairs. These observations lower the hope that novel types of kernels (using the same input representations) will achieve a breakthrough in PPI extraction performance.

We conclude that one should be rather pessimistic about breakthroughs from kernel-based methods for PPI extraction. Current methods do not seem to do very well in capturing the characteristics shared by positive PPI pairs, which must also be attributed to the heterogeneity of the (still very few) available corpora. We see three main possibilities to escape this situation, some of which have already proven successful in other domains or in other extraction tasks (see the references below). For all three directions we provide examples found among the 120 examined difficult cases.

The first is to switch focus to more specific forms of interaction, such as regulation, phosphorylation, or complex-building [43, 44]. Among the difficult cases, it can be observed that incorrectly classified indirect PPIs (e.g., B.d14.s1.p2, A.d78.s669.p2) tend to be regulatory relationships. As other types of PPIs may be less affected by this issue, the move from generic PPIs to more specific relations should allow for higher performance on those PPI subtypes. Looking at such more crisply defined problems will likely lead to more homogeneous data and thus raise the chances of classifiers finding the characteristics shared by positive and by negative instances, respectively.

Second, we believe that advances could be achieved if methods considered additional background knowledge, for instance by adding it as features of the pair. This encompasses detailed knowledge about the proteins under consideration (such as their function, participation in protein families, or evolutionary relationships) and about the semantics of the terms surrounding them. For instance, some false positive pairs were found to contain two proteins that have nearly identical functional properties or that are orthologs. As such co-occurrences are less likely to describe actual interactions, a more informed approach could benefit from taking these aspects into consideration.

Third, pattern-based methods, which are capable of capturing even exotic instances, might be worth looking into again. Even early pattern-based methods are only slightly worse than machine learning approaches [28, 45], although they did not fully leverage the advances the NLP community has made, especially in telling “good” patterns from bad ones [46, 47]. Many difficult false positives turned out to be misinterpreted linguistic constructs such as enumerations and coreferences. Such constructs might be dealt with more appropriately using linguistic/syntactic patterns. Note, however, that some other pairs found in sentences with such constructs (e.g., B.d765.s0.p10, A.d28.s234.p1) were correctly classified by all kernel methods in our assessment. Combining intelligent pattern selection with semi-supervised methods for pattern generation [38, 48] seems especially promising.

Abbreviations

PPI: Protein-protein interaction
SVM: Support vector machine
RLS: Regularized least squares
POS-tag: Part-of-speech tag
NLP: Natural language processing
SL: Shallow linguistic kernel
ST: Subtree kernel
SST: Subset tree kernel
PT: Partial tree kernel
SpT: Spectrum tree kernel
edit: Edit distance kernel
cosine: Cosine similarity kernel
kBSPS: k-band shortest path spectrum kernel
APG: All-paths graph kernel
CV: Cross-validation
CL: Cross-learning
T: True
F: False
GT: Ground truth
TP: True positive
TN: True negative
FP: False positive
FN: False negative
D: Difficult
N: Neutral
E: Easy
ND: Negative difficult
PD: Positive difficult
NE: Negative easy
PE: Positive easy
dep: dependent
nn: noun compound modifier
appos: appositional modifier
conj: conjunct

References

  1. Blaschke C, Andrade MA, Ouzounis C, Valencia A: Automatic extraction of biological information from scientific text: protein-protein interactions. Proc Int Conf Intell Syst Mol Biol 1999, 7:60-67.
  2. Ono T, Hishigaki H, Tanigami A, Takagi T: Automated extraction of information on protein-protein interactions from the biological literature. Bioinformatics 2001, 17(2):155. doi:10.1093/bioinformatics/17.2.155
  3. Marcotte EM, Xenarios I, Eisenberg D: Mining literature for protein-protein interactions. Bioinformatics 2001, 17(4):359. doi:10.1093/bioinformatics/17.4.359
  4. Huang M, Zhu X, Hao Y, Payan DG, Qu K, Li M: Discovering patterns to extract protein-protein interactions from full texts. Bioinformatics 2004, 20(18):3604. doi:10.1093/bioinformatics/bth451
  5. Cohen AM, Hersh WR: A survey of current work in biomedical text mining. Brief Bioinformatics 2005, 6:57. doi:10.1093/bib/6.1.57
  6. Krallinger M, Valencia A, Hirschman L: Linking genes to literature: text mining, information extraction, and retrieval applications for biology. Genome Biol 2008, 9(Suppl 2):S8. doi:10.1186/gb-2008-9-s2-s8
  7. Zhou D, He Y: Extracting interactions between proteins from the literature. J Biomed Inform 2008, 41(2):393-407. doi:10.1016/j.jbi.2007.11.008
  8. Pyysalo S, Airola A, Heimonen J, Björne J, Ginter F, Salakoski T: Comparative analysis of five protein-protein interaction corpora. BMC Bioinformatics 2008, 9(Suppl 3):S6. doi:10.1186/1471-2105-9-S3-S6
  9. Sarawagi S: Information extraction. Found Trends Databases 2008, 1:261-377. http://dl.acm.org/citation.cfm?id=1498844.1498845
  10. Haussler D: Convolution kernels on discrete structures. Tech. Rep. UCSC-CRL-99-10, University of California at Santa Cruz, Santa Cruz, CA, USA; 1999.
  11. Schölkopf B, Smola A: Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. Cambridge, MA, USA: MIT Press; 2002.
  12. Arighi C, Lu Z, Krallinger M, Cohen K, Wilbur W, Valencia A, Hirschman L, Wu C: Overview of the BioCreative III workshop. BMC Bioinformatics 2011, 12(Suppl 8):S1. doi:10.1186/1471-2105-12-S8-S1
  13. Kim JD, Pyysalo S, Ohta T, Bossy R, Nguyen N, Tsujii J: Overview of BioNLP shared task 2011. In Proceedings of the BioNLP Shared Task 2011 Workshop. Association for Computational Linguistics; 2011:1-6. http://www.aclweb.org/anthology/W11-1801
  14. Tikk D, Thomas P, Palaga P, Hakenberg J, Leser U: A comprehensive benchmark of kernel methods to extract protein-protein interactions from literature. PLoS Comput Biol 2010, 6(7):e1000837. doi:10.1371/journal.pcbi.1000837
  15. Kim S, Yoon J, Yang J: Kernel approaches for genic interaction extraction. Bioinformatics 2008, 24:118-126. doi:10.1093/bioinformatics/btm544
  16. Fayruzov T, De Cock M, Cornelis C, Hoste V: Linguistic feature analysis for protein interaction extraction. BMC Bioinformatics 2009, 10:374. doi:10.1186/1471-2105-10-374
  17. Giuliano C, Lavelli A, Romano L: Exploiting shallow linguistic information for relation extraction from biomedical literature. In Proc. of the 11th Conf. of the European Chapter of the ACL (EACL'06). Trento: Association for Computational Linguistics; 2006:401-408. http://acl.ldc.upenn.edu/E/E06/E06-1051.pdf
  18. Vishwanathan SVN, Smola AJ: Fast kernels for string and tree matching. In Proc. of Neural Information Processing Systems (NIPS'02). Vancouver, BC, Canada; 2002:569-576.
  19. Collins M, Duffy N: Convolution kernels for natural language. In Proc. of Neural Information Processing Systems (NIPS'01). Vancouver, BC, Canada; 2001:625-632.
  20. Moschitti A: Efficient convolution kernels for dependency and constituent syntactic trees. In Proc. of the 17th European Conf. on Machine Learning. Berlin, Germany; 2006:318-329.
  21. Kuboyama T, Hirata K, Kashima H, Aoki-Kinoshita KF, Yasuda H: A spectrum tree kernel. Inf Media Technol 2007, 2:292-299.
  22. Erkan G, Özgür A, Radev DR: Semi-supervised classification for extracting protein interaction sentences using dependency parsing. In Proc. of the 2007 Joint Conf. on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL). Prague, Czech Republic; 2007:228-237. http://www.aclweb.org/anthology/D/D07/D07-1024
  23. Airola A, Pyysalo S, Björne J, Pahikkala T, Ginter F, Salakoski T: All-paths graph kernel for protein-protein interaction extraction with evaluation of cross-corpus learning. BMC Bioinformatics 2008, 9(Suppl 11):S2. doi:10.1186/1471-2105-9-S11-S2
  24. Joachims T: Making large-scale support vector machine learning practical. In Advances in Kernel Methods: Support Vector Learning. Cambridge, MA: MIT Press; 1999.
  25. Chang CC, Lin CJ: LIBSVM: a library for support vector machines. 2001. Software available at http://www.csie.ntu.edu.tw/cjlin/libsvm
  26. Bunescu R, Ge R, Kate RJ, Marcotte EM, Mooney RJ, Ramani AK, Wong YW: Comparative experiments on learning information extractors for proteins and their interactions. Artif Intell Med 2005, 33(2):139-155. doi:10.1016/j.artmed.2004.07.016
  27. Pyysalo S, Ginter F, Heimonen J, Bjorne J, Boberg J, Jarvinen J, Salakoski T: BioInfer: a corpus for information extraction in the biomedical domain. BMC Bioinformatics 2007, 8:50. doi:10.1186/1471-2105-8-50
  28. Fundel K, Küffner R, Zimmer R: RelEx - relation extraction using dependency parse trees. Bioinformatics 2007, 23(3):365-371. doi:10.1093/bioinformatics/btl616
  29. Ding J, Berleant D, Nettleton D, Wurtele E: Mining Medline: abstracts, sentences, or phrases? Pac Symp Biocomput 2002, 7:326-337.
  30. Nedellec C: Learning language in logic - genic interaction extraction challenge. In Proc. of the ICML05 Workshop: Learning Language in Logic (LLL'05), Volume 18. Bonn, Germany; 2005:97-99.
  31. Miwa M, Sætre R, Miyao Y, Tsujii J: A rich feature vector for protein-protein interaction extraction from multiple corpora. In Proc. of the 2009 Conf. on Empirical Methods in Natural Language Processing (EMNLP'09). Stroudsburg: ACL; 2009:121-130. http://portal.acm.org/citation.cfm?id=1699510.1699527
  32. Kim S, Yoon J, Yang J, Park S: Walk-weighted subsequence kernels for protein-protein interaction extraction. BMC Bioinformatics 2010, 11:107. doi:10.1186/1471-2105-11-107
  33. Van Landeghem S, De Baets B, Van de Peer Y, Saeys Y: High-precision bio-molecular event extraction from text using parallel binary classifiers. Comput Intell 2011, 27(4):645-664. doi:10.1111/j.1467-8640.2011.00403.x
  34. Buyko E, Faessler E, Wermter J, Hahn U: Syntactic simplification and semantic enrichment - trimming dependency graphs for event extraction. Comput Intell 2011, 27(4):610-644. doi:10.1111/j.1467-8640.2011.00402.x
  35. Cusick M, Yu H, Smolyar A, Venkatesan K, Carvunis A, Simonis N, Rual J, Borick H, Braun P, Dreze M, et al.: Literature-curated protein interaction datasets. Nat Methods 2008, 6:39-46.
  36. Witten IH, Frank E: Data Mining: Practical Machine Learning Tools and Techniques. 2nd edition. San Francisco: Morgan Kaufmann; 2005.
  37. Miwa M, Pyysalo S, Hara T, Tsujii J: Evaluating dependency representations for event extraction. In Proc. of the 23rd Int. Conf. on Computational Linguistics (Coling'10). Beijing, China; 2010:779-787. http://www.aclweb.org/anthology/C10-1088
  38. Thomas P, Pietschmann S, Solt I, Tikk D, Leser U: Not all links are equal: exploiting dependency types for the extraction of protein-protein interactions from text. In Proc. of BioNLP'11. Portland: ACL; 2011:1-9. http://www.aclweb.org/anthology/W11-0201
  39. Kim JD, Ohta T, Tsujii J: Corpus annotation for mining biomedical events from literature. BMC Bioinformatics 2008, 9:10. doi:10.1186/1471-2105-9-10
  40. Breiman L: Bagging predictors. Mach Learn 1996, 24:123-140.
  41. Wolpert D: Stacked generalization. Neural Netw 1992, 5(2):241-259. doi:10.1016/S0893-6080(05)80023-1
  42. Bui QC, Katrenko S, Sloot PMA: A hybrid approach to extract protein-protein interactions. Bioinformatics 2011, 27(2):259. doi:10.1093/bioinformatics/btq620
  43. Koike A, Kobayashi Y, Takagi T: Kinase pathway database: an integrated protein-kinase and NLP-based protein-interaction resource. Genome Res 2003, 13(6A):1231-1243.
  44. Miwa M, Sætre R, Kim JD, Tsujii J: Event extraction with complex event classification using rich features. J Bioinform Comput Biol 2010, 8:131-146. doi:10.1142/S0219720010004586
  45. Plake C, Schiemann T, Pankalla M, Hakenberg J, Leser U: AliBaba: PubMed as a graph. Bioinformatics 2006, 22(19):2444-2445. doi:10.1093/bioinformatics/btl408
  46. Banko M, Cafarella MJ, Soderland S, Broadhead M, Etzioni O: Open information extraction from the web. In Proc. of IJCAI'07; 2007:2670-2676. http://turing.cs.washington.edu/papers/ijcai07.pdf
  47. Xu F, Uszkoreit H, Li H: A seed-driven bottom-up machine learning framework for extracting relations of various complexity. In Proc. of ACL'07; 2007:584-591.
  48. Liu H, Komandur R, Verspoor K: From graphs to events: a subgraph matching approach for information extraction from biomedical text. In Proc. of BioNLP'11. Portland, OR, USA; 2011:164-172. http://www.aclweb.org/anthology/W11-1826


Acknowledgements

D. Tikk was supported by the Alexander von Humboldt Foundation. I. Solt was supported by TÁMOP-4.2.2.B-10/1-2010-0009. P. Thomas was supported by the German Ministry for Education and Research (BMBF grant no. 0315417B). Part of this work was done while D. Tikk was with the Budapest University of Technology and Economics (Hungary).

Author information

Correspondence to Domonkos Tikk.

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

Conceived and designed the experiments: DT, IS, UL. Performed the experiments: DT, IS, PT. Analyzed the data: DT, IS, PT. Wrote the paper: DT, IS, PT, UL. All authors read and approved the final manuscript.


Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Cite this article

Tikk, D., Solt, I., Thomas, P. et al. A detailed error analysis of 13 kernel methods for protein-protein interaction extraction. BMC Bioinformatics 14, 12 (2013). https://doi.org/10.1186/1471-2105-14-12