This article is part of the supplement: A critical assessment of text mining methods in molecular biology

Open Access Report

Learning Statistical Models for Annotating Proteins with Function Information using Biomedical Text

Soumya Ray 1,2 and Mark Craven 1,2

Author affiliations

1 Department of Computer Sciences, University of Wisconsin, Madison, Wisconsin 53706, USA

2 Department of Biostatistics and Medical Informatics, University of Wisconsin, Madison, Wisconsin 53706, USA

Citation and License

BMC Bioinformatics 2005, 6(Suppl 1):S18, doi:10.1186/1471-2105-6-S1-S18

Published: 24 May 2005

Abstract

Background

The BioCreative text mining evaluation investigated the application of text mining methods to the task of automatically extracting information from the text of biomedical research articles. We participated in Task 2 of the evaluation. For this task, we built a system that automatically annotates a given protein with codes from the Gene Ontology (GO), using the text of an article from the biomedical literature as evidence.

Methods

Our system relies on simple statistical analyses of the provided full-text article. We learn an n-gram model for each GO code using statistical methods, and use these models to hypothesize annotations. We also learn a set of Naïve Bayes models that identify textual clues of possible connections between the given protein and a hypothesized annotation; these models are used to filter and rank the predictions of the n-gram models.
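The first stage of this approach, scoring a passage under a per-GO-code language model and keeping the best-scoring code, can be sketched as follows. This is a minimal illustration only: it uses add-one-smoothed unigram models and invented toy training passages rather than the authors' actual n-gram features or corpora, and the Naïve Bayes filtering stage is omitted.

```python
import math
from collections import Counter

def train_unigram(docs):
    """Add-one-smoothed unigram model over tokenized training passages.
    Returns (log-probability table, log-probability for unseen tokens)."""
    counts = Counter(tok for doc in docs for tok in doc.lower().split())
    total = sum(counts.values())
    vocab = len(counts) + 1  # reserve one slot for unseen tokens
    table = {t: math.log((c + 1) / (total + vocab)) for t, c in counts.items()}
    return table, math.log(1.0 / (total + vocab))

def score(model, passage):
    """Log-likelihood of a passage under a trained unigram model."""
    table, unseen = model
    return sum(table.get(tok, unseen) for tok in passage.lower().split())

# Toy training passages per GO code (invented for illustration).
training = {
    "GO:0006915": ["the protein triggers apoptosis and programmed cell death",
                   "caspase activation leads to apoptosis"],
    "GO:0006412": ["ribosome binding during translation of the mrna",
                   "the protein promotes translation initiation"],
}
models = {code: train_unigram(docs) for code, docs in training.items()}

def hypothesize(passage):
    """Stage 1: return the GO code whose model best explains the passage."""
    return max(models, key=lambda code: score(models[code], passage))

print(hypothesize("caspase cleavage induces programmed cell death"))
# -> GO:0006915
```

In the paper's pipeline, hypotheses produced this way would then be filtered and re-ranked by Naïve Bayes models that look for textual evidence linking the specific protein to the candidate annotation.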

Results

On a set of data held out during development, we report experiments evaluating the utility of various components of our system and of the external data sources used to learn our models. Finally, we report our results from the evaluation conducted by the BioCreative organizers.

Conclusion

We observe that, on the test data, our system performs quite well relative to the other systems submitted to the evaluation. From other experiments on the held-out data, we observe that (i) the Naïve Bayes models were effective in filtering and ranking the initially hypothesized annotations, and (ii) our learned models were significantly more accurate when external data sources were used during learning.