A corpus of full-text journal articles is a robust evaluation tool for revealing differences in performance of biomedical natural language processing tools
1 Computational Bioscience Program, U. Colorado School of Medicine, 12801 E 17th Ave, Aurora, MS 8303, CO 80045, USA
2 Department of Linguistics, University of Colorado Boulder, Boulder, 290 Hellems, CO 80309, USA
3 Institute of Cognitive Science, University of Colorado Boulder, Boulder, MUEN PSYCH Building D414, CO 80309, USA
4 Department of Computer Science, Brandeis University, Waltham, MS 018, MA 02454, USA
BMC Bioinformatics 2012, 13:207, doi:10.1186/1471-2105-13-207. Published: 17 August 2012
We introduce the linguistic annotation of a corpus of 97 full-text biomedical publications, known as the Colorado Richly Annotated Full Text (CRAFT) corpus. We further assess the performance of existing tools for sentence splitting, tokenization, syntactic parsing, and named entity recognition on this corpus.
Many biomedical natural language processing systems showed large differences between their previously published results and their performance on the CRAFT corpus when tested with the publicly available models or rule sets. Trainable systems differed widely in their ability to build high-performing models from these data.
The finding that some systems were able to train high-performing models on this corpus is additional evidence, beyond high inter-annotator agreement, that the quality of the CRAFT corpus is high. The overall poor performance of various systems indicates that considerable work remains to be done before natural language processing systems can handle full-text journal articles well. The CRAFT corpus provides a valuable resource to the biomedical natural language processing community for evaluating and training new models on full-text biomedical publications.