
This article is part of the supplement: Proceedings of the BioNLP 08 ACL Workshop: Themes in biomedical language processing

Open Access Research

How to make the most of NE dictionaries in statistical NER

Yutaka Sasaki1*, Yoshimasa Tsuruoka1, John McNaught1,2 and Sophia Ananiadou1,2

Author Affiliations

1 School of Computer Science, University of Manchester, 131 Princess Street, Manchester, M1 7DN, UK

2 National Centre for Text Mining, Manchester Interdisciplinary Biocentre, 131 Princess Street, Manchester, M1 7DN, UK


BMC Bioinformatics 2008, 9(Suppl 11):S5  doi:10.1186/1471-2105-9-S11-S5

Published: 19 November 2008



When term ambiguity and variability are very high, dictionary-based Named Entity Recognition (NER) is not an ideal solution, even though large-scale terminological resources are available. Many studies on statistical NER have tried to cope with these problems. However, it is not straightforward to exploit existing and additional Named Entity (NE) dictionaries in statistical NER. Presumably, adding NEs to an NE dictionary should lead to better performance; in practice, however, achieving this requires retraining the NER model. We chose protein name recognition as a case study because it suffers most from heavy term variation and ambiguity.


We have established a novel way to improve NER performance by adding NEs to an NE dictionary without retraining. In our approach, known NEs are first identified in parallel with Part-of-Speech (POS) tagging, based on a general word dictionary and an NE dictionary. A statistical NER model is then trained on the POS/PROTEIN tagger outputs with correct NE labels attached.
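The dictionary-lookup step described above can be sketched as a longest-match scan over a token sequence. This is a hypothetical illustration, not the paper's implementation: the dictionary entries, tokenization, and the `PROTEIN`/`O` tag inventory here are all assumptions for the sake of the example, and in the actual system non-matching tokens would receive ordinary POS tags rather than a stub label.

```python
def tag_with_dictionary(tokens, protein_dict, max_len=5):
    """Assign PROTEIN to the longest dictionary match starting at each
    position; all other tokens get a stub 'O' tag (in the real pipeline
    these would carry POS tags instead)."""
    tags = ["O"] * len(tokens)
    i = 0
    while i < len(tokens):
        matched = 0
        # Try the longest candidate span first, shrinking until a match.
        for n in range(min(max_len, len(tokens) - i), 0, -1):
            if " ".join(tokens[i:i + n]) in protein_dict:
                matched = n
                break
        if matched:
            for j in range(i, i + matched):
                tags[j] = "PROTEIN"
            i += matched
        else:
            i += 1
    return tags

# Toy dictionary and sentence (illustrative only).
proteins = {"interleukin - 2", "NF - kappa B"}
sent = "NF - kappa B activates interleukin - 2 expression".split()
print(tag_with_dictionary(sent, proteins))
```

Because the lookup is a plain set-membership test, new protein names can be added to `protein_dict` at any time without touching the downstream statistical model, which is the property the paper's approach exploits.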


We evaluated the performance of our NER method on the standard JNLPBA-2004 data set. The F-score on the test set improved from 73.14 to 73.78 after protein names appearing in the training data were added to the POS tagger dictionary, without any model retraining. Performance further increased to 78.72 after the tagging dictionary was enriched with test-set protein names.
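The F-scores reported above are the standard harmonic mean of precision and recall. A minimal sketch of the computation, using illustrative precision/recall values rather than the paper's actual figures:

```python
def f_score(precision, recall):
    """F1: harmonic mean of precision and recall (both in [0, 1])."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Illustrative values only, not taken from the JNLPBA-2004 evaluation.
print(round(100 * f_score(0.75, 0.72), 2))
```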


Our approach has demonstrated high performance in protein name recognition, indicating how to make the most of known NEs in statistical NER.