
This article is part of the supplement: Machine Learning for Biomedical Literature Analysis and Text Retrieval

Open Access Research

Building a biomedical tokenizer using the token lattice design pattern and the adapted Viterbi algorithm

Neil Barrett* and Jens Weber-Jahnke

Author Affiliations

Department of Computer Science, University of Victoria, Victoria, Canada


BMC Bioinformatics 2011, 12(Suppl 3):S1  doi:10.1186/1471-2105-12-S3-S1

Published: 9 June 2011

Abstract

Background

Tokenization is an important component of language processing, yet there is no widely accepted tokenization method for English texts, including biomedical texts. Aside from rule-based techniques, tokenization in the biomedical domain has typically been treated as a classification task. Biomedical classifier-based tokenizers either split or join textual objects through classification to form tokens. The idiosyncratic nature of each biomedical tokenizer's output complicates adoption and reuse. Furthermore, biomedical tokenizers generally lack guidance on how to apply an existing tokenizer to a new domain (subdomain). We identify and complete a novel tokenizer design pattern and suggest a systematic approach to tokenizer creation. We implement a tokenizer based on our design pattern that combines regular expressions and machine learning. Our machine learning approach differs from the previous split-join classification approaches. We evaluate our approach against three other tokenizers on the task of tokenizing biomedical text.
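The token lattice underlying this design pattern can be illustrated with a minimal sketch. The code below is a hypothetical toy, not the authors' implementation: it proposes token boundaries wherever the character class changes (letter, digit, or punctuation), then enumerates every subset of those boundaries. Each subset corresponds to one path through the token lattice, i.e., one candidate tokenization of a string such as "2.5mg".

```python
def candidate_tokenizations(text):
    """Enumerate candidate tokenizations (lattice paths) of a
    whitespace-free string. Illustrative toy, not the paper's code."""

    def char_class(c):
        # Coarse character classes used to propose boundaries.
        if c.isalpha():
            return "alpha"
        if c.isdigit():
            return "digit"
        return "punct"

    # Propose a boundary wherever the character class changes.
    boundaries = [i for i in range(1, len(text))
                  if char_class(text[i - 1]) != char_class(text[i])]

    # Each subset of proposed boundaries is one path through the lattice.
    candidates = []
    for mask in range(1 << len(boundaries)):
        cuts = ([0]
                + [b for k, b in enumerate(boundaries) if mask >> k & 1]
                + [len(text)])
        candidates.append([text[i:j] for i, j in zip(cuts, cuts[1:])])
    return candidates
```

For "2.5mg" this yields eight candidates, including the single token ["2.5mg"], the pair ["2.5", "mg"], and the fully split ["2", ".", "5", "mg"]; a separate disambiguation step must then select one path.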

Results

Medpost and our adapted Viterbi tokenizer performed best, with 92.9% and 92.4% accuracy, respectively.

Conclusions

Our evaluation of our design pattern and guidelines supports our claim that they are a viable approach to tokenizer construction, producing tokenizers that match leading custom-built tokenizers in a particular domain. Our evaluation also demonstrates that ambiguous tokenizations can be disambiguated through POS tagging. Consequently, POS tag sequences and training data have a significant impact on proper text tokenization.
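The idea of disambiguating lattice paths with POS information can be sketched as a Viterbi-style dynamic program. The lexicon scores below are invented toy values, not the paper's trained model: at each character position we keep the best-scoring tokenization reaching it, where a candidate token's score is the log-probability of its best POS tag.

```python
import math

# Toy (token, POS tag) -> log-probability lexicon. Illustrative only;
# the paper would derive such scores from a trained POS tagger.
LEXICON = {
    ("2.5", "CD"): math.log(0.9),
    ("mg", "NN"): math.log(0.9),
    ("2.5mg", "NN"): math.log(0.1),
    ("2", "CD"): math.log(0.3),
    (".", "."): math.log(0.3),
    ("5", "CD"): math.log(0.3),
}

def best_tokenization(text):
    """Viterbi-style search over the token lattice: best[i] holds the
    highest-scoring tokenization of text[:i]."""
    best = {0: (0.0, [])}
    for i in range(len(text)):
        if i not in best:
            continue
        score_i, tokens = best[i]
        for j in range(i + 1, len(text) + 1):
            piece = text[i:j]
            # Score the candidate token with its best POS tag, if known.
            tag_scores = [s for (tok, _tag), s in LEXICON.items()
                          if tok == piece]
            if not tag_scores:
                continue
            candidate = (score_i + max(tag_scores), tokens + [piece])
            if j not in best or candidate[0] > best[j][0]:
                best[j] = candidate
    return best.get(len(text), (float("-inf"), []))[1]
```

Under these toy scores, `best_tokenization("2.5mg")` prefers the path ["2.5", "mg"] over the unsplit "2.5mg" and the fully split alternative, mirroring how tag-sequence probabilities can select among ambiguous tokenizations.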