This article is part of the supplement: Selected articles from the 10th International Workshop on Computational Systems Biology (WCSB) 2013: Bioinformatics

Open Access Research

Multi-scale Gaussian representation and outline-learning based cell image segmentation

Muhammad Farhan1*, Pekka Ruusuvuori1, Mario Emmenlauer2, Pauli Rämö2, Christoph Dehio2 and Olli Yli-Harja1

Author Affiliations

1 Department of Signal Processing, Tampere University of Technology, 33720 Tampere, Finland

2 Biozentrum, Universität Basel, 4056 Basel, Switzerland


BMC Bioinformatics 2013, 14(Suppl 10):S6  doi:10.1186/1471-2105-14-S10-S6

Published: 12 August 2013



Background

High-throughput genome-wide screening for studying gene-specific functions, e.g. in drug discovery, demands fast automated image analysis to unlock the full potential of such studies. Image segmentation is typically at the forefront of this analysis, as the performance of subsequent steps, such as cell classification and cell tracking, often relies on its results.


Methods

We present a cell cytoplasm segmentation framework that first separates cell cytoplasm from the image background using a novel approach based on image enhancement and the coefficient of variation of a multi-scale Gaussian scale-space representation. A novel outline-learning classification method, built on regularized logistic regression with embedded feature selection, then classifies image pixels as outline or non-outline to produce cytoplasm outlines. A post-processing step refines the detected outlines to separate cells from each other, using the nuclei segmentation as contextual information.
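The scale-space coefficient-of-variation idea can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `scale_space_cv` name, the choice of Gaussian scales, and the epsilon guard are all assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def scale_space_cv(image, sigmas=(1, 2, 4, 8)):
    """Per-pixel coefficient of variation (std/mean) across a Gaussian
    scale-space stack. Pixels whose intensity varies strongly across
    scales (e.g. at cell structures) get high values, while flat
    background stays low -- a cue for foreground/background separation.

    The sigma values are illustrative assumptions, not from the paper.
    """
    stack = np.stack([gaussian_filter(image.astype(float), s) for s in sigmas])
    mean = stack.mean(axis=0)
    std = stack.std(axis=0)
    eps = 1e-12  # guard against division by zero in flat background
    return std / (mean + eps)
```

Thresholding such a coefficient-of-variation map would then yield a rough foreground mask that the subsequent outline classification can refine.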

Results and conclusions

We evaluate the proposed segmentation methodology on two challenging test cases whose images have completely different characteristics, with cells of varying size, shape, texture and degree of overlap. The feature selection and classification framework for outline detection produces very simple sparse models that use only a small subset of the large, generic feature set: only 7 and 5 features for the two cases, respectively. Quantitative comparison against state-of-the-art methods shows that our methodology outperforms them, with a 4-9% increase in segmentation accuracy and a maximum accuracy of 93%. Finally, the results obtained for these diverse datasets demonstrate that our framework not only produces accurate segmentations but also generalizes well to different segmentation tasks.
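How regularized logistic regression with embedded feature selection can yield such sparse models may be illustrated with an L1 penalty, which drives most coefficients to exactly zero. The synthetic feature matrix, labels, and regularization strength below are assumptions for demonstration only, not the paper's pixel features.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical stand-in for a pixel feature matrix X (n_pixels x n_features)
# with binary outline/non-outline labels y; only the first 5 of 40
# features carry signal, mimicking a large generic feature set.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))
w_true = np.zeros(40)
w_true[:5] = 2.0
y = (X @ w_true + rng.normal(scale=0.5, size=500)) > 0

# L1-penalized logistic regression: the penalty performs embedded
# feature selection by zeroing out uninformative coefficients.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
clf.fit(X, y)
selected = np.flatnonzero(clf.coef_[0])  # indices of surviving features
```

With a suitably strong penalty (small `C`), `selected` contains only a handful of features, analogous to the 7- and 5-feature models reported above.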