Open Access Research article

Image-level and group-level models for Drosophila gene expression pattern annotation

Qian Sun13, Sherin Muckatira13, Lei Yuan13, Shuiwang Ji4, Stuart Newfeld2, Sudhir Kumar125 and Jieping Ye13*

Author Affiliations

1 Center for Evolutionary Medicine and Informatics, The Biodesign Institute, Arizona State University, Tempe, AZ 85287, USA

2 School of Life Sciences, Arizona State University, Tempe, AZ 85287, USA

3 Ira A. Fulton Schools of Engineering, Arizona State University, Tempe, AZ 85287, USA

4 Department of Computer Science, Old Dominion University, Norfolk, VA 23529, USA

5 Center of Excellence in Genomic Medicine Research, King Abdulaziz University, Jeddah, Saudi Arabia


BMC Bioinformatics 2013, 14:350  doi:10.1186/1471-2105-14-350

Published: 3 December 2013



Drosophila melanogaster has been established as a model organism for investigating developmental gene interactions. The spatio-temporal gene expression patterns of Drosophila melanogaster can be visualized by in situ hybridization and documented as digital images. Automated and efficient tools for analyzing these expression images will provide biological insights into gene functions, interactions, and networks. To facilitate pattern recognition and comparison, many web-based resources have been created for comparative analysis based on body-part keywords and the associated images. With the rapid accumulation of images from high-throughput techniques, manual inspection imposes a serious impediment on the pace of biological discovery. It is thus imperative to design an automated system for efficient image annotation and comparison.


We present a computational framework for anatomical keyword annotation of Drosophila gene expression images. A spatial sparse coding approach is used to represent local image patches and is compared with the well-known bag-of-words (BoW) method. Three pooling functions, max pooling, average pooling, and Sqrt pooling (the square root of mean squared statistics), are employed to transform the sparse codes into image features. Based on the constructed features, we develop both an image-level scheme and a group-level scheme to tackle the key challenges in automatically annotating Drosophila gene expression pattern images. To deal with the imbalanced data distribution inherent in image annotation tasks, undersampling is applied together with majority voting. Results on Drosophila embryonic expression pattern images verify the efficacy of our approach.
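The three pooling functions can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes each image is represented by a matrix of sparse codes with one row per local patch, and the function and parameter names are our own. Max pooling here takes the maximum of absolute code values per basis, a common convention in sparse-coding pipelines.

```python
import numpy as np

def pool_codes(codes, method="max"):
    """Pool a matrix of sparse codes (n_patches x n_basis) into one
    feature vector per image using one of three pooling functions."""
    if method == "max":
        # max pooling: largest absolute activation of each basis vector
        return np.max(np.abs(codes), axis=0)
    if method == "average":
        # average pooling: mean absolute activation of each basis vector
        return np.mean(np.abs(codes), axis=0)
    if method == "sqrt":
        # Sqrt pooling: square root of mean squared statistics
        return np.sqrt(np.mean(codes ** 2, axis=0))
    raise ValueError(f"unknown pooling method: {method}")
```

Whatever the pooling choice, the output dimension equals the dictionary size rather than the (much larger) number of patches, which is why pooling doubles as feature dimension reduction.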


In our experiments, the three pooling functions perform comparably well for feature dimension reduction. Undersampling with majority voting is shown to be effective in tackling the problem of imbalanced data. Moreover, combining sparse coding with the image-level scheme leads to consistent performance improvements in keyword annotation.
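Undersampling with majority voting can be sketched as below. This is a generic illustration under our own assumptions, not the paper's exact procedure: `base_fit_predict` is a hypothetical callback standing in for any binary classifier, and most keywords are assumed to label far fewer images (positives) than not (negatives).

```python
import numpy as np

def undersample_majority_vote(X, y, base_fit_predict, n_rounds=11, seed=0):
    """Train several classifiers, each on a balanced undersample of the
    majority class, and combine their predictions by majority vote.

    base_fit_predict(X_train, y_train, X_test) should return 0/1 labels.
    """
    rng = np.random.default_rng(seed)
    pos = np.where(y == 1)[0]  # minority class (keyword present)
    neg = np.where(y == 0)[0]  # majority class (keyword absent)
    votes = np.zeros((n_rounds, len(X)))
    for r in range(n_rounds):
        # draw as many negatives as there are positives, without replacement
        sampled_neg = rng.choice(neg, size=len(pos), replace=False)
        idx = np.concatenate([pos, sampled_neg])
        votes[r] = base_fit_predict(X[idx], y[idx], X)
    # a sample is labeled positive if more than half the rounds vote positive
    return (votes.mean(axis=0) > 0.5).astype(int)
```

Each round sees a balanced training set, so no single classifier is dominated by the majority class, and the vote across rounds smooths out the variance introduced by discarding negatives.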