
This article is part of the supplement: Ninth Annual MCBIOS Conference. Dealing with the Omics Data Deluge

Open Access Proceedings

Empirical evaluation of scoring functions for Bayesian network model selection

Zhifa Liu1,2, Brandon Malone1,3 and Changhe Yuan1,4*

Author Affiliations

1 Department of Computer Science and Engineering, Mississippi State University, Mississippi State, MS 39762, USA

2 Department of Epidemiology and Public Health, School of Medicine, Yale University, New Haven, CT 06511, USA

3 Department of Computer Science, Helsinki Institute for Information Technology, Fin-00014 University of Helsinki, Finland

4 Department of Computer Science, Queens College/City University of New York, Flushing, NY 11367, USA


BMC Bioinformatics 2012, 13(Suppl 15):S14  doi:10.1186/1471-2105-13-S15-S14

Published: 11 September 2012

Abstract

In this work, we empirically evaluate the ability of various Bayesian network scoring functions to recover the true underlying structures. Similar investigations have been carried out before, but they typically relied on approximate learning algorithms to learn the network structures. The suboptimal structures found by the approximation methods have unknown quality and may affect the reliability of their conclusions. Our study uses an optimal algorithm to learn Bayesian network structures from datasets generated from a set of gold-standard Bayesian networks. Because optimal algorithms always learn equivalent networks, this ensures that only the choice of scoring function affects the learned networks. Another shortcoming of the previous studies stems from their use of random synthetic networks as test cases; there is no guarantee that these networks reflect real-world data. We use real-world data to generate our gold-standard structures, so our experimental design more closely approximates real-world situations. A major finding of our study is that, in contrast to results reported in several prior works, the Minimum Description Length (MDL) score (or equivalently, the Bayesian information criterion, BIC) consistently outperforms other scoring functions, such as Akaike's information criterion (AIC), the Bayesian Dirichlet equivalence score (BDeu), and the factorized normalized maximum likelihood (fNML), in recovering the underlying Bayesian network structures. We believe this finding results from two aspects of our design: datasets generated from real-world applications rather than from the random processes used in previous studies, and learning algorithms that select high-scoring structures rather than random models. Other findings of our study support existing work, e.g., larger sample sizes yield learned structures closer to the true underlying structure; the BDeu score is sensitive to its parameter settings; and fNML performs well on small datasets. We also tested a greedy hill-climbing algorithm and observed results similar to those of the optimal algorithm.
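To make the comparison concrete, the sketch below shows how the BIC/MDL and AIC scores of a single node in a discrete Bayesian network are typically computed: a maximized log-likelihood term minus a complexity penalty, where BIC penalizes each free parameter by (log N)/2 and AIC by 1. This is an illustrative implementation under common assumptions, not the authors' code; the function name `local_score` and the choice to count only observed parent configurations are our own simplifications.

```python
import math
from collections import Counter

def local_score(data, child, parents, criterion="bic"):
    """Decomposable score of one node given its parents in a discrete
    Bayesian network.

    data: list of dicts mapping variable name -> observed value.
    criterion: "bic" (equivalently MDL, up to sign convention) or "aic".
    """
    n = len(data)
    child_values = {row[child] for row in data}
    # Co-occurrence counts of (parent configuration, child value).
    joint = Counter((tuple(row[p] for p in parents), row[child]) for row in data)
    parent_counts = Counter(tuple(row[p] for p in parents) for row in data)
    # Maximized log-likelihood: sum over cells of N_ijk * log(N_ijk / N_ij).
    ll = sum(c * math.log(c / parent_counts[pa]) for (pa, _), c in joint.items())
    # Free parameters: (r - 1) per parent configuration, where r is the
    # child's cardinality. (Counting only observed configurations is a
    # practical simplification.)
    k = (len(child_values) - 1) * len(parent_counts)
    penalty = (math.log(n) / 2) * k if criterion == "bic" else k
    return ll - penalty

# The full network score is the sum of local scores over all nodes; a
# structure learner (optimal or greedy hill climbing) searches for the
# parent sets maximizing that sum.
data = [{"A": 0, "B": 0}, {"A": 0, "B": 0}, {"A": 1, "B": 1}, {"A": 1, "B": 1}]
print(local_score(data, "B", ("A",)))  # higher: A predicts B perfectly
print(local_score(data, "B", ()))      # lower: no parent, higher entropy
```

Note how the penalty term drives the trade-off the paper studies: with small N, AIC's weaker penalty tends to admit denser structures, while BIC/MDL's (log N)/2 factor prunes edges more aggressively.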