
This article is part of the supplement: Selected articles from The 8th Annual Biotechnology and Bioinformatics Symposium (BIOT-2011)

Open Access Research

Model averaging strategies for structure learning in Bayesian networks with limited data

Bradley M Broom1*, Kim-Anh Do2 and Devika Subramanian3

Author Affiliations

1 Department of Bioinformatics and Computational Biology, UT MD Anderson Cancer Center, Houston, Texas 77030, USA

2 Department of Biostatistics, UT MD Anderson Cancer Center, Houston, Texas 77030, USA

3 Department of Computer Science, Rice University, Houston, Texas 77005, USA


BMC Bioinformatics 2012, 13(Suppl 13):S10  doi:10.1186/1471-2105-13-S13-S10

Published: 24 August 2012



Background

Considerable progress has been made on algorithms for learning the structure of Bayesian networks from data. Model averaging over bootstrap replicates, with feature selection by thresholding, is a widely used approach for learning features with high confidence. Yet in the context of limited data, many questions remain unanswered. Which scoring functions are most effective for model averaging? Does the bias arising from the discreteness of the bootstrap significantly affect learning performance? Is it better to pick the single best network or to average multiple networks learnt from each bootstrap resample? How should thresholds for learning statistically significant features be selected?
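The bootstrap-plus-thresholding pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's method: `learn_edges` is a hypothetical stand-in for a score-based structure learner, and the 0.8 confidence threshold is an arbitrary choice for the example.

```python
import random
from collections import Counter
from itertools import combinations

def learn_edges(rows, variables):
    """Hypothetical stand-in learner: declares an edge between two binary
    variables when they agree in at least 90% of rows. In practice this
    step would be a score-based Bayesian network structure search."""
    edges = set()
    for a, b in combinations(variables, 2):
        agree = sum(1 for r in rows if r[a] == r[b])
        if agree >= 0.9 * len(rows):
            edges.add((a, b))
    return edges

def bootstrap_edge_confidence(rows, variables, n_boot=200, seed=0):
    """Model averaging: learn one network per bootstrap resample and
    report each edge's selection frequency across the resamples."""
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(n_boot):
        resample = [rng.choice(rows) for _ in rows]  # sample rows with replacement
        counts.update(learn_edges(resample, variables))
    return {edge: c / n_boot for edge, c in counts.items()}

# Toy data: X and Y agree ~95% of the time, Z is independent noise.
rng = random.Random(1)
rows = []
for _ in range(100):
    x = rng.randint(0, 1)
    rows.append({"X": x,
                 "Y": x if rng.random() < 0.95 else 1 - x,
                 "Z": rng.randint(0, 1)})

conf = bootstrap_edge_confidence(rows, ["X", "Y", "Z"])
stable = {edge for edge, c in conf.items() if c >= 0.8}  # feature selection by thresholding
print(sorted(stable))
```

Only the X–Y edge should survive thresholding, since its selection frequency across resamples is high while the spurious pairs involving Z are rarely selected.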


Results

The best scoring functions are the Dirichlet Prior Scoring Metric (DPSM) with small λ and the Bayesian Dirichlet metric. Correcting the bias arising from the discreteness of the bootstrap worsens learning performance. It is better to pick the single best network learnt from each bootstrap resample than to average several. We describe a permutation-based method for determining significance thresholds for feature selection in bagged models. We show that in contexts with limited data, Bayesian bagging using DPSM is the most effective learning strategy, and that modifying the scoring function to penalize complex networks hampers model averaging. We establish these results through a systematic study of two well-known benchmarks, ALARM and INSURANCE. We also apply our network construction method to gene expression data from The Cancer Genome Atlas Glioblastoma multiforme dataset and show that survival is related to the clinical covariates age and gender and to clusters of interferon-induced genes and growth-inhibition genes.
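Two ingredients mentioned above can be sketched briefly: Bayesian bagging replaces the ordinary bootstrap's integer resample counts with continuous Dirichlet row weights, and a permutation scheme sets the feature-selection threshold from a null distribution of edge confidences. The function names and the quantile rule here are illustrative assumptions, not the paper's actual interfaces.

```python
import random

def bayesian_bootstrap_weights(n, rng):
    """Bayesian bootstrap (hypothetical helper): draw per-row weights from a
    flat Dirichlet(1, ..., 1) by normalising exponential draws. Every row
    keeps a nonzero weight, avoiding the discreteness of the ordinary
    bootstrap, in which a row is included an integer number of times."""
    raw = [rng.expovariate(1.0) for _ in range(n)]
    total = sum(raw)
    return [w / total for w in raw]

def permutation_threshold(null_confidences, alpha=0.05):
    """Hypothetical sketch of a permutation-based cutoff: given edge
    confidences computed on permuted (dependence-destroyed) data, take the
    (1 - alpha) empirical quantile as the significance threshold."""
    s = sorted(null_confidences)
    k = min(len(s) - 1, int((1 - alpha) * len(s)))
    return s[k]

rng = random.Random(0)
weights = bayesian_bootstrap_weights(5, rng)  # continuous weights summing to 1
# Assumed null confidences from permuted data, all small by construction.
null_conf = [rng.random() * 0.3 for _ in range(100)]
cutoff = permutation_threshold(null_conf)
print(weights, cutoff)
```

An edge whose bagged confidence exceeds `cutoff` would then be deemed statistically significant under this scheme.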


Conclusions

For small data sets, our approach performs significantly better than previously published methods.