Estimating mutual information using B-spline functions – an improved similarity measure for analysing gene expression data
1 Max Planck Institute of Molecular Plant Physiology, Potsdam, 14424, Germany
2 Nonlinear Dynamics Group, Institute of Physics, University of Potsdam, Potsdam, 14415, Germany
3 Scienion AG, Volmerstrasse 7a, Berlin, 12489, Germany
4 Center for Genomics and Bioinformatics, Karolinska Institutet, Stockholm, 17177, Sweden
BMC Bioinformatics 2004, 5:118. doi:10.1186/1471-2105-5-118. Published: 31 August 2004.
The information-theoretic concept of mutual information provides a general framework for evaluating dependencies between variables. In the context of clustering genes with similar expression patterns, it has been suggested as a general measure of similarity that extends the commonly used linear measures. Since mutual information is defined in terms of discrete variables, its application to continuous data requires binning procedures, which can introduce significant numerical errors for datasets of small or moderate size.
In this work, we propose a method for the numerical estimation of mutual information from continuous data. We investigate the characteristic properties of our algorithm and show that it outperforms commonly used approaches: the significance, as a measure of the power to distinguish true dependencies from random correlation, is markedly increased. We illustrate this concept on two large-scale gene expression datasets and compare the results to those obtained with other similarity measures.
The C++ source code of our algorithm is available for non-commercial use upon request from firstname.lastname@example.org.
Using mutual information as a similarity measure enables the detection of non-linear correlations in gene expression datasets. It thereby extends the frequently applied linear correlation measures, which are often used on an ad hoc basis without further justification.