This article is part of the supplement: Proceedings of the Neural Information Processing Systems (NIPS) Workshop on Machine Learning in Computational Biology (MLCB)

Open Access Research

Inferring latent task structure for Multitask Learning by Multiple Kernel Learning

Christian Widmer1*, Nora C Toussaint2, Yasemin Altun3 and Gunnar Rätsch1

Author Affiliations

1 Friedrich Miescher Laboratory, Max Planck Society, Spemannstr. 39, 72076 Tübingen, Germany

2 Center for Bioinformatics Tübingen, Eberhard-Karls-Universität, Sand 14, 72076 Tübingen, Germany

3 Max Planck Institute for Biological Cybernetics, Spemannstr. 38, 72076 Tübingen, Germany

BMC Bioinformatics 2010, 11(Suppl 8):S5  doi:10.1186/1471-2105-11-S8-S5

Published: 26 October 2010

Abstract

Background

The lack of sufficient training data is a limiting factor for many Machine Learning applications in Computational Biology. If data is available for several different but related problem domains, Multitask Learning algorithms can be used to learn a model based on all available information. In Bioinformatics, many problems can be cast into the Multitask Learning scenario by incorporating data from several organisms. However, combining information from several tasks requires careful consideration of the degree of similarity between tasks. Our proposed method simultaneously learns or refines the similarity between tasks along with the Multitask Learning classifier. This is done by formulating the Multitask Learning problem as a Multiple Kernel Learning problem, using the recently published q-Norm MKL algorithm.
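The core idea of casting Multitask Learning as a kernel problem can be sketched as follows. The snippet below is a minimal, hypothetical illustration, not the authors' q-Norm MKL method: it uses a fixed task-similarity matrix `B` (whereas the paper *learns* this structure via MKL) and builds a multitask kernel of the form K((x,s),(x',t)) = B[s,t]·k(x,x'), then trains a single SVM on the pooled data from two toy tasks. All variable names and the synthetic data are illustrative assumptions.

```python
# Sketch of multitask learning with a task-similarity kernel (hypothetical
# setup; the paper instead learns the task structure via q-Norm MKL).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Two related toy tasks: same underlying decision rule, slightly shifted data.
X1 = rng.normal(size=(40, 5)); y1 = np.sign(X1[:, 0] + 0.1)
X2 = rng.normal(size=(40, 5)) + 0.3; y2 = np.sign(X2[:, 0] + 0.1)

X = np.vstack([X1, X2])
y = np.concatenate([y1, y2])
task = np.array([0] * 40 + [1] * 40)  # task id for each example

B = np.array([[1.0, 0.5],   # task-similarity matrix (hand-picked values here;
              [0.5, 1.0]])  # in the paper this structure is learned via MKL)

K_base = X @ X.T                      # linear base kernel k(x, x')
K = B[np.ix_(task, task)] * K_base    # multitask kernel K = B[s,t] * k(x, x')

clf = SVC(kernel="precomputed").fit(K, y)
print("training accuracy:", clf.score(K, y))
```

With `B[0,1] = 0` the tasks would be trained in isolation; with `B[0,1] = 1` the data would simply be pooled. Learning the entries of `B` (via kernel weights in an MKL formulation) interpolates between these extremes, which is the refinement the paper automates.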

Results

We demonstrate the performance of our method on two problems from Computational Biology. First, we show that our method can improve performance on a splice site dataset with a given hierarchical task structure by refining the task relationships. Second, we consider an MHC-I dataset for which we assume no knowledge about the degree of task relatedness. Here, we learn the task similarities ab initio along with the Multitask classifiers. In both cases, our method outperforms the baseline methods we compare against.

Conclusions

We present a novel approach to Multitask Learning that is capable of learning task similarity along with the classifiers. The framework is very general: it allows the incorporation of prior knowledge about task relationships where available, but can also identify task similarities in the absence of such prior information. Both variants show promising results in applications from Computational Biology.