Rebooting the human mitochondrial phylogeny: an automated and scalable methodology with expert knowledge
1 Departamento de Informática e Ingeniería de Sistemas, Universidad de Zaragoza, María de Luna 1, 50018 Zaragoza, Spain
2 Instituto de Investigación en Ingeniería de Aragón, Universidad de Zaragoza, María de Luna 1, 50018 Zaragoza, Spain
3 Departamento de Bioquímica y Biología Molecular y Celular, Universidad de Zaragoza, Miguel Servet 177, 50013 Zaragoza, Spain
4 Centro de Investigación Biomédica en Red de Enfermedades Raras, Miguel Servet 177, 50013 Zaragoza, Spain
5 Agencia Aragonesa para la Investigación y el Desarrollo, Miguel Servet 177, 50013 Zaragoza, Spain
BMC Bioinformatics 2011, 12:174. doi:10.1186/1471-2105-12-174. Published: 19 May 2011
Mitochondrial DNA is an ideal source of information for evolutionary and phylogenetic studies owing to its distinctive properties and abundance. Many insights can be gained from it, including, but not limited to, screening genetic variation to identify potentially deleterious mutations. However, such advances require efficient solutions to very hard computational problems, a need paradoxically hampered by the very abundance of data that gives these analyses their strength.
We develop a systematic, automated methodology to overcome these difficulties, building from readily available public sequence databases to high-quality alignments and phylogenetic trees. At each stage of the autonomous workflow, outputs are carefully evaluated and outlier-detection rules are defined that integrate expert knowledge with automated curation, thereby avoiding the manual bottleneck found in past approaches to the problem. Using these techniques, we have performed exhaustive updates of the human mitochondrial phylogeny, illustrating the power and computational scalability of our approach, and we have conducted initial analyses of the resulting phylogenies.
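The kind of outlier-detection rule applied between workflow stages can be sketched as follows. This is an illustrative example only, not the paper's actual rule set: it assumes sequences are held as (id, string) pairs, and the thresholds (a length z-score cutoff and a maximum fraction of ambiguous bases) are hypothetical placeholders for expert-tuned values.

```python
def flag_outliers(seqs, z_cutoff=3.0, max_n_frac=0.02):
    """Return ids of sequences failing simple quality rules.

    A sequence is flagged when its length deviates from the sample mean
    by more than z_cutoff standard deviations, or when its fraction of
    ambiguous bases ('N') exceeds max_n_frac. Flagged records would be
    routed to expert review rather than silently dropped.
    """
    lengths = [len(s) for _, s in seqs]
    mean = sum(lengths) / len(lengths)
    var = sum((l - mean) ** 2 for l in lengths) / len(lengths)
    std = var ** 0.5 or 1.0  # avoid division by zero for uniform lengths
    flagged = []
    for sid, s in seqs:
        n_frac = s.upper().count("N") / len(s)
        if abs(len(s) - mean) / std > z_cutoff or n_frac > max_n_frac:
            flagged.append(sid)
    return flagged
```

Low-risk decisions (clear passes and clear failures) can then be automated, while borderline cases are escalated, which is one way to combine automated curation with expert knowledge.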
The problem at hand demands a careful definition of inputs and adequate algorithmic treatment if its solutions are to be realistic and useful. Formal rules can address the former requirement by refining inputs directly and by combining them into outputs, and such rules also help ascertain the performance of the chosen algorithms. Rules can exploit known or inferred properties of datasets to simplify inputs through partitioning, thereby cutting computational costs and making it feasible to work on rapidly growing, otherwise intractable datasets. Although expert guidance may be necessary to assist the learning process, low-risk results can be fully automated and have proved convenient and valuable.
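The cost savings from rule-based partitioning can be illustrated with a minimal sketch. The `assign_group` classifier below is a hypothetical stand-in for whatever property the rules exploit (for instance, an inferred haplogroup assignment); the point is only that splitting n records into k roughly equal partitions reduces a quadratic-cost step from O(n²) to about O(n²/k), since each partition can be processed independently.

```python
from collections import defaultdict

def partition(records, assign_group):
    """Split records into independent subsets keyed by a rule-derived label.

    Each subset can then be aligned or tree-built separately, so a step
    whose cost grows quadratically with input size becomes much cheaper:
    k equal parts of size n/k cost k * (n/k)^2 = n^2 / k in total.
    """
    parts = defaultdict(list)
    for rec in records:
        parts[assign_group(rec)].append(rec)
    return dict(parts)
```

For example, `partition(range(4), lambda x: x % 2)` groups even and odd numbers into two independent subsets, just as a haplogroup rule would group sequences before per-group alignment.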