Abstract
Background
Distance matrix methods constitute a major family of phylogenetic estimation methods, and the minimum evolution (ME) principle (aiming at recovering the phylogeny with shortest length) is one of the most commonly used optimality criteria for estimating phylogenetic trees. The major difficulty for its application is that the number of possible phylogenies grows exponentially with the number of taxa analyzed, and optimization under the minimum evolution principle is known to be a computationally hard problem.
Results
In this paper, we introduce an Ant Colony Optimization (ACO) algorithm to estimate phylogenies under the minimum evolution principle. ACO is an optimization technique inspired by the foraging behavior of real ant colonies. This behavior is exploited in artificial ant colonies to search for approximate solutions to discrete optimization problems.
Conclusion
We show that the ACO algorithm is potentially competitive with state-of-the-art algorithms for the minimum evolution principle. This is the first application of an ACO algorithm to the phylogenetic estimation problem.
Background
The Minimum Evolution (ME) principle is a commonly used principle to estimate phylogenetic trees of a set Γ of n species (taxa) given an n × n symmetric matrix D = {d_{ij}} of evolutionary distances. First introduced by Kidd and Sgaramella-Zonta [1] and subsequently reinterpreted by Rzhetsky and Nei [2,3], the ME principle aims at finding a phylogeny characterized by a minimal sum of branch lengths, under the auxiliary criteria that branches have a positive length and that the pairwise distances on the tree are not smaller than the directly observed pairwise differences. Its biological justification is based on the fact that, when unbiased estimates of the true distances are available, the correct phylogenetic tree has an expected length shorter than any other possible tree [2,3] compatible with the distances in D. More formally, the ME principle can be expressed in terms of the following optimization problem:
Problem 1. Minimum Evolution under Least-Squares (LS)

min ‖v‖_1
s.t. f(D, X, v) = 0, X ∈ 𝒳    (1)

where ‖·‖_1 is the L1 vector norm; v is a vector of the 2n − 3 edge lengths; X is an n(n − 1)/2 × (2n − 3) topological matrix [4] encoding a phylogenetic tree as an unrooted binary tree with the n taxa in Γ as terminal vertices (leaves); 𝒳 is the set of all topological matrices; finally, f(·, ·, ·) defines the level of compatibility between the distances in D and the distances induced by the phylogenetic tree edges. Any optimal solution (X*, v*) of problem (1) defines a phylogenetic tree satisfying the minimum evolution principle. A topological matrix X is an Edge-Path incidence matrix of a Tree (EPT) (see [5], and additional files 1 and 2) that encodes a tree as follows: a generic entry x_{ij,k} is set to 1 if edge k belongs to the path from leaf i to leaf j, and to 0 otherwise. In the rest of the paper we refer to problem (1) as the ME problem.
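To make the EPT encoding concrete, the following sketch (our own illustration, not code from the paper; the node labels and helper names are ours) builds the topological matrix of the four-taxon tree ((1,2),(3,4)), which has n(n − 1)/2 = 6 rows and 2n − 3 = 5 columns:

```python
from itertools import combinations

def ept_matrix(adj, leaves, edges):
    """Edge-Path incidence matrix of a Tree: row (i, j), column k is 1
    iff edge k lies on the unique path between leaves i and j."""
    def path(a, b):
        # depth-first search for the unique path from a to b in a tree
        stack = [(a, [a])]
        while stack:
            v, p = stack.pop()
            if v == b:
                return p
            for w in adj[v]:
                if w not in p:
                    stack.append((w, p + [w]))
    rows = []
    for i, j in combinations(leaves, 2):
        p = path(i, j)
        on_path = {frozenset(e) for e in zip(p, p[1:])}
        rows.append([1 if frozenset(e) in on_path else 0 for e in edges])
    return rows

# the quartet ((1,2),(3,4)): leaves 1..4, internal vertices 'a' and 'b'
adj = {1: ['a'], 2: ['a'], 3: ['b'], 4: ['b'],
       'a': [1, 2, 'b'], 'b': [3, 4, 'a']}
edges = [(1, 'a'), (2, 'a'), (3, 'b'), (4, 'b'), ('a', 'b')]
X = ept_matrix(adj, [1, 2, 3, 4], edges)
```

Each row has exactly the edges of one leaf-to-leaf path set to 1; e.g., the row for the pair (1, 2) is [1, 1, 0, 0, 0] because that path uses only the two pendant edges.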
Additional File 1. An ant colony optimization algorithm for phylogenetic estimation under the minimum evolution principle – supplementary material. The supplementary file includes discussions on the structure of the EPT matrices as well as how we generate and enumerate topologies in ACOME.
The distance matrix D of problem (1) is estimated from the dataset, e.g., according to any of the methods described in [6-12]. The condition f(D, X, v) = 0 typically imposes that, for any given EPT matrix X, v minimizes the (weighted) sum of the squared differences between the distances in D and the corresponding distances induced by the phylogenetic tree edges [6,13]. In particular, under unweighted least-squares (also called Ordinary Least-Squares (OLS)) [2], v is computed as:

v = X^† D^Δ

where X^† is the Moore-Penrose pseudoinverse of X, and D^Δ is a vector whose components are obtained by taking, row by row, the entries of the strictly upper triangular part of D.
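As a worked illustration (ours, under the assumption of perfectly additive distances), the OLS fit v = X^† D^Δ recovers the edge lengths of the quartet tree ((1,2),(3,4)) exactly, and the ME objective is the sum of the fitted lengths:

```python
import numpy as np

# EPT matrix of the quartet ((1,2),(3,4)); rows index the leaf pairs
# (1,2),(1,3),(1,4),(2,3),(2,4),(3,4); columns index the 2n-3 = 5 edges.
X = np.array([[1, 1, 0, 0, 0],
              [1, 0, 1, 0, 1],
              [1, 0, 0, 1, 1],
              [0, 1, 1, 0, 1],
              [0, 1, 0, 1, 1],
              [0, 0, 1, 1, 0]], dtype=float)

v_true = np.array([1.0, 2.0, 3.0, 4.0, 0.5])  # assumed edge lengths
d = X @ v_true                                # additive distances D^Delta

v_ols = np.linalg.pinv(X) @ d                 # v = X† D^Delta (OLS fit)
tree_length = v_ols.sum()                     # ME objective ||v||_1 (v >= 0 here)
```

Because X has full column rank for a binary tree, the pseudoinverse solution is unique; with noisy (non-additive) distances the same formula returns the least-squares fit rather than an exact recovery.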
Other authors [14,15] have suggested the use of a Weighted Least-Squares (WLS) function, in which v is computed as:

v = (X^T W X)^{−1} X^T W D^Δ

where W is a strictly positive definite diagonal matrix whose entries w_{ij} represent weights associated with leaves i and j. Finally, Hasegawa et al. [16] introduced a Generalized Least-Squares (GLS) function in which v is computed using:

v = (X^T C^{−1} X)^{−1} X^T C^{−1} D^Δ
where C is a strictly positive definite symmetric matrix representing the covariance matrix of D. To avoid the occurrence of negative branch lengths [14,17], problem (1) can be modified as follows:
Problem 2. Minimum Evolution under Linear Programming (LP)

min ‖v‖_1
s.t. X v ≥ D^Δ, v ≥ 0, X a topological matrix    (2)
Unfortunately, both problems (1) and (2) are hard [18]. In this context, let us observe that, given Γ, the cardinality of the set of topological matrices is:

(2n − 5)!! = 1 × 3 × 5 × ⋯ × (2n − 5)

where n!! is the double factorial of n. Hence, the number of topological matrices grows exponentially with the number of leaves ([6], p. 25, and see additional files 1 and 2).
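The double-factorial count can be sketched in a few lines (our own illustration); note how quickly it explodes, which is why exhaustive search is limited to roughly a dozen taxa:

```python
def num_unrooted_binary_trees(n):
    """(2n-5)!! distinct unrooted binary topologies on n >= 3 labeled leaves."""
    count = 1
    for k in range(3, 2 * n - 4, 2):   # product 1 * 3 * 5 * ... * (2n-5)
        count *= k
    return count
```

For n = 12 (the exhaustive-search limit of PAUP* mentioned below) there are already 654,729,075 topologies.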
Problem (1) has received considerable attention from the scientific community, and both exact and approximate algorithms have been developed to solve it. Exact algorithms are typically based on an exhaustive approach (i.e., enumerating all possible trees X). As an example, PAUP* 4.0 [19] allows exhaustive search for datasets containing up to 12 taxa. A number of heuristics were also developed in the last 20 years. E.g., Rzhetsky and Nei [2,3] (i) start from a Neighbor-Joining (NJ) tree [20,21], (ii) apply a local search generating topologies within a given topological distance (see [2]) from the NJ tree, and (iii) return the best topology found. Kumar [22] further improved the approach as follows: starting from a topology, a leaf l is selected at each step and all possible assignments of l on the topology are tested. Although the neighborhood size in Kumar's approach is larger than in Rzhetsky and Nei's algorithm, it requires examining a number of topologies that is, at most, (n − 1)!/2, an exponential function of the number of leaves n, and it generates solutions in a shorter computing time. Finally, Bryant and Waddell [4] implemented programming optimizations, and Desper and Gascuel [23] introduced a greedy search, both of which improved the speed and accuracy of the search.
Here, we introduce an Ant Colony Optimization (ACO) algorithm for estimating phylogenies under the minimum evolution principle, and show that ACO has the potential to compete with other widely-used methods. ACO (see [24,25] for an introduction, and [26,27] for recent reviews) is a widely-used metaheuristic approach for solving hard combinatorial optimization problems. ACO is inspired by the pheromone trail laying and following behavior of real ants. It implements indirect communication among simple agents, called (artificial) ants; communication is mediated by (artificial) pheromone trails, implemented as a probabilistic model to which the ants adapt during the algorithm's execution to reflect their search experience. ACO has proven a successful technique for numerous hard combinatorial optimization problems (see [28]), although no application to the ME phylogeny problem was previously known. Our specific implementation of the ACO algorithm exploits a stochastic version of the Neighbor-Joining (NJ) algorithm [20,21] to explore tree space.
Results and Discussion
Iterative addition
Given a set Γ of taxa, let us define a partial tree as an m-leaf tree whose leaves are taxa of a subset Γ' ⊂ Γ, with m = |Γ'|. Moreover, given a partial tree with node set V and edge set E, we say that we add (insert) a leaf i (not yet in Γ') on the edge (r, s) ∈ E (i.e., the edge joining the nodes r, s ∈ V) when we generate a new partial tree with node set V' = V ∪ {i, t} and edge set E' = E ∪ {(r, t), (t, s), (t, i)} \ {(r, s)}. In other words, we add a leaf i on an edge by subdividing that edge with a new node t and joining the leaf i to t. All algorithms described here build complete phylogenetic trees by iteratively adding one leaf at a time on the edges of a partial tree.
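The edge-insertion operation above can be sketched directly on an adjacency-list representation (our own illustration; the node labels are arbitrary):

```python
def add_leaf(adj, edge, leaf, new_node):
    """Insert `leaf` on `edge` = (r, s): replace (r, s) by the three
    edges (r, t), (t, s), (t, leaf), where t = new_node."""
    r, s = edge
    adj[r].remove(s)
    adj[s].remove(r)
    adj[new_node] = [r, s, leaf]      # the subdividing node t
    adj[r].append(new_node)
    adj[s].append(new_node)
    adj[leaf] = [new_node]
    return adj

adj = {1: [2], 2: [1]}                # a single edge joining two leaves
add_leaf(adj, (1, 2), 3, 'a')         # -> the 3-leaf star
add_leaf(adj, ('a', 3), 4, 'b')       # insert leaf 4 on edge ('a', 3)
```

After the two insertions, every internal node has degree 3 and every leaf degree 1, as required for an unrooted binary tree.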
Primal bound
To generate a first upper bound [5] for the ME problem, we adapted the Sequential Addition (SA) greedy algorithm [6]. The Sequential Addition algorithm is less prone than NJ to generating a systematic local optimum at the end of the search (i.e., starting from "too good" a primal bound may lead to inefficient results [29]).
The pseudo-code of our version of the Sequential Addition algorithm is presented in Figure 1. In the initialization step, we arbitrarily choose a subset Γ' ⊆ Γ of m ≤ n leaves and generate an initial m-leaf partial tree, i.e., an optimal solution of problem (1) when only m leaves are considered. At each iteration, we try the leaf i at all possible insertion points already present in the partial tree and choose the solution that minimizes tree length (breaking possible ties randomly), hence generating a new partial tree and a new set Γ' = Γ' ∪ {i}. We iterate the procedure until a tree with n leaves is obtained. Finally, fixing the obtained topology matrix, we determine the optimal edge weights by imposing f(D, X, v) = 0, and return the length of the tree, i.e., an upper bound on the optimal solution of the ME problem.
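The greedy loop above can be sketched as follows (a toy re-implementation of ours, not the ACOME code: node labels 'x0', 'x1', ... and the helper names are our own, and the OLS fit uses the pseudoinverse formula of problem (1)):

```python
import numpy as np
from itertools import combinations

def _path(adj, a, b):
    # unique path between two vertices of a tree (depth-first search)
    stack = [(a, [a])]
    while stack:
        v, p = stack.pop()
        if v == b:
            return p
        stack.extend((w, p + [w]) for w in adj[v] if w not in p)

def ols_length(adj, leaves, D):
    # fit OLS edge lengths v = X† D^Delta and return the tree length sum(v)
    edges = sorted({tuple(sorted((a, b))) for a in adj for b in adj[a]})
    pairs = list(combinations(sorted(leaves), 2))
    X = np.zeros((len(pairs), len(edges)))
    for r, (i, j) in enumerate(pairs):
        p = _path(adj, i, j)
        on = {tuple(sorted(e)) for e in zip(p, p[1:])}
        X[r] = [1.0 if e in on else 0.0 for e in edges]
    v = np.linalg.pinv(X) @ np.array([D[i][j] for i, j in pairs])
    return float(v.sum())

def sequential_addition(taxa, D):
    # start from the unique 3-leaf tree, then greedily insert each remaining
    # leaf on the edge yielding the shortest OLS tree (ties: first found)
    a, b, c, *rest = taxa
    adj = {a: ['x0'], b: ['x0'], c: ['x0'], 'x0': [a, b, c]}
    length = None
    for k, leaf in enumerate(rest, 1):
        t = 'x%d' % k
        best = None
        for r, s in sorted({tuple(sorted((u, w))) for u in adj for w in adj[u]}):
            trial = {v: list(ws) for v, ws in adj.items()}
            trial[r].remove(s); trial[s].remove(r)
            trial[t] = [r, s, leaf]
            trial[r].append(t); trial[s].append(t)
            trial[leaf] = [t]
            score = ols_length(trial, [x for x in taxa if x in trial], D)
            if best is None or score < best[0]:
                best = (score, trial)
        length, adj = best
    return adj, length

# demo on additive quartet distances generated from the tree ((A,B),(C,D))
D = {}
for (i, j), val in {('A', 'B'): 3.0, ('A', 'C'): 9.0, ('A', 'D'): 10.0,
                    ('B', 'C'): 10.0, ('B', 'D'): 11.0, ('C', 'D'): 7.0}.items():
    D.setdefault(i, {})[j] = val
    D.setdefault(j, {})[i] = val
tree, length = sequential_addition(['A', 'B', 'C', 'D'], D)
```

On these additive distances the greedy insertion recovers the generating topology AB|CD with total length 15, while the two alternative quartet topologies fit with length 17.5.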
Figure 1. Principle of the ACOME algorithm. The core iteration includes three main steps: (i) the pheromone update phase, during which artificial ants walk on a graph with all possible connections among the n taxa and (n − 2) internal nodes, and lay a trail of volatile pheromone on the branches of the starting tree; (ii) the stochastic construction phase, during which new trees are built using both the heuristic information of the pairwise distances and the stochastic process guided by the newly updated pheromone trail matrix (ants follow a given edge with a probability that is a function of the amount of pheromone on that edge); and (iii) the 2-OPT local search phase, which corresponds to a local search using taxon swapping. The curved arrow indicates the stochastic jump of an ant from one edge to another. See text for details.
Unfortunately, the computational complexity of our heuristic is O((2m − 5)!! + n(n − m)^2). At each iteration, given a partial tree, i.e., a k-leaf phylogenetic tree of the leaves in Γ' ⊂ Γ with k = |Γ'|, and a leaf i not in Γ', the procedure generates all the different (k + 1)-leaf partial trees that can be obtained by adding the leaf i on each edge of the current partial tree.
The ant colony optimization algorithm
The specific ACO algorithm for the minimum evolution problem (hereafter ACOME) that we introduce here (cf. pseudo-code in Figure 2) is a hybrid between the Max-Min Ant System (MMAS) [30,31] and the Approximate Nondeterministic Tree Search (ANTS) [32]. Both methods are modifications of the original Ant System approach [33].
Figure 2. High-level pseudo-code for the Sequential Addition heuristic.
The core of the ACOME algorithm is the iteration phase, in which the ants generate a set of trees. Starting from each of these trees, a local search is then performed until a locally optimal tree is found, which is compared with the current-best tree. If the stopping conditions are met the procedure ends; otherwise the iteration phase is repeated.
Each ant builds a phylogenetic tree by iteratively adding one leaf at a time to a partial tree. Following a relation-learning model [34], the choices performed by an ant about (i) which leaf to insert, and (ii) where to add it on the partial tree, are based on a set of parameters {τ_{ij}} called pheromone trails. The values of the pheromone trail parameters {τ_{ij}} represent the stochastic desirability that a leaf i shares a direct common ancestor with a vertex j on a partial tree. The ants generate a new set of trees and the pheromone trail parameters are updated at the end of the main iteration phase.
Let us now consider the algorithm in more detail. It uses two identical data structures: s* and s_k. The former stores the current-best complete reconstruction (solution) known, whereas the latter stores the best complete reconstruction obtained by the ants during iteration k. The algorithm also uses a variable n_a, the number of artificial ants; how to set its value is discussed in the Parameter settings section. In the initialization phase, s* is first set to the reconstruction obtained by the Sequential Addition algorithm and s_k is set to null; then the pheromone trail parameters are updated. We implemented the MMAS [30,31] method of pheromone update, where τ_min ≤ τ_{ij} ≤ τ_max. Here, we set τ_min and τ_max to 0.0001 and 0.9999, respectively [35]. In the initialization phase, the pheromone trail parameters {τ_{ij}} are set to 0.5, i.e., all positions for leaf insertion have the same desirability.
Before describing the iteration phase, let us introduce some definitions. Consider a partial tree with k leaves, its set of vertices, and its set of leaves. Let us also use the recursive average-distance definition of [23,36]: if A and B are two non-intersecting subtrees of a tree, then the average distance between A and B is:

Δ(A, B) = (1 / (|A| |B|)) Σ_{i ∈ A} Σ_{j ∈ B} d_{ij}
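A small sketch of this definition (our own illustration) also shows the merge identity that makes the recursive computation efficient: if B is split into subtrees B1 and B2 at a node, Δ(A, B) is the size-weighted mean of Δ(A, B1) and Δ(A, B2):

```python
def avg_dist(A, B, d):
    """Mean pairwise distance between two disjoint leaf sets (subtrees)."""
    return sum(d[i][j] for i in A for j in B) / (len(A) * len(B))

def avg_dist_merge(A, B1, B2, d):
    """Recursive form: if B = B1 U B2, then
    Delta(A,B) = (|B1| Delta(A,B1) + |B2| Delta(A,B2)) / (|B1| + |B2|)."""
    n1, n2 = len(B1), len(B2)
    return (n1 * avg_dist(A, B1, d) + n2 * avg_dist(A, B2, d)) / (n1 + n2)

# toy symmetric distance dictionary (values are assumptions for the demo)
d = {'A': {'C': 2.0, 'D': 4.0}, 'C': {'A': 2.0}, 'D': {'A': 4.0}}
```

The recursion lets average distances be updated in constant time per node as leaves are added, instead of re-summing all pairs.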
In the iteration phase, each artificial ant r generates a complete phylogenetic tree using the ConstructCompleteReconstruction(r) procedure, as illustrated in Figure 3: ant r randomly selects four leaves from the set Γ and builds a partial tree with k = 4 leaves; then, ant r (i) chooses, among the leaves not yet inserted in the partial topology, the leaf i defining the smallest distance d_{ij} to a leaf j of the partial tree, and (ii) computes the probability p_{ij} that i has a common ancestor with each vertex j of the partial tree using the formula suggested by ANTS [32]:

p_{ij} = [α τ_{ij} + (1 − α) η_{ij}] / Σ_l [α τ_{il} + (1 − α) η_{il}]
Figure 3. High-level pseudo-code for the ACO algorithm.
where η_{ij} represents the "heuristic desirability" that leaf i shares a common ancestor with a vertex j of the partial tree (whereas τ_{ij} represents the corresponding "stochastic desirability"). Finally, α ∈ [0, 1] weights the relative contributions of the heuristic and stochastic desirabilities. The heuristic desirability η_{ij} is computed as:
where u_i is the sum of the distances from leaf i to the leaves not yet inserted in the partial tree, divided by the number of leaves inserted in the partial tree.
Note that η_{ij}, Δ_{ij}, and u_i correspond to the quantities used in the Neighbor-Joining algorithm [20,21] (see also [23]). Hence, the computation of the vector p_i = {p_{ij}}, for all i ∈ Γ, can be interpreted as a stochastic application of the Neighbor-Joining algorithm. A possible problem (not yet observed in practice in our analyses) is that η_{ij} can take negative values. Finally, ant r randomly chooses a vertex j on the basis of the probabilities p_i, and the leaf i is added to the tree.
At the end of the construction phase, a set of trees is obtained and a 2-OPT local search (with best-improvement and without candidate list [37,38]) is iteratively performed on each tree: two randomly chosen leaves are swapped and the tree length is evaluated. Swap i is performed on the new tree if swap i − 1 generated an improvement; otherwise it is performed on the old tree. To reduce the 2-OPT computational overhead, we perform no more than 10 swaps on each tree. If the best tree generated by the 2-OPT local search is shorter than the tree in s*, both s* and s_k are updated; otherwise only s_k is updated.
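The leaf-swapping local search can be sketched as follows (our own toy illustration: the tree is encoded as a leaf permutation and the length function is a stand-in, not the OLS tree length used by ACOME):

```python
import random

def two_opt_leaf_swaps(tree, length_fn, rng=random.Random(42), max_swaps=10):
    """Local search by leaf swapping: exchange two randomly chosen leaves,
    keep the swap only if the tree gets shorter; at most max_swaps trials."""
    best = list(tree)                 # toy encoding: a permutation of leaves
    best_len = length_fn(best)
    for _ in range(max_swaps):
        i, j = rng.sample(range(len(best)), 2)
        cand = list(best)
        cand[i], cand[j] = cand[j], cand[i]
        cand_len = length_fn(cand)
        if cand_len < best_len:       # accept only improving swaps
            best, best_len = cand, cand_len
    return best, best_len

start = [4, 3, 2, 1, 0]
tour_len = lambda t: sum(abs(v - i) for i, v in enumerate(t))  # stand-in objective
best, best_len = two_opt_leaf_swaps(start, tour_len)
```

Capping the number of trials (here 10, as in the text) trades solution quality for a bounded per-iteration cost.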
The pheromone update completes the iteration phase: each entry τ_{ij} is updated following:

τ_{ij} ← (1 − ρ) τ_{ij} + Δτ_{ij}    (8)

where

Δτ_{ij} = κ / l(s^{best}) if leaf i is joined to vertex j in s^{best}, and Δτ_{ij} = 0 otherwise,
where κ ∈ ℝ and ρ, the pheromone evaporation rate, are two tuning constants; s^{best} is one of the trees s* or s_k (see below); and l(s^{best}) is the length of s^{best}. When applying equation (8), if τ_{ij} is greater than τ_max or smaller than τ_min, then its value is set to τ_max or τ_min, respectively. We set ρ to 0.1, κ such that κρ ∈ [10^{−2}, 10^{−1}], and α to 0.7. Fine-tuning of these parameters might have a significant impact on search efficiency, but such a systematic analysis is beyond the scope of a proof-of-concept for the use of ACOME. Finally, if the objective function does not decrease after 30 iterations, ACOME chooses s_k as s^{best} instead of s* for the pheromone update; if the objective function does not decrease after 30 additional iterations, then all {τ_{ij}} are reset to 0.5 and s* is used for pheromone updating.
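An MMAS-style update with clamping can be sketched as follows (our own illustration; the exact reinforcement term used by ACOME is not fully specified in the text, so the 1/length reinforcement here is an assumption):

```python
TAU_MIN, TAU_MAX = 0.0001, 0.9999  # bounds from the MMAS setting above

def update_pheromone(tau, best_pairs, best_len, rho=0.1, kappa=0.5):
    """Evaporate every trail, reinforce the (leaf, vertex) pairs used by
    s_best (reinforcement ~ kappa / tree length, an assumption), then clamp
    each entry to [TAU_MIN, TAU_MAX]."""
    for ij in tau:
        tau[ij] *= (1.0 - rho)                 # evaporation
        if ij in best_pairs:
            tau[ij] += kappa / best_len        # shorter trees reinforce more
        tau[ij] = min(TAU_MAX, max(TAU_MIN, tau[ij]))
    return tau

tau = {(0, 1): 0.5, (0, 2): 0.5}
update_pheromone(tau, best_pairs={(0, 1)}, best_len=10.0)
```

The clamp is what keeps the search from freezing: no trail can decay to zero probability or saturate to certainty.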
Parameter settings
We evaluated the performances of the ACOME algorithm under different values of the parameter κ (0.1, 0.5, and 1) and different numbers of ants (1 to 10). For each of the 30 possible combinations of these parameter values, we ran ACOME for 1000 iterations. As suggested elsewhere (see [29]), we do not consider colony sizes larger than 10.
Relative performances are measured using a normalized index as in [39-41]:

e_k(j) = (l_k(j) − l_min(j)) / (l_max(j) − l_min(j))

where l_k(j) is the best solution found under parameter value k using dataset j, whereas l_min(j) and l_max(j) are respectively the best and worst solutions found on the instance j using the parameter value k. By definition, performance index values lie in the interval [0, 1]. The optimal parameter value exhibits the smallest relative performance index (see the box-and-whisker plots in Figures 4, 5, and 6). Figures 4, 5, and 6 indicate that, for small, medium, and large datasets, the optimal combinations of number of ants/κ are 7/1, 10/0.5, and 8/0.5, respectively. However, differences in performance among combinations of parameter values are modest (except that performance is generally very poor when a single ant is used).
Figure 4. Normalized ranking of the ACO algorithm performances with small datasets (20 taxa) and κ = 0.1 (a), κ = 0.5 (b), and κ = 1 (c) versus colony size n_a.
Figure 5. Normalized ranking of the ACO algorithm performances with medium datasets (50 taxa) and κ = 0.1 (a), κ = 0.5 (b), and κ = 1 (c) versus colony size n_{a}.
Figure 6. Normalized ranking of the ACO algorithm performances with large datasets (100 taxa) and κ = 0.1 (a), κ = 0.5 (b), and κ = 1 (c) versus colony size n_{a}.
Experimental evaluation
We first used a set of distance matrices generated from real datasets: the dataset "55-1314.nex", which includes 55 RBCL sequences of 1314 nucleotides each, and the dataset "Zilla500.nex", which includes 500 RBCL sequences of 1428 nucleotides each. These datasets are available at [42]. Note that sequences in these datasets were aligned using ClustalX [43], and columns including gaps were excluded before computing pairwise distances. Second, we generated (i) 10 artificial instances of 20 taxa (also called small instances); (ii) 10 artificial instances of 50 taxa (medium instances); and (iii) 10 artificial instances of 100 taxa (large instances). Each artificial instance was generated by random sampling of taxa and partial character reshuffling of the Zilla500.nex dataset. More explicitly, after random selection of the 20, 50, or 100 taxa, we randomly reshuffled characters among taxa for 50 percent of the aligned columns. As the reshuffling makes the dataset prone to yield undefined pairwise distances [44], we simply used the absolute number of differences between sequence pairs for generating the distance matrix. Edge lengths were computed using the standard OLS because WLS and GLS can potentially lead to inconsistent results [45]. All numerical experiments were performed on an Apple Power Mac G5 workstation (64-bit, dual-processor, dual-core) with 8 GB of RAM, running Mac OS X. The ACOME source code is written in C/C++ and compiled using the IBM XL C/C++ compiler version 6.0. We compared the quality (total length) of trees generated by the ACOME algorithm to those obtained using a classical hill-climbing algorithm (implemented in PAUP* 4.0 [19]) after a fixed run time of 1 minute. The starting tree was generated using the Neighbor-Joining algorithm [20,21], and the TBR branch-swapping operator [6] was used for exploring the solution space. PAUP* 4.0 was used with and without the "Steepest Descent" (SD) option.
When SD is activated, all possible TBR rearrangements are tried, and the rearrangement producing the largest decrease in tree length is selected, inducing a computational overhead similar to that of the 2-OPT local search implemented in our ACOME algorithm. Each algorithm was run 30 times on each of the two real datasets. Figures 7a and 7b show that ACOME performances are intermediate between those of hill-climbing with SD and hill-climbing without SD. Furthermore, Figures 7a and 7b indicate that the relative performances of ACOME, in comparison to hill-climbing, increase with larger datasets. Note that, contrary to our simple implementation of ACOME, the implementation of ME in PAUP* 4.0 [19] incorporates procedures [4,23] that greatly speed up the OLS computation (reaching a complexity of O(n^2)). We trust that the implementation of these procedures, in combination with further tuning of the ACO parameters (number of ants, relative weights of the heuristic information and stochastic pheromone parameters, etc.), would lead to better performances of the ACOME algorithm. Figures 8a and 8b indicate that the relative performances described above are relatively stable through time, especially for large datasets (at any time during the run, ACOME has performance similar to that of hill-climbing without SD and better than that of hill-climbing with SD).
Figure 7. Comparison of performances between ACOME and hill-climbing (with and without Steepest Descent, SD) after a fixed run time of 1 minute on datasets of 55 (a) and 500 (b) taxa. A paired Wilcoxon test indicates that ACOME performances are significantly better (p-value = 3.92 × 10^{-2} for the 55-taxa dataset, and p-value = 6.821 × 10^{-4} for the 500-taxa dataset) than those of hill-climbing with SD, but significantly worse (p-value = 4.71 × 10^{-3} for the 55-taxa dataset, and p-value = 4.53 × 10^{-4} for the 500-taxa dataset) than those of hill-climbing without SD.
Figure 8. Comparison of score vs. running time for hill-climbing with steepest descent (line labeled "1"), hill-climbing without steepest descent (line labeled "2"), and ACOME (line labeled "3") on datasets of 55 (a) and 500 (b) taxa.
Conclusion
We introduce here an Ant Colony Optimization (ACO) algorithm for the phylogeny estimation problem under the minimum evolution principle and demonstrate the feasibility of this approach. Although substantial performance improvements can probably be obtained through (i) modification of the local search phase, (ii) tuning of the ACO parameters (number of ants, relative weights of the heuristic information and stochastic pheromone parameters, etc.), and (iii) implementation of speed-up procedures and optimization of the code, the current implementation of our algorithm already demonstrates that the ant colony metaphor can efficiently solve instances of the phylogeny inference problem.
Authors' contributions
Daniele Catanzaro, Raffaele Pesenti, and Michel C. Milinkovitch conceived the study and wrote the manuscript; Daniele Catanzaro performed the numerical analyses. All authors read and approved the final manuscript.
Acknowledgements
Daniele Catanzaro is a Research Fellow at the Belgian National Fund for Scientific Research (FNRS). This work was supported by the "Communauté Française de Belgique" (ARC 11649/20022770) and the "Région Wallonne". We thank C. Korostensky and Mike Steel for helpful discussions, as well as J. L. Deneubourg, L. Keller, and two anonymous reviewers for constructive and helpful comments on a previous version of this manuscript.
References

Kidd KK, Sgaramella-Zonta LA: Phylogenetic analysis: concepts and methods. American Journal of Human Genetics 1971, 23:235-252.

Rzhetsky A, Nei M: Statistical properties of the ordinary least-squares, generalized least-squares, and minimum evolution methods of phylogenetic inference. Journal of Molecular Evolution 1992, 35:367-375.

Rzhetsky A, Nei M: Theoretical foundations of the minimum evolution method of phylogenetic inference.

Bryant D, Waddell P: Rapid evaluation of least-squares and minimum evolution criteria on phylogenetic trees.

Nemhauser GL, Wolsey LA: Integer and Combinatorial Optimization. Wiley-Interscience, New York, NY, USA; 1999.

Felsenstein J: Inferring Phylogenies. Sinauer Associates, Sunderland, MA; 2004.

Hasegawa M, Kishino H, Yano T: Evolutionary trees from DNA sequences: a maximum likelihood approach. Journal of Molecular Evolution 1981, 17:368-376.

Jukes TH, Cantor C: Evolution of protein molecules. In Mammalian Protein Metabolism. Edited by Munro HN. Academic Press, New York; 1969:21-123.

Kimura M: A simple method for estimating evolutionary rates of base substitutions through comparative studies of nucleotide sequences. Journal of Molecular Evolution 1980, 16:111-120.

Lanave C, Preparata G, Saccone C, Serio G: A new method for calculating evolutionary substitution rates. Journal of Molecular Evolution 1984, 20:86-93.

Rodriguez F, Oliver JL, Marin A, Medina JR: The general stochastic model of nucleotide substitution. Journal of Theoretical Biology 1990, 142:485-501.

Waddell PJ, Steel MA: General time-reversible distances with unequal rates across sites: mixing gamma and inverse Gaussian distributions with invariant sites. Molecular Phylogenetics and Evolution 1997, 8:398-414.

Cavalli-Sforza LL, Edwards AWF: Phylogenetic analysis: models and estimation procedures. American Journal of Human Genetics 1967, 19:233-257.

Beyer WA, Stein M, Smith T, Ulam S: A molecular sequence metric and evolutionary trees. Mathematical Biosciences 1974, 19:9-25.

Fitch WM, Margoliash E: Construction of phylogenetic trees. Science 1967, 155:279-284.

Hasegawa M, Kishino H, Yano T: Dating the human-ape splitting by a molecular clock of mitochondrial DNA. Journal of Molecular Evolution 1985, 22:160-174.

Waterman MS, Smith TF, Singh M, Beyer WA: Additive evolutionary trees. Journal of Theoretical Biology 1977, 64:199-213.

Day WHE: Computational complexity of inferring phylogenies from dissimilarity matrices.

Swofford DL: PAUP* version 4.0. Sinauer, Sunderland, MA; 1997.

Saitou N, Nei M: The neighbor-joining method: a new method for reconstructing phylogenetic trees.

Studier JA, Keppler KJ: A note on the neighbor-joining algorithm of Saitou and Nei.

Kumar S: A stepwise algorithm for finding minimum evolution trees.

Desper R, Gascuel O: Fast and accurate phylogeny reconstruction algorithms based on the minimum evolution principle. Journal of Computational Biology 2002, 9(5):687-705.

Dorigo M, Caro GD: The ant colony optimization meta-heuristic. In New Ideas in Optimization. McGraw-Hill; 1999:11-32.

Zlochin M, Birattari M, Dorigo M: Model-based search for combinatorial optimization: a critical review. Annals of Operations Research 2004, 131:373-395.

Blum C: Ant colony optimization: introduction and recent trends. Physics of Life Reviews 2005, 2:353-373.

Dorigo M, Birattari M, Stützle T: Ant colony optimization – artificial ants as a computational intelligence technique.

Dorigo M, Stützle T: Ant Colony Optimization. MIT Press, Cambridge, MA; 2004.

Stützle T, Hoos HH: Stochastic Local Search: Foundations and Applications. Morgan Kaufmann, Elsevier; 2004.

Glover F, Kochenberger GA: Handbook of Metaheuristics. Kluwer Academic Publishers, Boston, MA; 2003.

Stützle T, Hoos HH: MAX-MIN Ant System. Future Generation Computer Systems 2000, 16:889-914.

Maniezzo V: Exact and approximate nondeterministic tree-search procedures for the quadratic assignment problem.

Dorigo M, Maniezzo V, Colorni A: Ant System: optimization by a colony of cooperating agents. IEEE Transactions on Systems, Man, and Cybernetics – Part B 1996, 26:29-41.

Blum C, Roli A: Metaheuristics in combinatorial optimization: overview and conceptual comparison. ACM Computing Surveys 2003, 35(3):268-308.

Blum C, Dorigo M: The hyper-cube framework for ant colony optimization. IEEE Transactions on Systems, Man, and Cybernetics – Part B: Cybernetics 2004, 34(2):1161-1172.

Gascuel O: Mathematics of Evolution and Phylogeny. Oxford University Press, New York, NY, USA; 2005.

Bentley JL: Fast algorithms for geometric traveling salesman problems.

Martin O, Otto SW, Felten EW: Large-step Markov chains for the traveling salesman problem.

Bianchi L, Birattari M, Chiarandini M, Manfrin M, Mastrolilli M, Paquete L, Rossi-Doria O, Schiavinotto T: Hybrid metaheuristics for the vehicle routing problem with stochastic demands. Journal of Mathematical Modelling and Algorithms 2006, 5:91-110.

Birattari M, Dorigo M: How to assess and report the performance of a stochastic algorithm on a benchmark problem: mean or best result on a number of runs? Optimization Letters 2006, to appear.

Birattari M, Zlochin M, Dorigo M: Towards a theory of practice in metaheuristics design: a machine learning perspective. RAIRO – Theoretical Informatics and Applications 2006, 40:353-369.

Thompson JD, Gibson TJ, Plewniak F, Jeanmougin F, Higgins DG: The ClustalX windows interface: flexible strategies for multiple sequence alignment aided by quality analysis tools. Nucleic Acids Research 1997, 24:4876-4882.

Catanzaro D, Pesenti R, Milinkovitch M: A non-linear optimization procedure to estimate distances and instantaneous substitution rate matrices under the GTR model. Bioinformatics 2006, 22(6).

Gascuel O, Bryant D, Denis F: Strengths and limitations of the minimum evolution principle. Systematic Biology 2001, 50:621-627.