Table 2

Performance comparison on BAliBASE 3.0: ranking of MSA tools
Tool                   RV11      RV12      RV20      RV30      all
POA                    0.26      0.279     0.217     0.183     0.239
Prank+F                0.252     0.6***    0.256     0.272*    0.357***
Prank                  0.261     0.607     0.261     0.277     0.363**
Mafft                  0.245     0.607     0.293**   0.321**   0.377**
ProGraphMSA D (noCS)   0.313**   0.63*     0.328     0.321     0.41***
ProGraphMSA D          0.343     0.647**   0.368**   0.357**   0.44***
ClustalW               0.309     0.679**   0.338     0.326     0.427
Muscle                 0.307     0.663*    0.34      0.358*    0.428
ProGraphMSA            0.361*    0.656     0.383     0.376     0.455
Muscle-i               0.396**   0.716***  0.358     0.372     0.473***
Mafft-i                0.435**   0.731     0.446***  0.471***  0.53***
Mummals                0.404     0.766***  0.41      0.425*    0.514

Shown are the average true column scores (CS) for the truncated (BBS*) alignments of the RV11, RV12, RV20, and RV30 sets, as well as the average over all these sets. With a few exceptions, the tools are listed in order of significantly increasing performance: for each pair of consecutive scores from two different tools we perform a Wilcoxon signed-rank test, and stars mark a significant difference at the p < 0.05 (*), p < 0.01 (**), and p < 0.001 (***) levels, respectively. In particular, the use of context-sensitive profiles significantly improves the alignments of ProGraphMSA D, and our optimized version of ProGraphMSA significantly outperforms ClustalW (p = 0.0024) but narrowly fails to outperform Muscle without refinement (p = 0.067) at the chosen significance level.
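As a sketch of the significance testing described above, the following minimal Python example applies a paired Wilcoxon signed-rank test to per-alignment column scores of two consecutively ranked tools and maps the resulting p-value to the star notation used in the table. The scores and the scipy-based implementation are illustrative assumptions, not the paper's actual code or data.

```python
# Minimal sketch of the caption's significance test: a paired, two-sided
# Wilcoxon signed-rank test on per-alignment column scores (CS) of two
# consecutively ranked tools. Scores below are hypothetical.
from scipy.stats import wilcoxon

# Hypothetical per-alignment CS values for two tools on the same benchmark set.
cs_tool_a = [0.31, 0.45, 0.28, 0.52, 0.39, 0.44, 0.36, 0.49]
cs_tool_b = [0.35, 0.47, 0.30, 0.58, 0.41, 0.43, 0.40, 0.55]

# Paired two-sided test on the score differences (the default in scipy).
stat, p_value = wilcoxon(cs_tool_a, cs_tool_b)

# Map the p-value to the star notation used in the table.
stars = "***" if p_value < 0.001 else "**" if p_value < 0.01 else "*" if p_value < 0.05 else ""
print(f"W = {stat}, p = {p_value:.4f} {stars}")
```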
