Table 2

Prediction time of McRUM on benchmark datasets (in seconds)

Dataset    Naïve (AP)            Naïve (OVR)           GBT (AP)                   GBT (OVR)
wine       0.001868 (0.000206)   0.001662 (0.000164)   7.373405 (1.182947)        1.073139 (0.197278)
iris       0.001409 (0.000139)   0.001376 (0.000141)   6.625228 (0.678363)        2.908828 (0.834858)
yeast      0.040737 (0.000534)   0.0492920 (0.001219)  2722.317544 (117.545388)   2795.766035 (139.314294)
thyroid    0.131689              0.123968              939.692526                 179.426632
satellite  0.239550              0.139304              10612.598301               2816.632703


Prediction times are reported only for the AP and OVR decompositions because their predictive performances are competitive with each other on all benchmark datasets, whereas the random decompositions perform much worse than the AP and OVR cases under the Naïve algorithm. For the first three datasets the prediction time is averaged over 10-fold cross-validation, while for the last two it is measured once, since an explicit partitioning into training and test sets was provided for them. Values in parentheses are the standard deviations of the prediction time. (AP: all-pairs, OVR: one-versus-rest)
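For reference, the following sketch (not the authors' code) illustrates how per-fold prediction times of this kind can be collected and summarized as "mean (standard deviation)". It uses scikit-learn's iris loader and a logistic-regression classifier purely as a stand-in for a McRUM decomposition, both of which are assumptions for illustration only.

```python
import time
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

# One of the small benchmark datasets (iris), loaded for illustration.
X, y = load_iris(return_X_y=True)

pred_times = []
for train_idx, test_idx in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
    clf = LogisticRegression(max_iter=1000)  # stand-in classifier, not McRUM
    clf.fit(X[train_idx], y[train_idx])

    start = time.perf_counter()  # time only the prediction step on the held-out fold
    clf.predict(X[test_idx])
    pred_times.append(time.perf_counter() - start)

# Mean prediction time and its standard deviation over the 10 folds,
# formatted like the "value (std. dev.)" entries in the table.
print(f"{np.mean(pred_times):.6f} ({np.std(pred_times, ddof=1):.6f})")
```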
