Table 3

Performance benchmark with the SL dataset under parameters at comparable high specificity level (~0.950).

| Method | Threshold | Sensitivity | Specificity | MCC | PE |
|---|---|---|---|---|---|
| DISOPRED2 | 0.08 | 0.557 | 0.947 | 0.559 | 0.504 |
| IUPred long | 0.54 | 0.544 | 0.948 | 0.550 | 0.492 |
| IUPred short | 0.51 | 0.491 | 0.948 | 0.507 | 0.440 |
| CAST | 40 | 0.448 | 0.951 | 0.474 | 0.399 |
| *DisEMBL Rem465* | *1* | *0.348* | *0.969* | *0.418* | *0.317* |
| SEG45 | 3.30; 3.60 | 0.368 | 0.950 | 0.402 | 0.318 |
| SEG25 | 2.94; 3.24 | 0.335 | 0.946 | 0.364 | 0.281 |
| SEG12 | 2.29; 2.59 | 0.268 | 0.950 | 0.305 | 0.218 |
| DisEMBL Hotloops | 2.7 | 0.259 | 0.949 | 0.295 | 0.208 |
| DisEMBL Coils | 1.94 | 0.251 | 0.948 | 0.286 | 0.200 |


Predictors were run with parameters tuned to achieve a comparable specificity close to 0.950, corresponding to a false positive rate of ~5%. For DisEMBL Remark 465 (in italics), the closest specificity achievable by parameter tuning was 0.969. Ranking is based on the Matthews correlation coefficient (MCC) but remains essentially unchanged under other performance measures such as probability excess (PE).
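The two summary measures used for ranking can be sketched as follows. The MCC formula is the standard one from a binary confusion matrix; PE here is taken as sensitivity + specificity − 1 (Youden's J), which is consistent with the tabulated values (e.g. DISOPRED2: 0.557 + 0.947 − 1 = 0.504). The function names are illustrative, not from the paper.

```python
import math

def mcc(tp, fp, tn, fn):
    """Matthews correlation coefficient from a binary confusion matrix."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    # Conventionally defined as 0 when any marginal sum is zero.
    return (tp * tn - fp * fn) / denom if denom else 0.0

def probability_excess(sensitivity, specificity):
    """Probability excess: sensitivity + specificity - 1."""
    return sensitivity + specificity - 1.0
```

Because PE is linear in sensitivity at fixed specificity, holding specificity near 0.950 makes the PE ranking track the sensitivity column almost directly, which is why the MCC and PE rankings agree.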

Sirota et al. BMC Genomics 2010 11(Suppl 1):S15   doi:10.1186/1471-2164-11-S1-S15