
A constrained polynomial regression procedure for estimating the local False Discovery Rate

Abstract

Background

In the context of genomic association studies, for which a large number of statistical tests are performed simultaneously, the local False Discovery Rate (lFDR), which quantifies the evidence of a specific gene association with a clinical or biological variable of interest, is a relevant criterion for taking into account the multiple testing problem. The lFDR not only allows an inference to be made for each gene through its specific value, but also an estimate of Benjamini-Hochberg's False Discovery Rate (FDR) for subsets of genes.

Results

In the framework of estimating procedures without any distributional assumption under the alternative hypothesis, a new and efficient procedure for estimating the lFDR is described. The results of a simulation study indicated good performance of the proposed estimator in comparison with four published ones. The five different procedures were applied to real datasets.

Conclusion

A novel and efficient procedure for estimating lFDR was developed and evaluated.

Background

The use of current high-density microarrays for genomic association studies leads to the simultaneous evaluation of a huge number of statistical hypotheses. Thus, one of the main problems faced by the investigator is the selection of genes (or gene products) worthy of further analysis taking multiple testing into account.

Although the oldest extension of the classical type I error rate is the family-wise error rate (FWER), which is defined as the probability of falsely rejecting at least one null hypothesis (e.g., the lack of relationship between gene-expression changes and a phenotype), FWER-based procedures are often too conservative, particularly when numerous hypotheses are tested [1]. As an alternative and less stringent error criterion, Benjamini and Hochberg introduced, in their seminal paper [2], the False Discovery Rate (FDR), which is defined as the expected proportion of false discoveries among all discoveries. Here, a discovery refers to a rejected null hypothesis.

Assuming that the test statistics are independent and identically distributed under the null hypothesis, Storey [3] demonstrated that, for a fixed rejection region Γ, which is considered to be the same for every test, the FDR is asymptotically equal to the following posterior probability:

FDR(Γ) = Pr(H = 0 | T ∈ Γ) (1)

where H is the random variable such that H = 0 if the null hypothesis, denoted H0, is true; H = 1 if the alternative hypothesis, denoted H1, is true; and T is the test statistic considered for all tested hypotheses. However, one drawback is that the FDR criterion associated with a particular rejection region Γ refers to all the test statistics within the region without distinguishing between those that are close to the boundary and those that are not [4].

For this purpose, Efron [5] introduced a new error criterion called the local False Discovery Rate (lFDR), which can be interpreted as a variant of Benjamini-Hochberg's FDR that gives each tested null hypothesis its own measure of significance. While the FDR is defined for a whole rejection region, the lFDR is defined for a particular value of the test statistic. More formally:

lFDR(t) = Pr(H = 0|T = t). (2)

As discussed by Efron [6], the local nature of the lFDR is an advantage for interpreting results from individual test statistics. Moreover, the FDR is the conditional expectation of the lFDR given T ∈ Γ:

FDR(Γ) = E(lFDR(T) | T ∈ Γ). (3)
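To make equation (3) concrete, here is a minimal R sketch that estimates the FDR of a rejection region Γ = [0, t] as the average of estimated lFDR values over that region; the vectors p and lfdr.hat (one entry per gene) are assumed to come from any of the procedures discussed below, and the function name is ours.

```r
## Minimal sketch of equation (3): the FDR of a rejection region is the
## average lFDR over that region. `p` and `lfdr.hat` are assumed to be
## vectors of p-values and estimated lFDR values (one per gene).
estimate_FDR <- function(p, lfdr.hat, threshold = 0.05) {
  in.region <- p <= threshold          # rejection region Gamma = [0, threshold]
  if (!any(in.region)) return(NA)      # no discovery, FDR undefined
  mean(lfdr.hat[in.region])            # empirical version of E(lFDR(T) | T in Gamma)
}
```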

In this context, most of the published procedures for estimating lFDR proceed from a two-component mixture model approach, in which the marginal distribution of the test statistic can be written:

f(t) = π0f0(t) + (1 - π0)f1(t). (4)

Here, f0 and f1 are the conditional density functions corresponding to null and alternative hypotheses, respectively, and π0 = Pr(H = 0). Using these notations, lFDR can be expressed as:

$$lFDR(t) = \pi_0\,\frac{f_0(t)}{f(t)}.$$ (5)

A variety of estimators have been proposed that either consider a full model-based approach (see, e.g., [7–10]) or estimate an upper bound of lFDR without any assumption for f1. It is worth noting that, in this latter framework, the probability π0 is not identifiable [11]. Thus, from equation (5), only an upper bound estimate can be obtained for lFDR.

Four procedures that do not require a distributional hypothesis for f1 were introduced by Efron [6, 12], Aubert et al. [13], Scheid and Spang [14] and Broberg [15]. These methods are based on the separate estimations of π0, f0 and f from the calculated p-values. For the last three procedures [13–15], the p-values are assumed to be uniformly distributed under the null hypothesis, while Efron's approach estimates f0 from the observed data.

Herein, we describe a novel and efficient procedure for estimating lFDR. While classical approaches are based on the estimation of the marginal density f, we propose directly estimating π0 and 1/f (equation 5) within the same framework.

To situate our procedure among the four published ones, we briefly recall their individual principles below.

Efron (2004) [12]

For this procedure, the p-values are transformed into z-values for which the theoretical distribution (under the null hypothesis) is a standard normal distribution. To take into account that f0 may be different from the theoretical null distribution, the parameters are estimated from the observed distribution of the z-values as summarized below.

The density f is non-parametrically estimated using a general Poisson linear model, in which log(f(z)) is modeled as a natural spline function with seven degrees of freedom. Then, the null distribution parameters are estimated as follows. The expectation is taken as $\arg\max(\hat f(z))$ and the variance is deduced by quadratically approximating $\log(\hat f(z))$ for central z-values (for which f1(z) is supposed to be null). The proportion π0 is then estimated by the ratio of the means $\overline{f(z)}/\overline{f_0(z)}$ calculated from these central z-values. The lFDR is finally estimated by $\widehat{lFDR}(z) = \hat\pi_0\,\hat f_0(z)/\hat f(z)$. It should be noted that, in addition to the normality assumption for the z-values under the null hypothesis, the procedure is also based on the assumptions that central z-values mainly consist of true null hypotheses and that the proportion (1 - π0) of modified genes is small. In particular, Efron recommends using this procedure for π0 > 90%.
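As an illustration of the density-estimation step just described, the following R sketch fits a Poisson regression of histogram counts on a natural spline of the z-values with seven degrees of freedom; it is a simplified stand-in for this step, not the locfdr package code, and the number of bins (90) is our choice.

```r
## Illustrative sketch (not the locfdr package): estimate the marginal density f
## of z-values with a Poisson regression of histogram counts on a natural spline.
library(splines)

set.seed(1)
z <- rnorm(5000)                                   # z-values (pure null here, for illustration)
breaks <- seq(min(z), max(z), length.out = 91)     # 90 bins
h <- hist(z, breaks = breaks, plot = FALSE)
fit <- glm(h$counts ~ ns(h$mids, df = 7), family = poisson)

## Rescale the fitted counts so that the density estimate integrates to one.
binwidth <- diff(breaks)[1]
f.hat <- fitted(fit) / (sum(h$counts) * binwidth)
```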

Aubert et al. (2004) [13]

Assuming that the p-values are uniformly distributed under the null hypothesis (f0 = 1), the procedure is based on the separate estimations of π0 and f.

Ordering the p-values (p(1) ≤...≤ p(m)), as Aubert et al. [13] did, a natural estimator of f is:

$$\hat f(p_{(i)}) = \frac{\hat F(p_{(i+1)}) - \hat F(p_{(i-1)})}{p_{(i+1)} - p_{(i-1)}} = \frac{2}{m\left(p_{(i+1)} - p_{(i-1)}\right)}$$ (6)

where $\hat F$ is the empirical cumulative distribution function of the p-values. The resulting estimator of the lFDR is then $\widehat{lFDR}(p_{(i)}) = \frac{m\,\hat\pi_0\left(p_{(i+1)} - p_{(i-1)}\right)}{2}$. However, as noted by Aubert et al. [13], the variance of this estimator is large. A more stable estimator, related to the moving-average methodology and corresponding to a generalization of estimator (6), was given by the authors [13]. To estimate the probability π0, Aubert et al. [13] proposed using an existing procedure, such as those proposed by Storey and Tibshirani [16] or Hochberg and Benjamini [17].
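A minimal R version of the simple estimator (6), before the moving-average stabilization, might look as follows; p is a vector of p-values, pi0 a user-supplied estimate of π0, and the function name is ours.

```r
## Sketch of the simple Aubert et al. estimator (equation 6): the marginal density
## at p_(i) is approximated by the slope of the empirical cdf between the two
## neighbouring order statistics, giving lFDR(p_(i)) ~ pi0 * m * (p_(i+1) - p_(i-1)) / 2.
lfdr_aubert <- function(p, pi0) {
  m <- length(p)
  ps <- sort(p)
  lfdr <- rep(NA_real_, m)
  for (i in 2:(m - 1)) {
    f.hat <- 2 / (m * (ps[i + 1] - ps[i - 1]))   # equation (6)
    lfdr[i] <- min(1, pi0 / f.hat)               # upper bound of the lFDR, capped at 1
  }
  data.frame(p = ps, lfdr = lfdr)
}
```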

Scheid and Spang (2004) [14]

As for the procedure proposed by Aubert et al., the p-values are supposed to be uniformly distributed under the null hypothesis. Thus, this procedure is based on the separate estimations of π0 and f. The marginal distribution f is estimated by dividing the interval [0, 1] into 100 equidistant bins from which a corresponding histogram is derived. A smoothing spline with seven degrees of freedom is then used to estimate f.

The probability π0 is estimated by a stochastic downhill algorithm (summarized below) with the intention of finding the largest subset of genes that could follow a uniform distribution. A penalized Kolmogorov-Smirnov score related to the uniform distribution is calculated for the whole gene set:

$$S(J) = \max_{i \in J} \left| F_J(u_i) - u_i \right| + \lambda\,\frac{m - |J|}{m}\,\log(m - |J|)$$ (7)

where m is the total number of genes, J is the set of genes under consideration (initially, the whole set of genes), F_J is the empirical cumulative distribution for the set J, and λ is a tuning parameter chosen adaptively (for details on the choice of λ, see [14]). Then, genes are iteratively excluded so that the Kolmogorov-Smirnov score decreases. In practice, the procedure stops when the score has not been reduced in 2m iterations. The score penalty takes into account the sample size m and avoids overfitting. At the end of the procedure, π0 is estimated by the proportion of the remaining genes. Then, lFDR is estimated by $\widehat{lFDR} = \hat\pi_0/\hat f$.
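The following R sketch computes a simplified version of the penalized score (7) for a candidate null set; it ignores ties and the usual two-sided refinement of the Kolmogorov-Smirnov statistic, and the function name and interface are ours, not those of the twilight package.

```r
## Simplified penalized Kolmogorov-Smirnov score of equation (7).
## u: p-values of the candidate null set J; m: total number of genes; lambda: tuning parameter.
ks_score <- function(u, m, lambda) {
  u <- sort(u)
  nJ <- length(u)
  FJ <- (1:nJ) / nJ                                        # empirical cdf of J at its own points
  ks <- max(abs(FJ - u))                                   # distance to the Uniform(0,1) cdf
  penalty <- if (nJ == m) 0 else lambda * (m - nJ) / m * log(m - nJ)
  ks + penalty
}
```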

Broberg (2005) [15]

The procedure proposed by Broberg to estimate lFDR is also based on the assumption that the p-values are uniformly distributed under the null hypothesis. Then, as for the two previous methods, the procedure is based on the separate estimations of π0 and f. The marginal density f of the p-values is estimated by a Poisson regression, similar to the procedure proposed by Efron. To enforce monotonicity, Broberg proposed using the Pool Adjacent Violators algorithm (see [15] for details).

The probability π0 is then estimated by $\min_{p \in [0,1]} \hat f(p)$. Then, lFDR is estimated by $\widehat{lFDR} = \hat\pi_0/\hat f$.
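The sketch below reproduces the flavour of these steps in R, with two simplifications of our own: a crude histogram density estimate replaces the Poisson regression, and monotonicity is imposed with the pool-adjacent-violators algorithm via isoreg on negated values; π0 is then taken as the minimum of the fitted density and lFDR as π̂0/f̂. This is an illustration, not the pava.fdr code.

```r
## Broberg-style sketch: monotone (non-increasing) density estimate, pi0 = min f.hat.
pava_lfdr <- function(p, nbins = 100) {
  h <- hist(p, breaks = seq(0, 1, length.out = nbins + 1), plot = FALSE)
  f.raw <- h$counts / (length(p) / nbins)        # crude density estimate per bin
  f.hat <- -isoreg(h$mids, -f.raw)$yf            # PAVA fit, forced to be non-increasing
  pi0.hat <- min(f.hat)
  list(mids = h$mids, f.hat = f.hat, pi0.hat = pi0.hat,
       lfdr.hat = pmin(1, pi0.hat / f.hat))
}
```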

Limitations of these estimators

Through different estimations of π0, f0 and f, these four procedures attempt to estimate an upper bound of lFDR. However, each of these methods has its own drawback. Efron's procedure [6, 12] is restricted to situations in which π0 > 90%. The method of Aubert et al. [13] yields an estimator with a large variance. Scheid and Spang's procedure [14] is based on an iterative algorithm and requires extensive computational time for large datasets. Finally, Broberg's approach [15] sometimes substantially underestimates lFDR. Our procedure, developed in detail under Methods, is based on a polynomial regression, under monotonicity and convexity constraints, of the inverse function of the empirical cumulative distribution. Thus, an estimated upper bound of lFDR with small variability can be expected, regardless of the true value of π0.

Results

Here, we compare our method, through simulations, to the four procedures described above. The five procedures are then applied to real datasets.

Simulated data

To compare our new estimator to the four previously published procedures, we performed a simulation study. Data were generated to mimic a two-class comparison study with normalized log-ratio measurements for m genes (i = 1,...,m) obtained from 20 experiments corresponding to two conditions (j = 1, 2), each with 10 replicated samples (k = 1,...,10), which corresponds to classical sample sizes for differential gene-expression studies. Two total numbers of genes were considered: one small (m = 500) and one larger (m = 5,000). In each case, all values were independently sampled from a normal distribution, X_{ijk} ~ N(μ_{ij}, 1). For the first condition (j = 1), all data were simulated with μ_{i1} = 0. For the second condition (j = 2), a proportion π0 of genes was simulated with μ_{i2} = 0 (unmodified genes), while modified genes were simulated using three different configurations: (a) μ_{i2} = 1 for the first half and μ_{i2} = 2 for the second half; (b) μ_{i2} = 0.5 for the first half and μ_{i2} = 1 for the second half; and (c) μ_{i2} = 0.5 for the first third, μ_{i2} = 1 for the second third and μ_{i2} = 2 for the last third.

In this way, we tried to mimic realistic situations with different patterns. Here, the distribution of modified genes is a simple mixture of two components with large expression differences (configuration (a)) or small expression differences (configuration (b)), or a more complex mixture with three components (configuration (c)).

Four different π0 values were considered. Because Efron's procedure was developed for situations with π0 values greater than 0.90, we used π0 = 0.9 and π0 = 0.98. We also considered two lower values of π0 that correspond to realistic situations not considered by Efron (π0 = 0.8 and π0 = 0.6). In total, 2 × 3 × 4 = 24 different cases were considered.
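For concreteness, the R sketch below generates one independent dataset under configuration (a) and computes the per-gene two-sample t-test p-values; the function name is ours, and the rounding of the number of modified genes is an implementation detail not specified in the text.

```r
## Sketch of one simulated dataset under configuration (a): 10 + 10 samples,
## a proportion pi0 of unmodified genes, and modified genes split between
## mean shifts of 1 and 2 in the second condition.
simulate_config_a <- function(m = 500, pi0 = 0.8) {
  n.mod <- round((1 - pi0) * m)
  mu2 <- c(rep(0, m - n.mod),                        # unmodified genes
           rep(1, ceiling(n.mod / 2)),               # first half of modified genes
           rep(2, floor(n.mod / 2)))                 # second half of modified genes
  x1 <- matrix(rnorm(m * 10, mean = 0),   nrow = m)  # condition j = 1
  x2 <- matrix(rnorm(m * 10, mean = mu2), nrow = m)  # condition j = 2: row i has mean mu2[i]
  p <- sapply(1:m, function(i) t.test(x1[i, ], x2[i, ], var.equal = TRUE)$p.value)
  list(x = cbind(x1, x2), mu2 = mu2, p = p)
}
```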

To evaluate the behavior of the five procedures in the context of dependent data, we also generated datasets with so-called clumpy dependence (that is, datasets for which the measurements on the genes are dependent in small groups, with each group being independent of the others).

We applied the protocol described in [18] and [19] as follows. First, an independent dataset matrix (x_{ijk}) was generated, as described above. Then, for each group of 100 genes, a random vector A = {a_{jk}}, where j = 1, 2 and k = 1,...,10, was generated from a standard normal distribution. The data matrix (y_{ijk}) was then built so that $y_{ijk} = \sqrt{\rho}\,a_{jk} + \sqrt{1 - \rho}\,x_{ijk}$, with ρ = 0.5. Thus, within each group, the genes have the same correlation, that is to say, for $i_1 \neq i_2$, $Corr(y_{i_1 j}, y_{i_2 j}) = 0.5$. To render the results comparable with those obtained in the independent setting, the expectations μ_{ij} used for generating the matrix (x_{ijk}) were divided by $\sqrt{1 - \rho}$ so that the expectations of the random variables Y_{ijk} correspond to those described in configurations (a), (b) and (c) for independent data. We also considered other ρ values, which gave similar results (data not shown).
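The block structure described above can be sketched in R as follows; x is assumed to be an m × 20 matrix of independent values (whose means have already been divided by √(1 − ρ)), and the function name is ours.

```r
## Sketch of the clumpy-dependence construction: within each block of 100 genes,
## a common random vector a_{jk} (one entry per array) is shared, giving a
## within-block gene-gene correlation of rho.
add_clumpy_dependence <- function(x, rho = 0.5, block.size = 100) {
  m <- nrow(x); n <- ncol(x)
  y <- x
  for (start in seq(1, m, by = block.size)) {
    idx <- start:min(start + block.size - 1, m)
    a <- rnorm(n)                                 # shared vector for this block of genes
    y[idx, ] <- sqrt(rho) * matrix(a, nrow = length(idx), ncol = n, byrow = TRUE) +
                sqrt(1 - rho) * x[idx, ]
  }
  y
}
```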

In each case, the p-values, calculated under the null hypothesis H0: μ_{i1} = μ_{i2}, were obtained from Student's t statistic. Then, we estimated lFDR with our procedure, referred to as polfdr, and with the four procedures presented in the Background section, referred to as locfdr (Efron), LocalFDR (Aubert et al.), twilight (Scheid and Spang) and pava.fdr (Broberg). Although these procedures were not designed to estimate the probability π0 independently of lFDR, we also compared the estimators of π0 obtained from the five procedures.

For each case, 1,000 datasets were simulated. To compare the different estimators, we considered three different criteria that are described below.

Criterion 1

Since the main contribution of lFDR is that it gives each tested hypothesis its own measure of significance, a small bias for any value within the whole interval [0, 1] can be preferable to a smaller bias limited to a subset of values within the interval. For this purpose and to assess the amplitude of the bias for the five procedures, we considered the infinity norm of the integrated error over the interval [0, 1] defined as follows:

$$b_1 = \max_{p \in [0,1]} \left| E\left\{ \widehat{lFDR}(p) - lFDR(p) \right\} \right|$$ (8)

and estimated by:

$$\hat b_1 = \max_{i=1,\ldots,m} \left| \frac{1}{1000} \sum_{k=1}^{1000} \left\{ \widehat{lFDR}\left(p_i^{(k)}\right) - lFDR\left(p_i^{(k)}\right) \right\} \right|$$ (9)

where $p_i^{(k)}$, i = 1,...,m, are the m p-values corresponding to the k-th dataset (among the 1,000 simulated datasets for each case). Here, the theoretical values $lFDR(p_i^{(k)})$ are calculated from a numerical approximation of the non-central Student's distribution [20].
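Given a hypothetical 1,000 × m matrix err whose entry (k, i) holds $\widehat{lFDR}(p_i^{(k)}) - lFDR(p_i^{(k)})$ (assumed to have been filled beforehand), criterion (9) reduces to one line of R:

```r
## b1.hat: largest absolute mean error over the m p-value positions (equation 9).
b1.hat <- max(abs(colMeans(err)))
```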

The estimated values of b1 for independent data are reported in Table 1. Although these values were always less than or equal to 0.17 for the polfdr procedure, the highest b1 values for the LocalFDR, pava.fdr, twilight and locfdr procedures were 0.20, 0.21, 0.43 and 0.87, respectively. These results also showed that the locfdr method tended to substantially overestimate lFDR. For example, Figure 1 shows the expected lFDR as a function of p for each estimator with m = 500, π0 = 0.8 and configuration (c) (the figures corresponding to all the other cases are provided in the additional files). In these figures, the horizontal scale was log-transformed to better show the differences between the methods for small p-values. For dependent datasets, the bias of the five estimators increased. While the bias of our estimator was always less than or equal to 0.17, the highest bias values for the pava.fdr, LocalFDR, twilight and locfdr methods were 0.20, 0.23, 0.41 and 0.87, respectively (see Table 10 in the additional files).

Table 1 Estimated values of b1 for the five estimators in each independent simulated case.
Figure 1 Expected lFDR as a function of log(p) for each estimator with m = 500, π0 = 0.8 and configuration (c).

Criterion 2

As noted under Background, the five methods were designed to estimate an lFDR upper bound. However, a negative bias can occur in some cases, leading to more false positive results than expected. In this context, we investigated, for the five procedures, the largest negative bias, denoted b2, over the interval [0, 1]:

$$b_2 = \left| \min_{p \in [0,1]} \left( E\left[ \widehat{lFDR}(p) - lFDR(p) \right] \times \mathbf{1}_{\left\{ E\left[ \widehat{lFDR}(p) - lFDR(p) \right] < 0 \right\}} \right) \right|$$ (10)

and estimated by:

$$\hat b_2 = \left| \min_{i=1,\ldots,m} \left( \bar e_i \times \mathbf{1}_{\{\bar e_i < 0\}} \right) \right|, \quad \text{where } \bar e_i = \frac{1}{1000} \sum_{k=1}^{1000} \left\{ \widehat{lFDR}\left(p_i^{(k)}\right) - lFDR\left(p_i^{(k)}\right) \right\}$$ (11)
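With the same hypothetical error matrix err as for criterion 1, equation (11) can be computed as follows; the convention that b̂2 is 0 when no position has a negative mean error follows directly from the indicator in (10).

```r
## b2.hat: magnitude of the most negative mean error, 0 if none is negative (equation 11).
mean.err <- colMeans(err)
b2.hat <- abs(min(c(0, mean.err[mean.err < 0])))
```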

Results for independent datasets (Table 2) indicated that all the estimators have non-negligible negative biases. However, while b2 was always less than or equal to 0.08 for our method, the maximal b2 values were 0.11, 0.18, 0.21 and 0.43 for the locfdr, LocalFDR, pava.fdr and twilight estimators, respectively. More precisely, while our estimator slightly underestimated lFDR in some cases when π0 was close to 1, the twilight method tended to underestimate lFDR for small p-values (see Figure 1) and the pava.fdr method tended to substantially underestimate lFDR for all p-values (for example, see Figure 2). The underestimation of the pava.fdr method can be attributed to its estimate of the upper bound of π0, $\min_i \hat f(p_{(i)})$, because $E\{\min_i \hat f(p_{(i)})\} \le \min_i E\{\hat f(p_{(i)})\}$. Thus, even though this method can sometimes lead to a low bias (because its negative bias compensates for the gap between the upper bound and the true value), this estimator can also generate a high negative bias (see Figure 2). These results also indicated that even though the locfdr method tended to overestimate lFDR for the majority of p-values, it also tended to underestimate lFDR for p-values close to 1.

Table 2 Estimated values of b2 for the five estimators in each independent simulated case.
Figure 2 Expected lFDR as a function of log(p) for each estimator with m = 5,000, π0 = 0.6 and configuration (a).

Criterion 3

To evaluate the accuracy of the five procedures at all points simultaneously, we estimated the root mean integrated square error (RMISE) of the five estimators which is defined by:

$$RMISE = \sqrt{ E\left[ \int_0^1 \left( \widehat{lFDR}(p) - lFDR(p) \right)^2 dp \right] }$$ (12)

and estimated by:

$$\widehat{RMISE} = \sqrt{ \frac{1}{1000} \sum_{k=1}^{1000} \sum_{i=1}^{m} \left[ \left( \widehat{lFDR}\left(p_i^{(k)}\right) - lFDR\left(p_i^{(k)}\right) \right)^2 \times \left( p_{(i+1)}^{(k)} - p_{(i)}^{(k)} \right) \right] }$$ (13)
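Assuming err and p.sorted are hypothetical 1,000 × m matrices holding, for each simulated dataset, the estimation errors and the corresponding ordered p-values, equation (13) is a Riemann-sum approximation that can be sketched as:

```r
## RMISE estimate of equation (13); the last gap p_(m+1) - p_(m) is closed at 1
## (a convention of this sketch, the text does not specify the boundary).
gaps <- cbind(p.sorted[, -1], 1) - p.sorted
rmise.hat <- sqrt(mean(rowSums(err^2 * gaps)))
```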

As shown in Table 3, these results indicated that, except for the pava.fdr method (which can substantially underestimate lFDR, as shown above), our method gave the lowest RMISE in 15/24 cases. For the 6 cases with π0 close to one (π0 = 0.98), the locfdr method yielded the lowest RMISE. For the last 3 cases, the difference between our method's RMISE and the lowest value (obtained with the twilight estimator) did not exceed 0.4% (case 7). Moreover, these results also indicated that the LocalFDR estimator, despite a small bias in all cases, had a higher RMISE than our estimator because of its large variance.

Table 3 Estimated RMISE for the five estimators in each independent simulated case.

For dependent data, the RMISE of the five estimators increased and the differences between them were smaller. Our method yielded the lowest RMISE in 7/24 cases (see Table 12 in the additional files).

However, because in practice some investigators might want to select only genes with low lFDR, we also report the results obtained with the 3 criteria over the interval [0, 0.2] (see additional files). They show that our method maintained good performance compared to the four others. Other thresholds for the p-values were considered (10% and 40%) and gave similar results (data not shown).

To compare the performance of the different estimators of the parameter π0 obtained with the different methods, we evaluated their expectations and their root mean square errors.

Table 4 gives the means of the five estimators of the parameter π0 over the 1,000 simulated independent datasets (results for dependent datasets are provided in the additional files, Tables 13-14). The average bias over the 24 simulated datasets was the smallest for our new method (0.1%), with a maximal positive bias of 12% (for m = 5,000, π0 = 60% and configuration (b)) and a maximal negative bias of 4% (for m = 500, π0 = 98% and configuration (c)). It is worth noting that the method with the highest positive bias was locfdr (29%), while the one with the highest negative bias was pava.fdr (13%).

Table 4 Mean of all estimates of π0 for the five estimators in each independent simulated case.

The estimated root mean square errors for each estimator of the parameter π0 are given in Table 5. Note that the root mean square errors of our estimator were less than or equal to 0.126 for the 24 simulated datasets, while they could reach 0.130, 0.132, 0.145 and 0.292 for the locfdr, LocalFDR, twilight and pava.fdr methods, respectively.

Table 5 Mean square error of all estimates of π0 for the five estimators in each independent simulated case.

Concerning computing time, our procedure was fast, while the twilight method was cumbersome and impracticably slow for large numbers of tested hypotheses. For example, the mean computing times on a personal computer (over 20 simulated datasets) for m = 5,000, π0 = 0.6 and configuration (c) were 50 s, 2 s, 1 s, 1 s and 1 s for the twilight, LocalFDR, polfdr, pava.fdr and locfdr methods, respectively. For a larger number of tested hypotheses (m = 50,000, not considered in the simulation study), the mean computing times were 7,261 s, 162 s, 108 s, 2 s and 1 s, respectively.

Real data

Our method, together with twilight, LocalFDR, locfdr and pava.fdr, was applied to two datasets from genomic breast-cancer studies (Hedenfalk et al. [21] and Wang et al. [22]).

Data from Hedenfalk et al. [21]

Hedenfalk et al. [21] investigated the gene-expression changes between hereditary (BRCA1, BRCA2) and non-hereditary breast cancers. The initial dataset consists of 3,226 genes with expression log-ratios corresponding to the fluorescent intensities from a tumor sample divided by those from a common reference sample. Like Aubert et al. [13], we focused on the comparison of BRCA1 and BRCA2 and used the same p-values, which were calculated for each gene from a two-sample t-test.

Figure 3 shows the estimated lFDR as a function of the p-values for the five estimators. The five procedures yielded different results. For example, the estimated lFDR values for 3 different genes are reported in Table 6. These results show clear differences between the five methods. In particular, the locfdr method gave 1 for the three genes, which can be explained by a π0 value smaller than 0.9. Indeed, the estimated π0 values were, respectively, 0.67, 0.67, 0.66, 0.66 and 1 for the polfdr, twilight, LocalFDR, pava.fdr and locfdr methods. Concerning the four remaining procedures, the largest differences for the three genes were, respectively, 3%, 7% and 5%.

Figure 3 Estimated lFDR as a function of log(p) for each estimator for the Hedenfalk et al. dataset.

Table 6 lFDR estimations for three genes in Hedenfalk et al. data.

Data from Wang et al. [22]

Wang et al. [22] wanted to provide quantitative gene-expression combinations to predict disease outcomes for patients with lymph-node-negative breast cancers. Over 22,000 expression measurements were obtained from Affymetrix oligonucleotide microarray U133A GeneChips for 286 samples. The expression values calculated by the Affymetrix GeneChip analysis software MAS5 are available on the GEO website [23], together with clinical data. For normalisation, the quantile method [24] was applied to log-transformed data.

Here, we focused on identifying gene-expression changes that distinguish patients who experienced a tumour relapse within 5 years from patients who remained disease-free for at least 5 years. The p-values were calculated for each gene from a two-sample t-test and the five methods were applied.

Figure 4 shows the estimated lFDR as a function of the p-values for the 5 estimators. As noted above, the FDR can be estimated from the lFDR using equation (3), via the mean of the estimated lFDR over the rejection region Γ. When selecting all genes so that the estimated FDR is less than 5%, our method selected 325 genes, the pava.fdr and LocalFDR methods selected 367 and 229 genes, respectively, and the twilight and locfdr methods did not select any gene. It is worth noting that these strong differences have substantial consequences for the subsequent analyses. The estimated π0 values were, respectively, 0.711, 0.720, 0.714, 0.723 and 0.914 for the polfdr, pava.fdr, LocalFDR, twilight and locfdr methods.

Figure 4 Estimated lFDR as a function of log(p) for each estimator for the Wang et al. dataset.

Discussion

In the simulations with independent datasets, the results indicated good performance of our procedure compared to the four previously published methods. Indeed, while the infinity norm b1 was small in every simulated case with our procedure, it could be large for the twilight and locfdr procedures. Moreover, despite the fact that the five estimators were designed with conservative biases, the twilight procedure could generate a substantial negative bias for small p-values, the locfdr procedure underestimated the lFDR for p-values close to 1, and pava.fdr tended to underestimate lFDR for all p-values. In addition, compared to LocalFDR, our method gave a smaller RMISE in all cases. When considering only the lowest p-values, the simulation results showed the same trend. In summary, our new estimator exhibited more stable behavior than the four others.

For dependent datasets, simulation results led to similar conclusions. Indeed, correlations between genes do not affect the marginal distribution of the p-values but increase the variability of the different methods and the bias of the estimators of π0.

It is worth noting that a major assumption underlying our procedure, as well as twilight, LocalFDR and pava.fdr, concerns the distribution of the p-values under the null hypothesis. Because the uniformity assumption is sometimes not tenable [12], Efron's procedure estimates the null distribution parameters from the observed marginal distribution. However, a limitation of that approach is the need for additional assumptions concerning the proportion of true null hypotheses. Another way to address the problem of the null distribution concerns how the p-values are calculated, notably using sampling methods (see, e.g., [25–27]).

Conclusion

Herein, we proposed a novel, simple and efficient procedure for estimating the lFDR. Estimating its value is essential for genomic studies, as it quantifies the gene-specific evidence of association with the clinical or biological variable of interest. Moreover, it enables calculation of the FDR.

As seen from the simulation results, our new estimator performed well in comparison to locfdr, twilight, LocalFDR and pava.fdr. As discussed above, our method yielded a positive bias for lFDR that reflects the conservative estimation of the probability π0. However, this limitation is compensated for by the fact that no assumption is required for f1.

Finally, we think that extending our approach to multidimensional settings could be useful, as recently discussed by Ploner et al. [28], but will require additional investigations.

The R function polfdr that implements the procedure is available on the polfdr website [30].

Methods

As for the procedures proposed by Aubert et al., Scheid and Spang and Broberg, we make the assumption that, under the null hypothesis, the p-values are uniformly distributed. However, instead of estimating the density f (and then taking the reciprocal of the estimate), we directly estimate the reciprocal of f.

1/f estimation

Let us consider ϕ = F⁻¹, the inverse of the cumulative distribution function of the p-values. Then, for all p ∈ [0, 1], ϕ(F(p)) = p, and 1/f is the first derivative of the function ϕ. Indeed, since ϕ ∘ F is the identity function:

$$\frac{d\phi(F(p))}{dp} = 1.$$ (14)

Moreover:

$$\frac{d\phi(F(p))}{dp} = \frac{dF(p)}{dp} \times \frac{d\phi(F(p))}{dF(p)} = f(p) \times \frac{d\phi(F(p))}{dF(p)}.$$ (15)

Thus:

$$\frac{1}{f(p)} = \frac{d\phi(F(p))}{dF(p)}$$ (16)

Equation (16), illustrated in Figure 5, is linked to the geometrical relationship between the FDR and the lFDR, as noted by Efron [6].

Figure 5 Graph of the null cumulative distribution versus the marginal cumulative distribution.

Because the lFDR (and thus 1/f) is non-negative, the function ϕ is non-decreasing. Moreover, assuming that the lFDR is non-decreasing in p (that is to say, the closer a p-value is to one, the greater the probability that the null hypothesis is true), the function ϕ is convex. We therefore propose using a convex polynomial of degree 10 for ϕ.

Therefore, we consider the following linear formulation to represent the relationship between the p-values and the empirical cumulative distribution function:

$$\mathbf{p} = \widetilde{F(\mathbf{p})}\,A + E$$ (17)

where $\mathbf{p} = {}^t(p_{(1)},\ldots,p_{(m)})$ is the column vector of observed p-values, $\widetilde{F(\mathbf{p})} = \left(\widetilde{F(\mathbf{p})}^0,\ldots,\widetilde{F(\mathbf{p})}^d\right)$ is the design matrix whose columns are the element-wise powers (from 0 to d) of the vector of the empirical cumulative distribution function evaluated at the p-values, $A = {}^t(a_0,\ldots,a_d)$ is the column vector of the polynomial's coefficients, d is the degree of the polynomial, and E, the error term, is a random vector whose expectation is 0.
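In R, the design matrix of equation (17) can be sketched as follows (the function name is ours; ties among p-values are ignored):

```r
## Build the regression inputs of equation (17): sorted p-values, empirical cdf
## values i/m, and the matrix of their powers 0,...,d.
build_design <- function(p, d = 10) {
  m <- length(p)
  Fp <- (1:m) / m                      # empirical cdf evaluated at the sorted p-values
  X <- outer(Fp, 0:d, "^")             # column j + 1 holds Fp^j
  list(p.sorted = sort(p), Fp = Fp, X = X)
}
```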

The estimator of the vector A of polynomial regression coefficients can be obtained by solving the following least-squares minimization problem with constraints:

$$\min_{CA \ge 0} \left\| \widetilde{F(\mathbf{p})}\,A - \mathbf{p} \right\|^2$$ (18)

where

$$C = \begin{pmatrix}
0 & \cdots & k(k-1)\left(\frac{1}{m}\right)^{k-2} & \cdots & d(d-1)\left(\frac{1}{m}\right)^{d-2} \\
\vdots & & \vdots & & \vdots \\
0 & \cdots & k(k-1)\left(\frac{m}{m}\right)^{k-2} & \cdots & d(d-1)\left(\frac{m}{m}\right)^{d-2} \\
0 & \cdots & k\left(\frac{1}{m}\right)^{k-1} & \cdots & d\left(\frac{1}{m}\right)^{d-1} \\
\vdots & & \vdots & & \vdots \\
0 & \cdots & k\left(\frac{m}{m}\right)^{k-1} & \cdots & d\left(\frac{m}{m}\right)^{d-1}
\end{pmatrix}$$ (19)

We impose the constraints CA ≥ 0 on the minimization problem because of the convexity and monotonicity of ϕ, which can be written: for all i ∈ {1,...,m}, $\phi''(i/m) = \sum_{k=2}^{d} k(k-1)\left(\frac{i}{m}\right)^{k-2} a_k \ge 0$ and $\phi'(i/m) = \sum_{k=1}^{d} k\left(\frac{i}{m}\right)^{k-1} a_k \ge 0$. Quadratic programming is used to calculate the solution [29]. Finally, an estimate of 1/f(p) = ϕ'(F(p)) is deduced from the estimated regression coefficients.
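Below is a sketch of the constrained fit, assuming the quadprog package for the quadratic program; the small ridge term added to keep the quadratic form numerically positive definite, and any rescaling or lower degree that may be needed for very ill-conditioned designs, are our choices, not necessarily those of the polfdr implementation.

```r
## Constrained least squares of (18)-(19): minimize ||X A - p||^2 subject to C A >= 0,
## where the rows of C encode phi''(i/m) >= 0 (convexity) and phi'(i/m) >= 0 (monotonicity).
library(quadprog)

fit_phi <- function(p, d = 10) {
  m <- length(p)
  ps <- sort(p)
  Fp <- (1:m) / m
  X <- outer(Fp, 0:d, "^")                             # design matrix of equation (17)
  k <- 0:d
  grid <- (1:m) / m
  C2 <- outer(grid, k, function(x, k) ifelse(k >= 2, k * (k - 1) * x^pmax(k - 2, 0), 0))
  C1 <- outer(grid, k, function(x, k) ifelse(k >= 1, k * x^pmax(k - 1, 0), 0))
  C <- rbind(C2, C1)                                   # constraint matrix of equation (19)
  sol <- solve.QP(Dmat = crossprod(X) + diag(1e-8, d + 1),  # ridge for numerical stability
                  dvec = drop(crossprod(X, ps)),
                  Amat = t(C), bvec = rep(0, nrow(C)))
  A <- sol$solution                                    # estimated polynomial coefficients
  one.over.f <- drop(C1 %*% A)                         # phi'(F(p_(i))), i.e. estimate of 1/f
  list(coef = A, p.sorted = ps, one.over.f = one.over.f)
}
```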

π0 estimation

Classical approaches attempt to estimate π0 from f(1), which is the lowest upper bound of π0 based on the mixture model (4). Indeed, if no assumption is made for f1, π0 is not identifiable and f(1) is its lowest upper bound based on equation (4). Here, we propose using the same model to estimate π0 as that used to estimate 1/f; therefore, we consider the reciprocal of the derivative ϕ'. However, because the bias and variance are higher at the boundaries of the domain, estimating π0 from a value close (but not equal) to 1 is more appropriate. In order to obtain an estimator that is less sensitive with respect to ϕ', it is reasonable to estimate π0 at the point where ϕ'' is at its minimum:

$$\hat\pi_0 = \frac{1}{\phi'\left(\arg\min_{x > a} \phi''(x)\right)}.$$ (20)

In practice, we propose setting a = 0.5. Note that the estimation of π0 is not sensitive to the choice of a and other values can be considered.
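The π0 step (20) can then be sketched as follows, reusing the coefficient vector returned by the constrained fit sketched above; the grid resolution and the default a = 0.5 are the only choices made here.

```r
## Estimate pi0 as 1 / phi'(x*), where x* minimizes phi''(x) over x > a (equation 20).
estimate_pi0 <- function(A, a = 0.5, grid = seq(a, 1, by = 0.001)) {
  d <- length(A) - 1
  k <- 0:d
  phi1 <- function(x) sum(ifelse(k >= 1, k * x^pmax(k - 1, 0), 0) * A)            # phi'(x)
  phi2 <- function(x) sum(ifelse(k >= 2, k * (k - 1) * x^pmax(k - 2, 0), 0) * A)  # phi''(x)
  x.star <- grid[which.min(sapply(grid, phi2))]
  1 / phi1(x.star)
}
```

For example, with the sketches above, pi0.hat <- estimate_pi0(fit_phi(p)$coef).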

References

  1. Hochberg Y, Tamhane A: Multiple Comparison Procedures. 1987, Wiley.
  2. Benjamini Y, Hochberg Y: Controlling the false discovery rate: a practical and powerful approach to multiple testing. J R Stat Soc Ser B. 1995, 57: 289-300.
  3. Storey JD: A direct approach to false discovery rates. J R Stat Soc Ser B. 2001, 64: 479-498. doi:10.1111/1467-9868.00346.
  4. Glonek G, Salomon P: Comment on 'Resampling-based multiple testing for microarray data analysis' by Ge Y, Dudoit S, Speed T. TEST. 2003, 12: 1-44. doi:10.1007/BF02595811.
  5. Efron B, Tibshirani R, Storey J, Tusher V: Empirical Bayes analysis of a microarray experiment. J Am Stat Assoc. 2001, 96: 1151-1160. doi:10.1198/016214501753382129.
  6. Efron B: Local false discovery rates. Technical Report. 2005. [http://www-stat.stanford.edu/~brad/papers/False.pdf]
  7. Liao JG, Lin Y, Selvanayagam ZE, Shih WJ: A mixture model for estimating the local false discovery rate in DNA microarray analysis. Bioinformatics. 2004, 20: 2694-2701. doi:10.1093/bioinformatics/bth310.
  8. Pan W, Lin J, Le C: A mixture model approach to detecting differentially expressed genes with microarray data. Funct Integr Genomics. 2003, 3: 117-124. doi:10.1007/s10142-003-0085-7.
  9. Newton MA, Noueiry A, Sarkar D, Ahlquist P: Detecting differential gene expression with a semiparametric hierarchical mixture method. Biostatistics. 2004, 5: 155-176. doi:10.1093/biostatistics/5.2.155.
  10. Broët P, Lewin A, Richardson S, Dalmasso C, Magdelenat H: A mixture model-based strategy for selecting sets of genes in multiclass response microarray experiments. Bioinformatics. 2004, 20: 2562-2571. doi:10.1093/bioinformatics/bth285.
  11. Langaas M, Lindqvist B, Ferkingstad E: Estimating the proportion of true null hypotheses, with application to DNA microarray data. J R Stat Soc Ser B. 2005, 67: 555-572. doi:10.1111/j.1467-9868.2005.00515.x.
  12. Efron B: Large-scale simultaneous hypothesis testing: the choice of a null hypothesis. J Am Stat Assoc. 2004, 99: 96-104. doi:10.1198/016214504000000089.
  13. Aubert J, Bar-Hen A, Daudin JJ, Robin S: Determination of the differentially expressed genes in microarray experiments using localFDR. BMC Bioinformatics. 2004, 5: 125. doi:10.1186/1471-2105-5-125.
  14. Scheid S, Spang R: A stochastic downhill search algorithm for estimating the local false discovery rate. IEEE Transactions on Computational Biology and Bioinformatics. 2004, 1: 98-108. doi:10.1109/TCBB.2004.24.
  15. Broberg P: A comparative review of estimates of the proportion unchanged genes and the false discovery rate. BMC Bioinformatics. 2005, 6: 199. doi:10.1186/1471-2105-6-199.
  16. Storey JD, Tibshirani R: Statistical significance for genome-wide studies. Proc Natl Acad Sci. 2003, 100: 9440-9445. doi:10.1073/pnas.1530509100.
  17. Hochberg Y, Benjamini Y: More powerful procedures for multiple significance testing. Stat Med. 1990, 9: 811-818. doi:10.1002/sim.4780090710.
  18. Storey JD, Tibshirani R: Estimating false discovery rates under dependence, with applications to DNA microarrays. Technical Report 2001-28. 2001, Department of Statistics, Stanford University.
  19. Qiu X, Klebanov L, Yakovlev A: Correlation between gene expression levels and limitations of the empirical Bayes methodology for finding differentially expressed genes. Stat Appl Genet Mol Biol. 2005, 4: Article 34.
  20. Johnson NL, Kotz S, Balakrishnan N: Continuous Univariate Distributions, Vol. 2, chapters 28 and 31. 1995, Wiley, New York.
  21. Hedenfalk I, Duggan D, Chen Y, Radmacher M, Bittner M, Simon R, Meltzer P, Gusterson B, Esteller M, Kallioniemi OP, Wilfond B, Borg A, Trent J, Raffeld M, Yakhini Z, Ben-Dor A, Dougherty E, Kononen J, Bubendorf L, Fehrle W, Pittaluga S, Gruvberger S, Loman N, Johannsson O, Olsson H, Sauter G: Gene-expression profiles in hereditary breast cancer. N Engl J Med. 2001, 344: 539-548. doi:10.1056/NEJM200102223440801.
  22. Wang Y, Klijn JG, Zhang Y, Sieuwerts AM, Look MP, Yang F, Talantov D, Timmermans M, Meijer-van Gelder ME, Yu J, Jatkoe T, Berns EM, Atkins D, Foekens JA: Gene-expression profiles to predict distant metastasis of lymph-node-negative primary breast cancer. Lancet. 2005, 365: 671-679.
  23. Gene Expression Omnibus. [http://www.ncbi.nlm.nih.gov/geo]
  24. Bolstad BM, Irizarry RA, Astrand M, Speed TP: A comparison of normalization methods for high density oligonucleotide array data based on bias and variance. Bioinformatics. 2003, 19: 185-193. doi:10.1093/bioinformatics/19.2.185.
  25. Pan W: On the use of permutation in and the performance of a class of nonparametric methods to detect differential gene expression. Bioinformatics. 2003, 19: 1333-1340. doi:10.1093/bioinformatics/btg167.
  26. Guo X, Pan W: Using weighted permutation scores to detect differential gene expression with microarray data. J Bioinform Comput Biol. 2005, 3 (4): 989-1006. doi:10.1142/S021972000500134X.
  27. Xie Y, Pan W, Khodursky AB: A note on using permutation based false discovery rate estimate to compare different analysis methods for microarray data. Bioinformatics. 2005, 21: 4280-4288. doi:10.1093/bioinformatics/bti685.
  28. Ploner A, Calza S, Gusnanto A, Pawitan Y: Multidimensional local false discovery rate for microarray studies. Bioinformatics. 2006, 22: 556-565. doi:10.1093/bioinformatics/btk013.
  29. Gill PE, Murray W, Wright MH: Practical Optimization. 1981, London: Academic Press.
  30. polfdr. [http://ifr69.vjf.inserm.fr/polfdr]


Acknowledgements

CD received a post-doctoral grant from the Région Ile-de-France (EPIGENIC project). We thank the three anonymous reviewers for their helpful comments, which contributed to improving the manuscript.

Author information

Correspondence to Cyril Dalmasso.

Additional information

Authors' contributions

CD, ABH and PB contributed equally to this work. All authors read and approved the final manuscript.


Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Cite this article

Dalmasso, C., Bar-Hen, A. & Broët, P. A constrained polynomial regression procedure for estimating the local False Discovery Rate. BMC Bioinformatics 8, 229 (2007). https://doi.org/10.1186/1471-2105-8-229
