Abstract
Background
Before conducting a microarray experiment, one important design decision is the number of arrays required in order to have adequate power to identify differentially expressed genes. This paper discusses some crucial issues in the problem formulation, parameter specifications, and approaches that are commonly proposed for sample size estimation in microarray experiments. Common methods formulate the sample size as the minimum number of arrays necessary to achieve a specified sensitivity (proportion of detected truly differentially expressed genes) on average, at a specified false discovery rate (FDR) level and a specified expected proportion (π_{1}) of truly differentially expressed genes in the array. Unfortunately, the probability of attaining the specified sensitivity under such a formulation can be low. We formulate the sample size problem as the number of arrays needed to achieve a specified sensitivity with 95% probability at the specified significance level. A permutation method using a small pilot dataset to estimate sample size is proposed. This method accounts for correlation and effect size heterogeneity among genes.
Results
A sample size estimate based on the common formulation, to achieve the desired sensitivity on average, can be calculated by a univariate method without taking the correlation among genes into consideration. This formulation of the sample size problem is inadequate because the probability of attaining the specified sensitivity can be lower than 50%. In contrast, the sample size calculated by the proposed permutation method ensures detecting at least the desired sensitivity with 95% probability. The method is shown to perform well on a real example dataset using a small pilot dataset of 4 to 6 samples per group.
Conclusions
We recommend that the sample size problem should be formulated to detect a specified proportion of differentially expressed genes with 95% probability. This formulation ensures finding the desired proportion of true positives with high probability. The proposed permutation method takes the correlation structure and effect size heterogeneity into consideration and works well using only a small pilot dataset.
Background
DNA microarray technology provides tools for studying the expression profiles of hundreds or thousands of distinct genes simultaneously. A fundamental goal in microarray studies is to identify a subset of genes that are differentially expressed under experimental conditions of interest. Before conducting a microarray experiment, one important issue that needs to be determined is the number of arrays (replicates) required in order to have adequate power to identify differentially expressed genes.
Many sample size estimation methods have been developed for various Type I error specifications, such as the familywise error rate (FWE) [1-3], the false discovery rate (FDR) [4-8], and the number of false positives [7,9]. The sample size for a microarray study is commonly calculated as the number of arrays needed to achieve the specified power on average (e.g., [3-6,9,10]). The power, the proportion of truly differentially expressed genes expected to be detected, is known as the sensitivity. With a sample size calculated to achieve a specified sensitivity on average, the proportion of truly differentially expressed genes actually detected will frequently fall below that average; consequently, the calculated sample size tends to give an overoptimistic outcome. Wang and Chen [2], Tsai et al. [7], and Shao and Tseng [8] proposed an alternative formulation: the sample size is calculated to ensure detecting at least the specified sensitivity level with a specified probability. This will be referred to as the (confidence) probability formulation.
When the sample size problem is formulated to achieve the specified sensitivity on average, we will show that the needed sample size can be calculated simply using the univariate sample size formula, without considering dependency among genes. On the other hand, if the problem is formulated to achieve a specified sensitivity with a specified probability, then it requires estimating a percentile of the distribution of the sensitivity; in this case, the dependency among genes needs to be taken into consideration. Tsai et al. [7] presented an approach for controlling the comparisonwise error rate (CWER) under a model of independent or equicorrelated normal distributions with a constant power for all genes. Shao and Tseng [8] proposed a model-free procedure to estimate a general correlation matrix under the normal distribution; they used a dataset of 72 samples to illustrate the estimation of the correlation matrix. However, the size of pilot data is often small, 10 or fewer samples per group, and in our simulation study the estimated variances of the true positives were often negative (set to zero), resulting in poor sample size estimates. Tibshirani [10] proposed a permutation method to estimate the FDR and average sensitivity for assessing a specific sample size. Tibshirani's method requires only a small pilot dataset and is completely model-free, in the sense that no assumptions on the distribution, effect sizes, or correlations of the test statistics are required. However, the standard deviation estimate (standard error) of a test statistic depends on the sample size: a test statistic from a small sample has larger variation than one from a larger sample. Since the sample size of a pilot dataset is often small, the cutoff level based on a small pilot dataset often exceeds the true cutoff for the needed sample and results in overestimation of the needed sample size.
This paper presents an overview of the power and parameter specifications, and proposes a permutation procedure for sample size determination under the probability formulation ([2,7,8]). The approach of Tibshirani [10] is improved to attain a more correct permutation distribution by incorporation of an adjustment factor. The proposed method uses a small pilot dataset of 4 to 6 samples per group; the method requires fewer samples than the Tibshirani [10] method when the sample size for the pilot dataset is small relative to the needed sample size. When the sample size for the pilot dataset is large, the proposed method and the Tibshirani [10] method are equivalent.
Methods
Let m denote the number of genes studied in an array of which m_{0 }and m_{1 }are the numbers of nondifferentially and differentially expressed genes, respectively. Given the significance level α (per comparisonwise error rate), the results of m tests can be summarized as a 2 × 2 table (Table 1).
Table 1. Four possible outcomes when testing m hypotheses.
V/m_{0} is the proportion of nondifferentially expressed genes that are declared significant; its expectation is the per comparisonwise error rate, E(V)/m_{0} = α. V/R is the proportion of genes declared significant that are, in fact, not differentially expressed; its expectation, given R > 0, is the false discovery rate, E(V/R) = q. U/m_{1} is the proportion of truly differentially expressed genes that are correctly declared significant; in a diagnostic problem, this proportion is often referred to as the true positive rate, or the sensitivity. Taking the expectation gives the "average sensitivity" E(U)/m_{1}, denoted by λ.
Sample Size Estimation
In sample size estimation, m, m_{1}, and the (standardized) effect sizes δ = (δ_{1}, ..., δ_{m1}) for the differentially expressed genes are prespecified by the investigator. Estimating the sample size needed to achieve the specified sensitivity λ_{0} on average is straightforward. Since m_{1} and λ_{0} are prespecified, given an FDR level q* the corresponding per comparisonwise significance level α can easily be calculated: setting α = [m_{1} λ_{0} q*]/[m_{0} (1 − q*)], the FDR will be controlled at q* for sufficiently large m_{1} and m_{0}.
If δ_{i} = δ_{0} is constant for all i, then the comparisonwise power (1 − β) of the univariate test is the same for every gene and is set exactly equal to λ_{0}. Given α, δ_{0}, and (1 − β) = λ_{0}, the sample size can be based on the univariate sample size calculation and is given approximately as

n* = 2(t_{α/2} + t_{β})^{2}/δ_{0}^{2},     (1)

where t_{α/2} and t_{β} are the upper 100(α/2)th and 100βth percentiles of a t-distribution. If the δ_{i}'s are different, then the per-gene power (1 − β_{i}) is obtained by inverting Equation (1) with δ_{0} replaced by δ_{i}. The sample size n* can then be calculated from the following equation:

(1/m_{1}) Σ_{i=1}^{m_{1}} (1 − β_{i}) = λ_{0}.     (2)
The needed sample size is n = ⌈n*⌉, where ⌈n*⌉ is the smallest integer greater than or equal to n*. Given the sample size n so calculated, the outcome of a univariate test on a truly differentially expressed gene can be modeled by a Bernoulli random variable with success probability at least (1 − β_{i}), since n ≥ n*. The expected number of true detections is then at least m_{1} λ_{0}, regardless of the correlation structure among genes, and hence the desired sensitivity is achieved on average. Most sample size estimation methods are based on this approach or extensions of it [3-6,9-11]. However, the sample size calculated under this formulation is inadequate; a simple demonstration under an independence model is given below.
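As a minimal numeric sketch of the calculation above, the per-gene α and the smallest per-group n can be found with scipy's noncentral t distribution. The helper names and the iterative search are my own scaffolding, not the paper's software; the power formula is the standard two-sided two-sample t-test power used as a stand-in for Equation (1):

```python
import math
from scipy import stats

def per_gene_alpha(m, pi1, q_star, lam0):
    """alpha = m1*lam0*q* / (m0*(1 - q*)), which controls the FDR at q*."""
    m1 = m * pi1
    m0 = m - m1
    return (m1 * lam0 * q_star) / (m0 * (1.0 - q_star))

def univariate_sample_size(delta0, alpha, power_target, n_max=1000):
    """Smallest per-group n whose two-sided two-sample t-test power,
    at standardized effect delta0, reaches power_target."""
    for n in range(2, n_max + 1):
        df = 2 * (n - 1)
        t_crit = stats.t.ppf(1.0 - alpha / 2.0, df)
        nc = delta0 * math.sqrt(n / 2.0)  # noncentrality parameter
        power = (1.0 - stats.nct.cdf(t_crit, df, nc)
                 + stats.nct.cdf(-t_crit, df, nc))
        if power >= power_target:
            return n
    raise ValueError("no n <= n_max reaches the target power")

# Parameters from the text: m = 2000, pi1 = 5%, q* = 0.05, lam0 = 90%
alpha = per_gene_alpha(m=2000, pi1=0.05, q_star=0.05, lam0=0.9)  # ~0.00249
n = univariate_sample_size(delta0=2.0, alpha=alpha, power_target=0.9)
```

With these parameters α reproduces the paper's value of about 0.00249, and the resulting n is in the low teens, consistent with the initial sample size n = 13 quoted later for this setting.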
Given m, π_{1} (= m_{1}/m), a constant effect size δ_{i} = δ_{0}, q*, λ_{0}, and the calculated sample size n (based on Equation 1), under the independence model the total number of truly differentially expressed genes detected, U, is a binomial random variable with success probability (1 − β) (≥ λ_{0} since n ≥ n*). The probability ϕ_{λ0} of identifying at least a fraction λ_{0} of the m_{1} differentially expressed genes can be calculated as a sum of binomial probabilities [2,7]:

ϕ_{λ0} = P(U ≥ m_{1} λ_{0}) = Σ_{u=⌈m_{1}λ_{0}⌉}^{m_{1}} C(m_{1}, u) (1 − β)^{u} β^{m_{1}−u}.     (3)
The method of using Equation (1) to estimate the sample size is referred to as the univariate method. Columns 3-5 of Table 2 show the estimated sample size n, the average sensitivity λ, and the probability ϕ_{λ0} at λ_{0} = 0.6, 0.7, 0.8, 0.9. The parameters used in the calculation are: m = 2,000; π_{1} = 5%, 10%, 20%; δ_{0} = 2; and q* = 0.05. It can be seen that the probability ϕ_{λ0} can be less than 60%. That is, an experiment sized by this formulation can have sensitivity below the specified λ_{0} level with more than 40% probability.
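Under the independence model the probability ϕ_{λ0} is just a binomial tail probability, which makes the weakness of the average formulation easy to see numerically. A small sketch (the function name is mine):

```python
import math
from scipy.stats import binom

def prob_at_least(m1, power, lam0):
    """phi = P(U >= ceil(m1 * lam0)), where U ~ Binomial(m1, power)."""
    k = math.ceil(m1 * lam0)
    return float(binom.sf(k - 1, m1, power))

# When the per-gene power only just equals lam0, U is centered at m1*lam0,
# so phi sits near one half: the desired sensitivity is reached on average
# but missed in roughly half of the experiments.
phi = prob_at_least(m1=100, power=0.8, lam0=0.8)
```

This is exactly the effect shown in Table 2: sizing for the average leaves ϕ_{λ0} near 50-60% rather than near 95%.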
Table 2. Average formulation versus 95% probability formulation under the independent model.^{a}
Alternatively, Wang and Chen [2] formulated the problem as: the number of arrays needed to achieve the specified sensitivity λ_{0} with a probability ϕ_{λ0}. In this formulation both λ_{0} and ϕ_{λ0} need to be specified and are not necessarily equal. The ϕ_{λ0} is set at 95%, consistent with the common statistical practice of using a 95% confidence probability. Under this formulation, for a specified λ_{0} the needed number of arrays is calculated so that the average sensitivity is greater than λ_{0} and the 5th percentile, λ_{5}, of the distribution of the sensitivity U/m_{1} is greater than λ_{0}:

λ ≥ λ_{0} and λ_{5} ≥ λ_{0}; equivalently, P(U/m_{1} ≥ λ_{0}) ≥ 0.95.
In the independent, constant effect size model, Tsai et al. [7] used Equations (1) and (3) to estimate the needed sample size; this is referred to as the binomial method. Columns 6-8 of Table 2 show the estimated sample size n, the average sensitivity λ, and the probability ϕ_{λ0} for λ_{0} = 0.6, 0.7, 0.8, 0.9. The probabilities in Column 8 are all higher than 95% because n ≥ n*. The procedure thus ensures detecting the specified proportion of differentially expressed genes with at least 95% probability.
In Table 2, the theoretical results indicate that the two methods give quite close sample size estimates. The difference between the estimates reflects the difference between the two formulations; when δ_{0} = 2, the difference is at most 1. For a given sensitivity, the needed sample size increases as the effect size δ_{0} decreases, and the difference between the two formulations' estimates grows. We calculated the sample sizes using the same parameters as Table 2 for δ_{0} = 1; the sample size differences increase to about four times those of Table 2 (data not shown).
Permutation Method for Sample Size Estimation
Tibshirani [10] proposed a permutation method that accounts for both dependency and unequal effect sizes among genes, using a pilot dataset to assess a sample size. This method is applied here to estimate the required sample size. Because the sample size of the pilot data is typically smaller than the needed sample size, the null distributions generated from the pilot data have greater variation; simply using the null distributions generated from a small pilot dataset can therefore overestimate the needed sample size. A procedure modified from the Tibshirani [10] method, with an adequate adjustment for sample size, is proposed below.
For simplicity, assume an equal sample size in each group, denoted n = n_{0} = n_{1}. Start with pilot data containing at least 4 samples per group, denoted n_{0p} and n_{1p} for the control and treatment groups, respectively. For specified m, m_{1}, δ = (δ_{1}, ..., δ_{m1}), q*, and λ_{0}, the algorithm for a two-sample t-test is as follows.
Algorithm: Sample Size Estimation (See additional file 1 for a software application)
Additional file 1. The software for the algorithm of the proposed method. It provides software and an example for the algorithm of the proposed method.
1. Set α = [m_{1} λ_{0} q*]/[m_{0}(1 − q*)] and use the method of Tsai et al. [7] (Column 6 of Table 2) to find the needed sample size as the initial sample size n.
2. Compute the adjustment factor f = f_{1} f_{2}, where f_{1} = t_{2(n−1), 1−α/2}/t_{n0p+n1p−2, 1−α/2}, f_{2} = [(n_{0p}+n_{1p})/(n_{0p}+n_{1p}−2)]^{1/2}, and t_{df, p} is the p^{th} percentile of a t-distribution with df degrees of freedom.
3. Generate the b-th permutation sample.
4. Compute the t-statistics and the sample standard deviations of the permutation sample for all genes.
5. Multiply each t-statistic by the factor f and add an effect to a randomly selected set of m_{1} t-statistics to represent the differentially expressed genes, generating the permutation statistics s_{b} = {s_{0b}, s_{1b}}, where s_{0b} is the set for the nondifferentially expressed genes and s_{1b} is the set for the differentially expressed genes, such that s_{0b} = f t_{0b} and s_{1b} = f t_{1b} + δ(n/2)^{1/2}, where t_{0b} and t_{1b} are the vectors of t-statistics, δ is the vector of effect sizes (standardized by the vector of sample standard deviations from Step 4), and n is the current candidate sample size.
6. Store the permutation statistics s_{b}.
7. Repeat Steps 3-6 for all possible permutations, b = 1, 2, ..., N, where N = C(n_{0p}+n_{1p}, n_{0p}) is the number of ways to choose n_{0p} of the pooled pilot samples.
8. Construct the null distribution by pooling all permutation statistics from the set of nondifferentially expressed genes s_{0 }= {s_{01}, s_{02}, ..., s_{0N}}. Find the 100×(α/2)^{th }and 100×(1  α/2)^{th }percentiles as the critical values.
9. Compute the number of true positives u_{b}, the number of statistics in s_{1b} exceeding the critical values, for each permutation sample b = 1, 2, ..., N.
10. Order u_{1}, u_{2}, ..., u_{N}, and find the 5^{th }percentile, denoted by u*.
11. Compare u* to m_{1 }λ_{0}. If u* ≥ m_{1}λ_{0}, stop and report n as the sample size estimate; otherwise, increase n by 1 and go to 2.
In the proposed algorithm, the permutation t-statistics of the nondifferentially expressed genes from all possible permutations are pooled to estimate the null distribution of the test statistics (Step 8). The number of true positives (U) is counted for each permutation sample (Step 9), since the set of differentially expressed genes in each permutation sample is known, and the distribution of U and its 5^{th} percentile u* are estimated (Step 10). To reduce the excess variation of the permutation distribution, the proposed method includes the adjustment factor f = f_{1}f_{2}, which consists of two scale factors. The first factor, f_{1}, accounts for the difference in sample sizes between the pilot study and the planned study; the second factor, f_{2}, uses the maximum likelihood estimate of the t-statistic [12]. When the sample size of the pilot data is large, both f_{1} and f_{2} converge to 1 and the proposed and Tibshirani [10] methods are equivalent. (Note that Tibshirani's method was proposed under the average formulation.) Since the permutation technique is used to estimate the critical value and the distribution of the sensitivity, no assumptions on the distribution of the t-statistics or the dependency among them are made. Furthermore, the proposed method does not need to estimate the covariance matrix among all genes, which can cause computational difficulty when the sample size of the pilot dataset is small.
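One pass of the algorithm (Steps 2-10 for a candidate n) can be sketched as follows. This is my own condensed re-implementation under stated assumptions, not the authors' software: I take f_{1} to be the ratio of t critical values between the planned and pilot degrees of freedom and f_{2} the small-sample correction [(n_{0p}+n_{1p})/(n_{0p}+n_{1p}−2)]^{1/2}, assume a constant effect size δ_{0} with shift δ_{0}(n/2)^{1/2}, and fix the differentially expressed (DE) genes rather than re-drawing them each permutation:

```python
import itertools
import math
import numpy as np
from scipy import stats

def u_star_for_n(pilot0, pilot1, n, de_idx, delta0, alpha):
    """Return the 5th percentile u* of true-positive counts over all
    permutations of the pooled pilot data, for candidate per-group size n."""
    n0p, n1p = pilot0.shape[1], pilot1.shape[1]
    pooled = np.hstack([pilot0, pilot1])          # genes x (n0p + n1p)
    # Step 2: adjustment factor f = f1 * f2 (assumed forms, see lead-in)
    f1 = (stats.t.ppf(1 - alpha / 2, 2 * (n - 1))
          / stats.t.ppf(1 - alpha / 2, n0p + n1p - 2))
    f2 = math.sqrt((n0p + n1p) / (n0p + n1p - 2))
    f = f1 * f2
    shift = delta0 * math.sqrt(n / 2.0)           # assumed shift for DE genes
    de_mask = np.zeros(pooled.shape[0], dtype=bool)
    de_mask[de_idx] = True
    null_stats, de_stats = [], []
    for idx in itertools.combinations(range(n0p + n1p), n0p):  # Steps 3-7
        g0 = pooled[:, list(idx)]
        g1 = np.delete(pooled, list(idx), axis=1)
        t_b = stats.ttest_ind(g1, g0, axis=1).statistic
        s_b = f * t_b
        s_b[de_mask] += shift                     # Step 5
        null_stats.append(s_b[~de_mask])          # Step 6
        de_stats.append(s_b[de_mask])
    null = np.concatenate(null_stats)             # Step 8: pooled null
    lo, hi = np.quantile(null, [alpha / 2, 1 - alpha / 2])
    u = [np.sum((s < lo) | (s > hi)) for s in de_stats]  # Step 9
    return np.quantile(u, 0.05)                   # Step 10

rng = np.random.default_rng(0)
pilot0 = rng.normal(size=(200, 4))                # toy pilot data, 4 per group
pilot1 = rng.normal(size=(200, 4))
u_star = u_star_for_n(pilot0, pilot1, n=13, de_idx=range(10),
                      delta0=2.0, alpha=0.00249)
# Step 11 would compare u_star with m1 * lam0 and increase n until it passes.
```

The outer loop over candidate n (Step 11) is omitted for brevity; it simply calls this function with increasing n until u* ≥ m_{1}λ_{0}.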
Results
Two simulation analyses were conducted to evaluate the two formulations of sample size estimation described above. The first analysis evaluated the two formulations under the independent, constant effect size model; the theoretical results are shown in Table 2, and the simulation provides an empirical validation. The second analysis evaluated four methods under a correlated model: 1) the univariate method (e.g., Jung [4]); 2) the Shao and Tseng [8] model-free method; 3) the Tibshirani [10] permutation method; and 4) the proposed permutation method. The univariate method is designed for the average formulation, while the other three methods target the 95% probability formulation with the use of a pilot dataset. The same model parameters as in Table 2 were used in the evaluation, and the Type I error rate was based on setting the FDR at q* = 0.05. Note that there are many multiple testing FDR procedures with different strategies; for example, Storey's FDR procedure [13] involves estimating the number of nondifferentially expressed genes m_{0}. To minimize the confounding effect of the variation in estimating m_{0}, we simply used the true m_{0} in our simulation analysis; using the true m_{0} provides a direct validation of the proposed procedure with control of the FDR. Sample sizes were calculated for the given parameter values, and the empirical estimates of the FDR, the average sensitivity λ, and the probability ϕ_{λ0} were then calculated and evaluated.
The purpose of the first simulation study was to validate the theoretical results of Table 2 (the sample size, sensitivity, and 95% probability for the two methods) under the independent model. We generated 1,000 simulation samples with per-group sample sizes taken from Column 3 or Column 6 of Table 2. For the null model, m_{0} = m × (1 − π_{1}) genes were generated from independent standard normal N(0,1) distributions; for the alternative model, m_{1} = m × π_{1} genes were generated from independent normal N(δ_{0}, 1) distributions. For each simulation sample, the t-statistics and the corresponding p-values were computed, and the numbers of false positives and true positives at the FDR level q* = 0.05 were recorded. The empirical estimates of the FDR, average sensitivity λ, and probability ϕ_{λ0} were then calculated; the estimate of ϕ_{λ0} was the proportion of the 1,000 simulations in which the number of true positives was at least m_{1} × λ_{0}.
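A single replicate of this simulation can be sketched as below. The helper name and the exact step-up threshold (p_(r) ≤ r·q*/m_{0}, i.e., an FDR cut that plugs in the known m_{0} as the text describes) are my own reading, not the paper's code:

```python
import numpy as np
from scipy import stats

def simulate_once(rng, m, pi1, delta0, n, q_star):
    """One replicate: independent normal genes, two-sample t-tests,
    FDR cut using the true m0. Returns (realized FDR, sensitivity)."""
    m1 = int(m * pi1)
    x = rng.normal(size=(m, n))                    # control group
    y = rng.normal(size=(m, n))
    y[:m1] += delta0                               # first m1 genes are DE
    p = stats.ttest_ind(y, x, axis=1).pvalue
    order = np.argsort(p)
    # step-up cut p_(r) <= r * q* / m0, with m0 known (assumed form)
    ok = np.nonzero(p[order] <= q_star * np.arange(1, m + 1) / (m - m1))[0]
    r = ok.max() + 1 if ok.size else 0
    rejected = np.zeros(m, dtype=bool)
    rejected[order[:r]] = True
    V = rejected[m1:].sum()                        # false positives
    U = rejected[:m1].sum()                        # true positives
    return V / max(r, 1), U / m1

rng = np.random.default_rng(1)
results = [simulate_once(rng, m=500, pi1=0.1, delta0=2.0, n=13, q_star=0.05)
           for _ in range(50)]
fdr = np.mean([f for f, _ in results])
sens = np.mean([s for _, s in results])
```

Averaging the per-replicate FDR and sensitivity over replicates gives the empirical estimates reported in Table 3; the fraction of replicates with U ≥ m_{1}λ_{0} estimates ϕ_{λ0}.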
Table 3 shows the empirical results for the two methods. The empirical FDR is close to the nominal level for both approaches. For the univariate method, the empirical average sensitivities λ are all at or above the desired levels, except for π_{1} = 0.05 and λ_{0} = 70%, where the probability ϕ_{λ0} is also less than 50%. For the binomial method, the empirical average sensitivities are all greater than the specified levels, and most of the probabilities ϕ_{λ0} exceed 95%, the exceptions being (π_{1} = 0.05, λ_{0} = 60%), (π_{1} = 0.10, λ_{0} = 90%), and (π_{1} = 0.20, λ_{0} = 70%). The empirical results of Table 3 are generally consistent with the theoretical values in Table 2. That is, the sample size calculated using the univariate method generally achieves the specified sensitivity on average; however, the probability of achieving the specified sensitivity can be lower than 50%.
Table 3. The validation of the theoretical results from Table 2.^{a}
For comparison purposes, the mean and standard deviation of the sample size estimates from the proposed permutation method using a pilot dataset of group size 4 are also provided in the last column of Table 3. The pilot data were randomly generated from the normal distribution in each simulation. The proposed method tends to overestimate the needed sample size by up to five arrays.
The second analysis evaluated the four methods, the univariate (Jung [4]), Shao and Tseng [8], Tibshirani [10], and proposed permutation methods, under a correlated model using the well-known colon cancer dataset [14], which consists of 22 normal and 40 colon tumor tissue samples with 2,000 genes. The analysis consisted of two steps. The first step evaluated the sample size estimates obtained by the three 95% probability formulation methods based on pilot datasets of 4 and 6 samples per group. The second step compared the sample sizes estimated by the proposed method in the first step with the estimates from the univariate method.
In the first step, 4 samples were randomly selected without replacement from each group of the colon dataset to form a pilot dataset. The algorithm described above was used to estimate the sample size for the proposed method and the Tibshirani [10] method. For example, for π_{1} = 5%, q* = 0.05, and λ_{0} = 90%, the initial sample size was n = 13 (Column 6 of Table 2) and α = 0.00249. A constant effect size δ_{i} = δ_{0} = 2 was considered. For the proposed permutation method, the initial adjustment factors were f_{1} = 0.6777 and f_{2} = (8/6)^{1/2} = 1.155, while no adjustment was applied in the Tibshirani [10] method. For the Shao and Tseng [8] model-free method, the correlation matrix of the t-statistics was estimated using all possible permutation datasets from the pilot dataset. However, the Shao and Tseng [8] model-free method was found to have computational difficulty in most cases; details are given later.
The procedure was repeated 1,000 times, selecting different pilot datasets of 4 samples per group to account for the variation among pilot datasets. The means and standard deviations of the sample size estimates from the Tibshirani [10] and proposed methods were calculated and are shown in Columns 4 and 5 of Table 4. The univariate method is considered the standard method, and its estimates are listed in Column 3. The sample size estimated by either the Tibshirani [10] or the proposed method is greater than that from the univariate method in every case; the difference between the univariate and proposed methods is less than 5 arrays per group in each case. The mean and standard deviation estimates from the Tibshirani [10] method are much larger than those from the proposed method, and the difference increases as λ_{0} increases or π_{1} decreases. Note that, under the independent model, the sample size and standard deviation estimates from the proposed method are smaller (Table 3).
The procedure was repeated with 6 samples per group for the initial pilot dataset; the estimates are shown in Columns 6 and 7. The proposed procedure gives consistent results for the two pilot sample sizes; the results from the Tibshirani [10] method, however, differ substantially. The Tibshirani approach does not adequately take the pilot sample size into consideration: when the pilot sample size is much smaller than the needed sample size, its overestimation of the sample size becomes severe. As the pilot study size approaches the needed sample size, the Tibshirani [10] and proposed methods give similar results.
In our simulations, Algorithm B of Shao and Tseng [8] could not produce solutions for pilot data of group size 4 in any of the 1,000 replications. When the group size increased to 6, the algorithm worked only for π_{1} = 20% with λ_{0} = 60% and 70%; the means (standard deviations) of the sample size estimates were 6.4 (0.012) and 6.8 (0.012), respectively. These values appear too small to be correct, so the method does not seem applicable for small pilot sample sizes. Using the entire colon cancer dataset [14] of 62 samples, the sample size estimates are shown in Column 8; these estimates generally require one or two more arrays than the univariate method, but fewer than the proposed method. Since the Tibshirani [10] method gave larger estimates and the Shao and Tseng [8] method smaller estimates than the proposed method, only the univariate method and the proposed method were evaluated in the second step of the analysis.
The comparison of the performance of the two methods parallels that shown in Table 3, except that the data were sampled without replacement from the colon cancer dataset rather than generated from normal random variables under the independent model. The sample sizes were based on Column 3 or Column 4 of Table 4. The data were randomly permuted to remove the difference between the two groups, and a common effect size δ_{0} = 2 was added to a set of m_{1} randomly selected genes in the tumor group. For each resampled dataset, a permutation test with 10,000 repetitions was used to generate p-values, and the numbers of false positives and true positives were computed at q* = 0.05. The empirical estimates of the FDR, λ, and ϕ_{λ0} were computed, and the entire procedure was repeated 1,000 times.
Table 5 shows the empirical estimates of q*, λ, and ϕ_{λ0} for the two methods. Both methods control the FDR well and achieve the desired sensitivity on average, so both can be expected to perform satisfactorily in practice. However, for the univariate method, the empirical ϕ_{λ0} estimates are between 55% and 75%, except one at 80%; with that method, one runs the risk that the sensitivity falls below the specified level.
Table 5. Empirical estimates of FDR, average sensitivity λ, and probability ϕ_{λ0 }from the univariate method and the proposed method based on the results of Table 4.
The effect size δ_{0} = 2 (Table 4) was used to validate the proposed permutation method under a correlated model using the colon cancer dataset [14]. In practice, the effect sizes can be much smaller. We therefore calculated the sample sizes using the same parameters as Table 4 with an effect size of δ_{0} = 1 for the two pilot sample sizes 4 and 6; the estimates are shown in Table 6. The proposed procedure gives similar results for the two pilot sample sizes, consistent with the results for δ_{0} = 2 in Table 4. The difference between the univariate method and the proposed method is about 15 arrays per group. The Tibshirani [10] method would require up to 67 and 35 extra arrays per group for pilot sizes of 4 and 6, respectively. The Shao and Tseng [8] method could produce estimates only when the pilot study size was near or larger than the needed sample size.
Discussion and Conclusions
Determination of the needed sample size before conducting a microarray experiment is an important issue. The sample size problem is commonly formulated as the number of arrays needed to achieve a specified sensitivity λ on average. This paper demonstrates that a sample size calculated under this formulation may attain the specified sensitivity on average, yet the probability ϕ_{λ} of actually achieving the specified sensitivity can be low (less than 50%) because of the variance of the sensitivity distribution. Furthermore, this paper shows that under this formulation the sample size can be calculated by a univariate method, regardless of the correlation structure among the gene expression levels; procedures that account for correlations, such as Li et al. [6], are not needed (Table 5). These findings agree with the results reported by Jung [4] and Dobbin and Simon [11]; this paper additionally provides a theoretical interpretation for the approach.
Under the confidence probability formulation, consideration of the dependency among gene expressions is necessary in estimating the sample size, since the percentile of the sensitivity distribution depends not only on the effect sizes of individual genes but also on their correlations. We propose a permutation method based on that of Tibshirani [10], but with the inclusion of an adjustment factor and the requirement to achieve a specified sensitivity with 95% probability; the adjustment factor provides more accurate estimates of the power and sample size. Shao and Tseng [8] also formulated the needed sample size in terms of confidence probability. Under the normality assumption, they proposed algorithms for mild correlations among genes using a preliminary dataset and showed that their approach worked well for an example dataset of 72 samples. However, using their Algorithm B in our simulations on the colon dataset (whose average correlation is about 0.4), the estimated variance of the true positives can be negative when the preliminary sample size is 4 or 6; their procedure does not perform well for a pilot dataset with small sample size. In practice, sample sizes of pilot data are often small. Our simulation studies show that our procedure works well with 4 to 6 samples per group. However, our procedure seems to overestimate the needed sample size when the correlations are very small, especially with small effect sizes; in this situation, our simulation results indicate that the factor f_{2} may not be necessary (data not shown).
The choice of the multiple testing procedure used for data analysis can affect the error rate and power in the sample size estimation. Using a conservative procedure in the data analysis may decrease the "power" of the study, and the calculated sample size may then yield sensitivity below the specified level. For example, the calculation in this paper is based on the true number of nondifferentially expressed genes m_{0}; if the data analysis uses an overestimate of m_{0}, as in the Benjamini and Hochberg procedure [15], then the power may fall below the desired level. An alternative is to use the total number of genes m, instead of the number of nondifferentially expressed genes m_{0}, to estimate the sample size. This procedure is expected to generate a sample size that achieves the desired sensitivity with the specified probability, regardless of which multiple testing procedure is used for the data analysis.
Authors' contributions
JJC conceived the study and wrote the manuscript. JJC and WJL developed the methodology and proved theoretical results. WJL implemented the algorithms. HMH improved the concepts of the average and 95% confidence probability formulations. JJC, HMH and WJL performed the analysis. All authors read and approved the final manuscript.
Acknowledgements
Huey-Miin Hsueh's research was done while visiting the NCTR. The authors are very grateful to the reviewers for many helpful comments and suggestions for revising and improving this paper. The views presented in this paper are those of the authors and do not necessarily represent those of the U.S. Food and Drug Administration.
References

1. Yang MCK, Yang JJ, McIndoe RA, et al.: Microarray experimental design: power and sample size considerations. Physiol Genomics 2003, 16:24-28.
2. Wang SJ, Chen JJ: Sample size for identifying differentially expressed genes in microarray experiments. J Comput Biol 2004, 11:714-726.
3. Jung SH, Bang H, Young S: Sample size calculation for multiple testing in microarray data analysis. Biostatistics 2005, 6:157-169.
4. Jung SH: Sample size for FDR-control in microarray data analysis. Bioinformatics 2005, 21:3097-3104.
5. Pounds S, Cheng C: Sample size determination for the false discovery rate. Bioinformatics 2005, 21:4263-4267.
6. Li SS, Bigler J, Lampe JW, Potter JD, Feng Z: FDR-controlling testing procedures and sample size determination for microarrays. Statist Med 2005, 24:2267-2280.
7. Tsai CA, Wang SJ, Chen DT, et al.: Sample size for gene expression microarray experiments. Bioinformatics 2005, 21:1502-1508.
8. Shao Y, Tseng CH: Sample size calculation with dependence adjustment for FDR-control in microarray studies. Statist Med 2007, 26:4219-4237.
9. Lee ML, Whitmore G: Power and sample size for DNA microarray studies. Statist Med 2002, 21:354-370.
10. Tibshirani R: A simple method for assessing sample sizes in microarray experiments. BMC Bioinformatics 2006, 7:106.
11. Dobbin K, Simon R: Sample size determination in microarray experiments for class comparison and prognostic classification. Biostatistics 2005, 6:27-38.
12. Hedges LV, Olkin I: Statistical Methods for Meta-Analysis. Academic Press; 1985.
13. Storey JD: A direct approach to false discovery rates. Journal of the Royal Statistical Society, Series B 2002, 64:479-498.
14. Alon U, Barkai N, Notterman DA, et al.: Broad patterns of gene expression revealed by clustering analysis of tumor and normal colon tissues probed by oligonucleotide arrays. Proc Natl Acad Sci 1999, 96:6745-6750.
15. Benjamini Y, Hochberg Y: Controlling the false discovery rate: a practical and powerful approach to multiple testing. Journal of the Royal Statistical Society, Series B 1995, 57:289-300.