Abstract
Background
In microarray experiments with small sample sizes, it is a challenge to estimate p-values accurately and to choose appropriate cutoff p-values for gene selection. Although permutation-based methods have proved to have greater sensitivity and specificity than the regular t-test, their p-values are highly discrete due to the limited number of permutations available with very small sample sizes. Furthermore, estimated permutation-based p-values for true nulls are highly correlated and not uniformly distributed between zero and one, making it difficult to use current false discovery rate (FDR)-controlling methods.
Results
We propose a model-based information sharing method (MBIS) that, after an appropriate data transformation, utilizes information shared among genes. We use a normal distribution to model the mean differences of true nulls across two experimental conditions. The parameters of the model are then estimated using all data in hand. Based on this model, p-values, which are uniformly distributed for true nulls, are calculated. Then, since FDR-controlling methods are generally not well suited to microarray data with very small sample sizes, we select genes for a given cutoff p-value and then estimate the false discovery rate.
Conclusion
Simulation studies and analysis using real microarray data show that the proposed method, MBIS, is more powerful and reliable than current methods. It has wide application to a variety of situations.
Background
Microarray technology has been successfully used by biological and biomedical researchers to investigate gene expression profiles at the genome-wide level. Usually, the sample sizes are small compared to the number of genes to be investigated, making estimation of the standard error for statistical tests very inaccurate. Furthermore, thousands of hypotheses (in general, one per gene or gene set) are tested at once, which greatly increases the probability of Type I error. This problem is also called the "multiple comparison problem" in hypothesis testing. A very small cutoff p-value is then needed to avoid picking a large number of false positives (FP); however, the price of that decision is failing to find many true positives whose p-values are larger than the cutoff value. When the sample sizes are extremely small, the problem worsens because as the sample size decreases, so do the detection power and the ability to estimate p-values.
When the sample sizes are large enough, even if the data across the two conditions are not normally distributed, we can still use a two-sample t-test to estimate the p-value for each gene. In practice, to avoid the normality assumption, we may also choose nonparametric (rank-based) or permutation-based procedures. However, when sample sizes are very small, the t-test is not reliable due to poor estimation of the variances; many genes will have small p-values only because their estimated variances are too small. Furthermore, the t-test treats each gene independently and does not utilize information shared among genes. To borrow information from other genes, modified t-test methods have been proposed [1,2]. The modified t-test statistic is:

t_i = d_i / (se_i + s_0),   (1)

where d_i is the difference of means under the two conditions for gene i, se_i is the estimated standard error for d_i, and s_0 is a constant used to keep regular t-statistics from becoming too large in absolute value because of very small estimated standard errors.
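As a concrete illustration, the statistic in (1) can be computed as below. This is a sketch, not the paper's software; the expression values and the choice of s_0 are hypothetical:

```python
import math

def modified_t(group1, group2, s0):
    """Modified t-statistic d_i / (se_i + s0) for one gene (two-sample, pooled variance)."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    # pooled variance estimate with n1 + n2 - 2 degrees of freedom
    ss = sum((x - m1) ** 2 for x in group1) + sum((x - m2) ** 2 for x in group2)
    var = ss / (n1 + n2 - 2)
    se = math.sqrt(var * (1 / n1 + 1 / n2))
    d = m1 - m2
    # s0 damps the statistic when se happens to be very small
    return d / (se + s0)

# hypothetical log-scale expression values for one gene, three replicates per condition
t_plain = modified_t([7.1, 7.3, 7.2], [5.9, 6.1, 6.0], s0=0.0)  # regular t-statistic
t_mod = modified_t([7.1, 7.3, 7.2], [5.9, 6.1, 6.0], s0=0.5)    # shrunken version
```

With s_0 > 0 the same mean difference yields a smaller absolute statistic, which is exactly the intended damping effect.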
When we use the test statistics in (1), we lose information about the distribution of true nulls, since we do not know the distribution of (1). To overcome this problem, permutation-based procedures have been proposed [2]. One extensively used method in microarray data analysis is SAM, for "Significance Analysis of Microarrays" [2]. SAM uses the test statistics in (1) and then permutes sample labels to estimate the p-value for each gene.
The absolute values of the statistics in (1) are usually smaller than those of regular t-statistics. When sample sizes are extremely small, the total number of distinct permutations is limited; therefore, permutation-based methods such as SAM will have larger p-values than those from the regular t-test, especially for differentially expressed (DE) genes. For example, in experiments with only three replicates for each of two conditions (a typical scenario), there exist only ten distinct permutations. The coarseness of the resulting p-values creates a problem for finding a reasonable cutoff p-value.
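The count of ten can be verified directly: with three replicates per condition there are C(6,3) = 20 relabelings, and each unordered split is counted twice because swapping the two group labels leaves the absolute statistic unchanged. A minimal check:

```python
from itertools import combinations

samples = range(6)  # six arrays: three per condition
# every way of assigning three of the six arrays to "condition 1"
relabelings = list(combinations(samples, 3))
# a label swap gives the same |statistic|, so keep one split per complementary pair
splits = {frozenset([g, tuple(sorted(set(samples) - set(g)))]) for g in relabelings}
```

`len(splits)` gives the number of distinct permutations, so permutation p-values in this design can only take multiples of 1/10.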
To select DE genes, we use a cutoff p-value and pick those genes whose p-values are smaller than the given cutoff value. Implicit in this process, and in any gene selection, is the trade-off between false positives (Type I error) and false negatives (Type II error). If we want to control the family-wise error rate (FWER), we need a very small cutoff p-value, which will fail to find many true positives. Some researchers have proposed, instead of controlling FWER, controlling the false discovery rate (FDR): allowing some FPs in the set of selected genes but controlling the mean ratio of the number of FPs to the total number of declared DE genes [3-5]. To control FDR, we need to estimate the number and the distribution of true nulls, which is quite difficult. Since it is difficult to separate non-DE genes from DE genes when doing permutations, the resulting estimates of the number and the distribution of the p-values for true nulls may not be accurate. Although several improvements to SAM have been proposed [6-8], Qiu et al. showed that permutation-based methods may have large variance and, therefore, are not reliable [9]. Yang and Churchill have noted the problems of permutation-based methods when applied to small microarray experiments [8].
As part of SAM, Storey's FDR-controlling method has been proven to be more accurate than Benjamini and Hochberg's procedure and has been used extensively in microarray data analysis [4]. Storey and Tibshirani defined a quantity called the q-value. Similar to the p-value, "a q-value threshold can be phrased in practical terms as the proportion of significant features that turn out to be false leads" [5]. Its R package, "qvalue," is publicly available [10]. "qvalue" first estimates the q-value for each p-value (gene) based on all p-values and then calculates the cutoff p-value for a given cutoff q-value. Although the authors claimed that "qvalue" usually controls the FDR conservatively, in that its true false discovery rate is smaller than the given cutoff q-value [11], Jung and Jang found that it can also be anti-conservative for small cutoff q-values [12]. In some cases, when the given cutoff q-values are small, "qvalue" may select very few or no DE genes.
In this paper, we show that when sample sizes are extremely small, the t-test has poor performance in terms of sensitivity and specificity, and SAM (and "qvalue") may not be applicable due to the difficulty of controlling FDR for GeneChip array data. To circumvent those problems, we propose a new model-based method that we call the model-based information sharing method (MBIS). To evaluate the performance of our new method, we compare it with others using both simulated data and real data.
Method
Fold change, equal variance, and data transformation
The ratio of the expression levels across two conditions is called the fold change (FC); it was used in early comparative experiments [13,14]. This criterion is debatable, since the choice of cutoff FC is arbitrary and depends on the decision-makers. Furthermore, the FC method does not take into account the variability of gene expression measurements, or, even worse, it assumes that the variability of all expression measurements is the same, which is likely to be false for most gene expression experiments. However, FC criteria have their own advantages. First, they are biologically meaningful and easily interpreted. Second, and more importantly, many studies have shown that FC-based methods, if used appropriately, outperform other methods [15-19].
One way to obtain equal variance from gene to gene is to transform the data, usually with a logarithmic transformation. After this transformation, an FC (on the log scale) can be calculated from the difference of means across the two conditions. However, different data sets may require different variance-stabilization transformations. Several variance-stabilization and normalization transformation methods, which aim to make the transformed expression values equal-variance and normally distributed for each gene, have been proposed [19-23].
Model-based information sharing (MBIS)
MBIS assumes that an appropriate data transformation is available and has been applied to the raw gene expression data, and that this transformation has stabilized the variance. Therefore, after transformation, the variance for each gene is a constant, denoted by s^2. If we can estimate s^2 from the data, then we can easily calculate a p-value for each gene.
Estimation of s^2
Suppose there are n_1 and n_2 replicates for conditions one and two, respectively, and G genes to be tested. Under the assumptions of normality and equal variance, the estimated variance ŝ_i^2 from each individual gene is an unbiased estimate of s^2, and (n_1 + n_2 - 2)ŝ_i^2/s^2 has a Chi-square distribution with n_1 + n_2 - 2 degrees of freedom. Therefore the average of the estimated variances from all genes is also an unbiased estimate of s^2:

ŝ^2 = (1/G) Σ_{i=1,...,G} ŝ_i^2,   (2)

where ŝ_i^2 is the estimated variance from individual gene i and G is the number of genes. Then we use the square root of ŝ^2, ŝ, as the estimated standard deviation for each gene. From the equal-variance assumption, we can use a normal distribution to approximate the mean difference d_i of non-DE genes:

d_i ~ N(0, s^2 (1/n_1 + 1/n_2)).
Based on this normal distribution, we calculate the p-value for gene i:

p_i = 2 [1 - Φ( |d_i| / (ŝ sqrt(1/n_1 + 1/n_2)) )],

where d_i is the difference of the means for gene i across the two conditions and Φ(·) is the cumulative distribution function (CDF) of the standard normal distribution.
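A minimal sketch of the MBIS p-value calculation, assuming the expression matrix has already been variance-stabilized; the data below are simulated null genes rather than real measurements, and the function name is ours:

```python
import math
import random

def mbis_pvalues(cond1, cond2):
    """cond1, cond2: per-gene lists of replicate values. Returns MBIS two-sided p-values."""
    n1, n2 = len(cond1[0]), len(cond2[0])
    G = len(cond1)
    total = 0.0
    diffs = []
    for g in range(G):
        m1 = sum(cond1[g]) / n1
        m2 = sum(cond2[g]) / n2
        ss = sum((x - m1) ** 2 for x in cond1[g]) + sum((x - m2) ** 2 for x in cond2[g])
        total += ss / (n1 + n2 - 2)   # per-gene unbiased variance estimate
        diffs.append(m1 - m2)
    # pooled estimate: average the per-gene variance estimates (equation (2))
    s_hat = math.sqrt(total / G)
    se = s_hat * math.sqrt(1 / n1 + 1 / n2)
    # two-sided p-value from the normal approximation; Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    return [2 * (1 - 0.5 * (1 + math.erf(abs(d) / se / math.sqrt(2)))) for d in diffs]

random.seed(1)
# 200 simulated null genes, three replicates per condition, common variance 1
data1 = [[random.gauss(0, 1) for _ in range(3)] for _ in range(200)]
data2 = [[random.gauss(0, 1) for _ in range(3)] for _ in range(200)]
pvals = mbis_pvalues(data1, data2)
```

For all-null data such as this, the resulting p-values should be roughly uniform on (0, 1), which is the property the method relies on.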
Estimation of the total number of non-DE genes, G_0
For a given value μ (0 < μ < 1), we count the number, N_μ, of genes with p-values greater than or equal to μ. Then an estimate of G_0 is Ĝ_0 = N_μ/(1 - μ). To reduce the influence of DE genes, which have relatively small p-values, a relatively large μ is preferable. We can also use a vector of μ's, calculate the corresponding estimates Ĝ_0, and take their (weighted) mean as the final estimate of G_0.
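The estimate of G_0 can be sketched as follows; the μ grid and the plain (unweighted) average are illustrative choices, not the paper's:

```python
def estimate_g0(pvals, mus=(0.5, 0.6, 0.7, 0.8)):
    """Estimate the number of true nulls: for each mu, count p-values >= mu
    and scale by 1/(1 - mu); return the plain average over the mu grid."""
    estimates = []
    for mu in mus:
        n_mu = sum(1 for p in pvals if p >= mu)
        estimates.append(n_mu / (1 - mu))
    return sum(estimates) / len(estimates)

# all-null example: 1000 p-values spread evenly over (0, 1)
pvals = [(i + 0.5) / 1000 for i in range(1000)]
g0 = estimate_g0(pvals)
```

For exactly uniform p-values every μ gives the same answer, 1000; with DE genes mixed in, small p-values concentrate below μ and the estimate stays close to the null count.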
Gene selection and estimation of false positives and FDR
For a given cutoff p-value, p_0, we pick those genes with p-values smaller than p_0 as DE genes. Suppose S genes are selected. Then we can estimate the number of false positives as Ĝ_0·p_0 and the false discovery rate as Ĝ_0·p_0/S.
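Combining the pieces, gene selection with the plug-in FP and FDR estimates might look like the sketch below; the function name and toy p-values are ours, not from the paper's software:

```python
def select_genes(pvals, p0, g0_hat):
    """Select genes with p < p0; return (selected indices, estimated FP, estimated FDR)."""
    selected = [i for i, p in enumerate(pvals) if p < p0]
    fp_hat = g0_hat * p0  # expected count of true nulls falling below p0
    fdr_hat = fp_hat / len(selected) if selected else 0.0
    return selected, fp_hat, fdr_hat

# toy data: 5 strong signals plus 95 evenly spread null p-values
pvals = [1e-6] * 5 + [(i + 0.5) / 95 for i in range(95)]
selected, fp_hat, fdr_hat = select_genes(pvals, p0=0.01, g0_hat=95)
```

Here the cutoff 0.01 captures the 5 signals plus one null, and the plug-in estimate 95 × 0.01 = 0.95 expected false positives gives an estimated FDR of 0.95/6.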
SAM, t-test and q-value
For the SAM method, we use the R package samr [10] and choose different values for s0.perc (the percentile of the estimated se's): -1 (t-test only, i.e. s_0 = 0 in (1)), 20, 40, 60, 80 and 100. SAM calculates p-values by permutation. For the t-test method, we calculate p-values from the regular t-test statistics (i.e. s_0 = 0 in (1)) without permutation. We then use the calculated p-values for each method as the input to the R package "qvalue" and obtain the selected DE genes for different preset q-values.
Simulation design
To restrict ourselves to small experiments, we assume the sample sizes for both conditions are 3, 5 or 8. We simulate 10,000 genes with normal distributions for two conditions. For non-DE genes, we assume the mean difference is equal to 0; for DE genes, the absolute mean difference is uniformly distributed, with three ranges representing different degrees of differential expression: U(1,3), low; U(3,6), middle; and U(6,9), high. We assume the standard deviations are uniformly distributed as U(1,b), where b is greater than or equal to one. In the ideal situation, i.e. equal variance, b = 1. However, even after trying several variance-stabilization transformations, this assumption may sometimes be too strong for real data, and we therefore choose different b's in our simulations: b = 1, 1.5 and 2. In other words, we simulate data with equal or near-equal variance. The proportion of DE genes among all genes may also affect the gene selection results; we therefore choose three levels of proportions: 0.1, 0.3 and 0.5 (i.e. 1000, 3000 and 5000 DE genes, respectively). The selected genes output by "qvalue" for each method are compared at different preset cutoff q-values: 0.05, 0.10, 0.15, 0.20 and 0.25.
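One of the simulation configurations described above might be generated as in this sketch (the paper's actual simulation code is not shown; names, seed, and the reduced gene count are ours):

```python
import random

def simulate(G=10000, n_de=1000, reps=3, b=1.5, diff=(3, 6), seed=0):
    """Generate (data1, data2, is_de): two replicate matrices and DE indicators."""
    rng = random.Random(seed)
    data1, data2, is_de = [], [], []
    for g in range(G):
        sd = rng.uniform(1, b)  # near-equal variance: sd drawn from U(1, b)
        de = g < n_de
        # DE genes get an absolute mean difference drawn from U(diff[0], diff[1])
        delta = rng.uniform(*diff) * rng.choice([-1, 1]) if de else 0.0
        data1.append([rng.gauss(0.0, sd) for _ in range(reps)])
        data2.append([rng.gauss(delta, sd) for _ in range(reps)])
        is_de.append(de)
    return data1, data2, is_de

data1, data2, is_de = simulate(G=2000, n_de=200)  # smaller than the paper's 10,000 for speed
```

Feeding such a data set through any of the compared methods and tallying TP/FP against `is_de` reproduces the kind of comparison reported in Table 1.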
Real data set
We use Affymetrix GeneChip data sets selected from the GSE2350 series [24], downloaded from the NCBI GEO database [25], to compare our new method with others. We use the first three samples from both the "control" (GSM44051, GSM44052 and GSM44053) and "CD40L treatment" (GSM44057, GSM44058 and GSM44059) groups. For the raw intensity data, we use the "rma" function in the R package "affy" [10] to do background correction, normalization, and summarization [26]. We then apply the different methods to the summarized expression values (already on a log base 2 scale) to estimate the p-values that are the input for "qvalue."
To see which method gives more biologically meaningful results, we use the web-based CLASSIFI algorithm [27-29], which classifies groups of genes defined by gene cluster analysis using a statistical analysis of Gene Ontology (GO) [30] annotation co-clustering. We compare the median p-values of the "topfile" from the CLASSIFI output. In general, the smaller the p-value, the more reasonable the results in terms of GO classification [27].
Results
Simulation results
Figure 1 plots the Receiver Operating Characteristic (ROC) curves for the different methods on our simulated data. The curves for the regular t-test (without permutation) and SAM with s_0 = 0 (TPermut, i.e. the t-test with permutation) are almost identical and perform worst in terms of sensitivity and specificity. Figure 1 clearly shows that the information-sharing methods (SAM with s_0 > 0 and MBIS) perform better. Our new method, MBIS, outperforms all SAM and t-test methods.
Figure 1. ROC Curves. ROC curves of MBIS, SAM with s0.perc = -1, 20, 40, 60, 80 and 100, and the t-test from a simulated data set. There are three replicates for each condition. One thousand out of 10,000 genes are simulated as differentially expressed, with mean differences uniformly distributed between 3 and 6. The simulated standard deviation for each gene is uniformly distributed between 1 and 1.5.
Table 1 gives the numbers of true positives (TP), false positives (FP), and the observed false discovery rates (Obs. FDR), FP/(FP+TP), obtained by "qvalue" with preset q-values 0.05, 0.10, 0.15, 0.20 and 0.25, respectively, from one simulation. In this simulation, there are 1,000 DE genes out of 10,000 genes, three replicates for both conditions, b = 1.5, and absolute mean differences for DE genes uniformly distributed between three and six. For MBIS and the t-test without permutation, we know the distribution of all nulls and, therefore, can estimate the number of false positives (Est. FP) for a given cutoff p-value (calculated from the given q-values by "qvalue"). As the ROC curves show, the regular t-test performs more poorly than MBIS. For example, with preset q-value 0.05, the t-test can select only 244 out of 1000 true positives at the price of 19 false positives, whereas MBIS obtains more than 95% of the true positives with only 94 false positives. Table 1 also shows that the numbers of estimated false positives from the t-test and MBIS are very close to the true numbers of false positives, indicating that the estimated number and distribution of true nulls are accurate for both methods.
Table 1. Simulation results: numbers of TPs and FPs from different methods (nde = 1000, rep = 3, b = 1.5, diff = c(3,6))
For the SAM methods with various s0.perc, when the preset q-value is small, we failed to get any true positives. For example, with a given q-value of 0.1, none of the SAM methods recovered any true positives. Interestingly, when the given q-value is small, the regular t-test performs better than the t-test with permutation in SAM; this implies that permutation-based methods are not appropriate in this situation. Table 1 also indicates that the SAM methods are usually conservative, as the authors of "qvalue" claimed [4]. However, this is not the case for MBIS and the regular t-test. In general, the observed false discovery rates (Obs. FDR in Table 1) from MBIS and the regular t-test are larger than the preset q-values, while the SAM methods are usually too conservative and need large q-values to get a reasonable proportion of true positives. For the different setups in our simulations, we obtained similar comparison results.
Results from real data set
For the real data set, we use MBIS, the regular t-test, and SAM to calculate the p-values for each gene and then use "qvalue" to select DE genes with cutoff q-values equal to 0.01, 0.025, 0.05, 0.075 and 0.1, respectively. Using "qvalue," we calculate the corresponding cutoff p-value from each cutoff q-value for these three methods. Since we know the distributions of the nulls for MBIS and the t-test (the p-values of nulls are uniformly distributed), and we can also estimate the number of true nulls, we can estimate the number of false positives and the false discovery rate for a given cutoff p-value.
Table 2 summarizes the results. For a given cutoff q-value, the cutoff p-values calculated by "qvalue" for our new method and the t-test are usually similar, but both are larger than that for SAM. Our new method usually selects more genes than the t-test, which in turn selects more genes than SAM. In fact, for small cutoff q-values, for example 0.01 and 0.025, SAM fails to select any genes because the minimum of the estimated q-values from "qvalue" for SAM is 0.04, larger than 0.01 and 0.025. However, when the cutoff q-value increases to 0.05, the number of genes selected by SAM jumps to 3695. On the other hand, although the numbers of genes selected by our new method and the t-test increase as the cutoff q-values increase, as expected, the increments are more stable. All these observations are consistent with what we observed in our simulations.
Table 2. Results from real data for given cutoff q-values
The gene sets selected by MBIS and the t-test are usually different. For example, when the cutoff q-value is equal to 0.05, MBIS and the t-test select 5550 and 4748 genes, respectively; the number of genes common to the two methods is 3694. In other words, about 1000 genes selected by the t-test are not in the list from MBIS. However, genes selected by SAM are usually also selected by MBIS.
From the CLASSIFI output with cutoff q-value 0.05, the median p-values (-log10 scale) are 15.30, 7.05 and 6.01 for MBIS, SAM, and the t-test, respectively, indicating that SAM performs better than the t-test but worse than MBIS in terms of co-clustering of genes with similar function according to GO.
Since the cutoff p-values derived from the same cutoff q-value differ across these three methods, we also use the same cutoff p-values for each method and compare the selected genes. Table 3 gives the comparisons for cutoff p-values equal to 0.05, 0.025, 0.01, 0.005, and 0.0025. The corresponding cutoff q-values obtained by "qvalue" are always larger for SAM than for the t-test and MBIS, and the number of genes selected by SAM is much smaller than those selected by the t-test and MBIS for each given cutoff p-value. Again, for a given cutoff p-value, the gene sets selected by the t-test and MBIS differ, while the genes selected by SAM remain almost a subset of those obtained by MBIS. The observed FDRs from the t-test and MBIS are always larger than those estimated by "qvalue," a finding consistent with our observations in the simulations. The median p-values (-log10 scale) from CLASSIFI are 16.32, 8.31, and 6.76 for MBIS, SAM, and the t-test, respectively, when the cutoff p-value is 0.01, indicating that MBIS outperforms SAM, which in turn performs better than the t-test.
Table 3. Results from real data for given cutoff p-values
Discussion
When sample sizes are small, information shared among genes is helpful and should be used. While the t-test treats each gene independently, both SAM and MBIS use information shared among genes. When the equal-variance assumption in MBIS is met, the estimated variance ŝ_i^2 for gene i in the t-test, scaled by (n_1 + n_2 - 2)/s^2, has a Chi-square distribution with n_1 + n_2 - 2 degrees of freedom:

(n_1 + n_2 - 2) ŝ_i^2 / s^2 ~ χ^2(n_1 + n_2 - 2),

and the square of the standard error estimated in the t-test therefore has variance:

Var(ŝ_i^2) = 2 s^4 / (n_1 + n_2 - 2).   (7)

However, the pooled estimate ŝ^2 in (2), scaled by G(n_1 + n_2 - 2)/s^2, has a Chi-square distribution with G(n_1 + n_2 - 2) degrees of freedom, so the square of the standard error estimated for our new method has variance:

Var(ŝ^2) = 2 s^4 / [G(n_1 + n_2 - 2)].   (9)
In a typical microarray experiment, the number of genes, G, is usually between 10K and 50K, indicating that the variance in (9) is very close to 0 and the estimated value in (2) is close to the true value; therefore a normal distribution is appropriate to approximate the mean differences of the true nulls.
Comparing (7) with (9), we can see that, while the regular t-test gives a much larger variance for each estimated variance (each individual t-test retains only n_1 + n_2 - 2 degrees of freedom for variance estimation), MBIS, which utilizes information among genes, has a much more precise estimate of the common variance. Therefore, MBIS always outperforms the t-test.
On the other hand, the Chi-square distribution is right-skewed, implying that its mean is larger than its median. If the scaled estimates ŝ_i^2 follow a Chi-square distribution, they are more likely to fall below the mean (the true value) than above it. In other words, the ŝ_i^2 are more likely to underestimate than overestimate the constant variance. Therefore, many true nulls may have very small p-values from a t-test only because they have small estimated standard errors. This explains why there are so many FPs from the t-test in our simulations, and consequently why the t-test selects so many DE genes different from those of SAM and MBIS in the real data. For the same reason, adding a common constant to each individual se_i in (1) will potentially decrease the bias (for small s0.perc in SAM) and/or decrease the relative differences of the estimated variances for most genes; therefore SAM usually improves the test statistics, although still not as much as MBIS. This explains why SAM performs better than the t-test but worse than MBIS in terms of sensitivity and specificity.
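The underestimation argument is easy to check numerically: with n_1 = n_2 = 3, the per-gene variance estimate has only four degrees of freedom, and in simulation well over half of the estimates fall below the true variance (the seed and trial count here are arbitrary choices of ours):

```python
import random

random.seed(42)
n1 = n2 = 3
true_var = 1.0
below = 0
trials = 20000
for _ in range(trials):
    g1 = [random.gauss(0, 1) for _ in range(n1)]
    g2 = [random.gauss(0, 1) for _ in range(n2)]
    m1 = sum(g1) / n1
    m2 = sum(g2) / n2
    ss = sum((x - m1) ** 2 for x in g1) + sum((x - m2) ** 2 for x in g2)
    var_hat = ss / (n1 + n2 - 2)  # distributed as true_var * chi-square(4) / 4
    if var_hat < true_var:
        below += 1
frac_below = below / trials
```

Theoretically the fraction is P(χ²_4 < 4) ≈ 0.594, so roughly 59% of per-gene variance estimates undershoot the truth, inflating regular t-statistics for those genes.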
When sample sizes are extremely small, as mentioned before, SAM will have relatively larger p-values due to the limited number of permutations available, affecting the estimation of q-values by "qvalue," which does not perform very well in this situation. For a given cutoff q-value, the corresponding cutoff p-value calculated by "qvalue" can be too large (as seen in the results from the t-test and MBIS in the simulation and the real data) or too conservative (as in the results from SAM), a finding consistent with those of Jung and Jang [12].
Another difficulty for "qvalue" is that the number of selected genes can be very sensitive to the cutoff q-value, especially for the very small preset q-values (see Table 2) that are desirable in practice; in this situation, SAM performs even worse than the regular t-test in terms of the proportion of DE genes selected. This raises the question, to which there is no absolute answer, of how to choose an appropriate q-value in practice. Sometimes, even for large q-values (as seen in the results from SAM in Table 1), "qvalue" gives us a small proportion of true positives; on the other hand, we could select a large number of genes with a small q-value (as seen in the results from MBIS and the t-test for the real data in Table 2). We recommend that in this situation (small sample sizes), instead of using the q-value only, one should first choose a cutoff p-value to select DE genes and then estimate the FDR if desired.
Although we assume equal variance in MBIS, we also evaluated this new method under situations where this assumption is violated. By simulation, we have shown that, when the variances of gene expression are nearly constant, MBIS still outperforms both the t-test and SAM, making our method applicable in various situations.
In our experience, variances estimated from raw expression data are highly variable, so data should be transformed before applying MBIS. Several variance-stabilization and normalization transformation procedures, such as the logarithm, the Box-Cox transformation, the generalized logarithm [19], variance stabilization [21] and the data-driven Haar-Fisz transformation for microarrays (DDHFm) [22], are already available. In addition, choosing appropriate preprocessing procedures (background correction, normalization and summarization) is also very important for downstream analyses, including gene selection [16,26,31-34].
Conclusions
For microarray data with extremely small sample sizes, a modified t-test like SAM performs better than the regular t-test in terms of sensitivity and specificity. However, when controlling FDR with small preset q-values, SAM fails to select enough true positives and performs worse than the t-test. To circumvent this problem, we propose a model-based information sharing method (MBIS) that uses information shared among genes. We show, using both simulated and real microarray data, that this new method outperforms the t-test and SAM.
Competing interests
The authors declare that they have no competing interests.
Authors' contributions
ZC devised the basic idea of the new method and drafted the manuscript; QL participated in study design and manuscript preparation; MK participated in the analyses based on CLASSIFI; RHS participated in developing the new algorithm; MM, XH and YD assisted the study and co-wrote the manuscript. All authors read and approved the final manuscript.
Acknowledgements
The authors thank Ms. Linda Harrison and Ms. Kimberly Lawson for their editorial assistance. ZC acknowledges support from the NIH grant (UL1 RR024148) awarded to the University of Texas Health Science Center at Houston.
References

1. Efron B, Tibshirani R, Storey JD, Tusher V: Empirical Bayes analysis of a microarray experiment. J Am Stat Assoc 2001, 96:1151-1160.
2. Tusher VG, Tibshirani R, Chu G: Significance analysis of microarrays applied to the ionizing radiation response. Proc Natl Acad Sci USA 2001, 98(9):5116-5121.
3. Benjamini Y, Hochberg Y: Controlling the false discovery rate: a practical and powerful approach to multiple testing. J R Statist Soc B 1995, 57:289-300.
4. Storey J: A direct approach to false discovery rates. J R Statist Soc B 2002, 64:479-498.
5. Storey JD, Tibshirani R: Statistical significance for genomewide studies. Proc Natl Acad Sci USA 2003, 100(16):9440-9445.
6. Pounds S, Cheng C: Improving false discovery rate estimation. Bioinformatics 2004, 20(11):1737-1745.
7. Wu B: Differential gene expression detection using penalized linear regression models: the improved SAM statistics. Bioinformatics 2005, 21:1565-1571.
8. Yang H, Churchill G: Estimating p-values in small microarray experiments. Bioinformatics 2007, 23(1):38-43.
9. Qiu X, Xiao Y, Gordon A, Yakovlev A: Assessing stability of gene selection in microarray data analysis. BMC Bioinformatics 2006, 7:50.
10. Bioconductor [http://www.bioconductor.org]
11. Storey J, Taylor JE, Siegmund D: Strong control, conservative point estimation and simultaneous conservative consistency of false discovery rates: a unified approach. J R Stat Soc B 2004, 66:187-205.
12. Jung S, Jang W: How accurately can we control the FDR in analyzing microarray data? Bioinformatics 2006, 22:1730-1736.
13. DeRisi JL, Iyer VR, Brown PO: Exploring the metabolic and genetic control of gene expression on a genomic scale. Science 1997, 278(5338):680-686.
14. Schena M, Shalon D, Heller R, Chai A, Brown PO, Davis RW: Parallel human genome analysis: microarray-based expression monitoring of 1000 genes. Proc Natl Acad Sci USA 1996, 93(20):10614-10619.
15. Chen DT, Chen JJ, Soong SJ: Probe rank approaches for gene selection in oligonucleotide arrays with a small number of replicates. Bioinformatics 2005, 21(12):2861-2866.
16. Chen Z, McGee M, Liu Q, Scheuermann RH: A distribution free summarization method for Affymetrix GeneChip arrays. Bioinformatics 2007, 23(3):321-327.
17. Hong F, Breitling R: A comparison of meta-analysis methods for detecting differentially expressed genes in microarray experiments. Bioinformatics 2008, 24(3):374-382.
18. Kim S, Lee J, Sohn I: Comparison of various statistical methods for identifying differential gene expression in replicated microarray data. Stat Methods Med Res 2006, 15:3-20.
19. Zhou L, Rocke DM: An expression index for Affymetrix GeneChips based on the generalized logarithm. Bioinformatics 2005, 21(21):3983-3989.
20. Durbin BP, Hardin JS, Hawkins DM, Rocke DM: A variance-stabilizing transformation for gene-expression microarray data. Bioinformatics 2002, 18(Suppl 1):S105-S110.
21. Huber W, von Heydebreck A, Sultmann H, Poustka A, Vingron M: Variance stabilization applied to microarray data calibration and to the quantification of differential expression. Bioinformatics 2002, 18(Suppl 1):S96-S104.
22. Motakis ES, Nason GP, Fryzlewicz P, Rutter GA: Variance stabilization and normalization for one-color microarray data using a data-driven multiscale approach. Bioinformatics 2006, 22(20):2547-2553.
23. Rocke DM, Durbin B: A model for measurement error for gene expression arrays. J Comput Biol 2001, 8(6):557-569.
24. Basso K, Margolin AA, Stolovitzky G, Klein U, Dalla-Favera R, Califano A: Reverse engineering of regulatory networks in human B cells. Nat Genet 2005, 37(4):382-390.
25. NCBI GEO Database [http://www.ncbi.nih.gov/projects/geo]
26. Bolstad BM, Irizarry RA, Astrand M, Speed TP: A comparison of normalization methods for high density oligonucleotide array data based on variance and bias. Bioinformatics 2003, 19(2):185-193.
27. CLASSIFI [http://pathcuric1.swmed.edu/pathdb/classifi.html]
28. Kong M, Chen Z, Qian Y, Cai J, Lee J, Rab E, McGee M, Scheuermann R: Use of gene ontology as a tool for assessment of analytical algorithms with real data sets: impact of revised Affymetrix CDF annotation. In 7th International Workshop on Data Mining in Bioinformatics, August 12, 2007, San Jose. Edited by Chen JY, Lonardi A, Zaki M. 2007:60-68.
29. Lee JA, Sinkovits RS, Mock D, Rab EL, Cai J, Yang P, Saunders B, Hsueh RC, Choi S, Subramaniam S, Scheuermann RH: Components of the antigen processing and presentation pathway revealed by gene expression microarray analysis following B cell antigen receptor (BCR) stimulation. BMC Bioinformatics 2006, 7:237.
30. The Gene Ontology Consortium: Creating the gene ontology resource: design and implementation. Genome Res 2001, 11(8):1425-1433.
31. Chen Z, McGee M, Liu Q, Kong M, Deng Y, Scheuermann RH: A distribution-free convolution model for background correction of oligonucleotide microarray data. BMC Genomics 2009, 10(Suppl 1):S19.
32. Chen Z, McGee M, Liu Q, Kong YM, Huang X, Yang JY, Scheuermann RH: Identifying differentially expressed genes based on probe level data for GeneChip arrays. Int J Comput Biol Drug Des 2010, 3(3):237-257.
33. Irizarry RA, Hobbs B, Collin F, Beazer-Barclay YD, Antonellis KJ, Scherf U, Speed TP: Exploration, normalization, and summaries of high density oligonucleotide array probe level data. Biostatistics 2003, 4(2):249-264.
34. McGee M, Chen Z: Parameter estimation for the exponential-normal convolution model for background correction of Affymetrix GeneChip data. Stat Appl Genet Mol Biol 2006, 5:Article 24.