Abstract
Background
The within-subject coefficient of variation and intraclass correlation coefficient are commonly used to assess the reliability or reproducibility of interval-scale measurements. Comparison of reproducibility or reliability of measurement devices or methods on the same set of subjects comes down to comparison of dependent reliability or reproducibility parameters.
Methods
In this paper, we develop several procedures for testing the equality of two dependent within-subject coefficients of variation computed from the same sample of subjects, which, to the best of our knowledge, has not yet been dealt with in the statistical literature. The Wald test, the likelihood ratio test, and the score test are developed. A simple regression procedure based on results due to Pitman and Morgan is also constructed. Furthermore, we evaluate the statistical properties of these methods via extensive Monte Carlo simulations. The methodologies are illustrated on two data sets. The first consists of microarray gene expressions measured by two platforms, Affymetrix and Amersham. Because microarray experiments produce expressions for a large number of genes, one would expect the statistical tests to be asymptotically equivalent. To explore the behaviour of the tests in small or moderate sample sizes, we also illustrate the methodologies on data from computer-aided tomographic scans of 50 patients.
Results
It is shown that the relatively simple Wald test (WT) is as powerful as the likelihood ratio test (LRT) and that both have consistently greater power than the score test. The regression test holds its empirical levels and on some occasions is as powerful as the WT and the LRT.
Conclusion
A comparison between the reproducibility of two measuring instruments using the same set of subjects leads naturally to a comparison of two correlated indices. The presented methodology overcomes the difficulty noted by data analysts that dependence between datasets would confound any inferences one could make about the differences in measures of reliability and reproducibility. The statistical tests presented in this paper have good properties in terms of statistical power.
Background
An extensive literature has been developed on procedures for testing the equality of two or more independent coefficients of variation as measures of reproducibility [3-5]. This work shows that likelihood-based methods such as the likelihood ratio (LR) test, the score test, and tests based on the method of generalized statistics developed by Weerahandi [6] provide efficient procedures for comparing coefficients of variation (CV) in univariate normal populations or from independent samples. However, there are situations where comparing CVs from related samples should be considered. A typical situation arises when two instruments are used to measure the same set of subjects, and each subject is repeatedly measured by the same instrument. We shall explain in the Methods section why the within-subject coefficient of variation (WSCV) is a more appropriate measure of reproducibility than the CV. Many authors use the terms reliability and reproducibility interchangeably [7-9]; however, we believe that they are conceptually different. Reliability is the degree of closeness of repeated observations on the same subject under the same experimental conditions, so the instrument is always the same. The intraclass correlation coefficient (ICC) is commonly used as a measure of reliability. It is calculated as the ratio of the between-subject variance to the total variance. Therefore, the larger the heterogeneity among the subjects (with lower or equal random error), the easier it is to differentiate among subjects. In other words, the ICC measures how distinguishable the subjects are. On the other hand, reproducibility determines the degree of closeness of repeated observations made on the same subject either by the same instrument or by different instruments. There is a wide debate among statisticians and psychometricians related to the choice of appropriate measures of reliability and reproducibility. We refer the interested reader to [10,11].
The main focus of our paper is on the reproducibility parameter.
An important application from molecular biology research in which correlated/dependent reproducibility coefficients are compared arises when microarray technologies are compared in terms of the reproducibility of gene expression measurements. DNA microarrays are powerful technologies that make it possible to study genome-wide gene expression and are extensively used in biological research. As the technology has evolved rapidly, a number of different platforms have become available, making it challenging for researchers to know which technology is best suited for their needs. There have been various studies that directly compared the performance of one platform with another in terms of cross-platform comparability and agreement of gene expression results. However, the results of these studies are conflicting: some demonstrate concordance, others discordance, between technologies [12-17]. Thus one needs to take into consideration the accuracy and reproducibility of different types of microarrays when allocating laboratory resources for future experiments. The key factors for selecting an appropriate platform are (1) intra-assay reproducibility and (2) the degree of cross-platform agreement [18]. Concordance among microarray platforms would allow researchers to directly compare their measurements and perform meta-analyses.
Most of the microarray reliability or reproducibility and cross-platform studies use Pearson's correlation as an index of reproducibility or agreement. However, it has long been recognized that procedures such as the paired t-test and Pearson's correlation are not appropriate tools for measuring agreement between measuring devices [19,20]. Rather, indices such as the intraclass correlation coefficient [21] and the within-subject coefficient of variation should be used as measures of reproducibility. It has also been demonstrated that the within-subject coefficient of variation is very useful in assessing instrument reproducibility [8,22].
The main focus of this paper is to develop several procedures for testing the equality of two dependent within-subject coefficients of variation computed from the same sample of subjects, which, to the best of our knowledge, has not been dealt with in the statistical literature, and to evaluate the statistical properties of these methods via extensive Monte Carlo simulation. We propose two approaches: one is likelihood based (the LRT, Wald, and score tests), and the other is a regression-based approach that we call the PM test. After evaluating the statistical properties (power and empirical level of significance) of these tests using Monte Carlo simulation, the methodology is illustrated on data from two biomedical studies.
Methods
Likelihood based methodology
Suppose that we are interested in comparing the reproducibility of two instruments. Let x_{ijl} be the jth measurement on the ith subject by the lth instrument, j = 1,2,..., m_{l}, i = 1,2,..., n, and l = 1, 2. To evaluate the WSCV we consider the one-way random effects model

x_{ijl} = μ_{l} + b_{i} + e_{ijl},    (1)

where μ_{l} is the mean value of measurements made by the lth instrument, b_{i} are independent random subject effects with b_{i} ~ N(0, σ_{bl}^{2}), and e_{ijl} are independent N(0, σ_{l}^{2}). Many authors have used the intraclass correlation coefficient (ICC), ρ_{l} = σ_{bl}^{2}/(σ_{bl}^{2} + σ_{l}^{2}), as a measure of reproducibility/reliability [18,23]. Quan and Shih [8] argued that ρ_{l} is study-population based since it involves the between-subject variation; that is, the more heterogeneity in the population, the larger ρ_{l}. Alternatively, they proposed the within-subject coefficient of variation (WSCV) θ_{l} = σ_{l}/μ_{l} as a measure of reproducibility. It determines the degree of closeness of repeated measurements taken on the same subject either by the same instrument or on different occasions under the same conditions. Clearly, the smaller the WSCV, the better the reproducibility. We distinguish the WSCV from the coefficient of variation CV_{l} = (σ_{bl}^{2} + σ_{l}^{2})^{1/2}/μ_{l}, since CV_{l} involves σ_{bl}^{2} in the numerator and, similar to ρ_{l}, is population based. Therefore, more heterogeneity in the population would result in a large value of CV_{l}. For that reason we shall focus our work on the WSCV rather than the CV. We also note that there is an inverse relationship between the ICC (ρ_{l}) and the corresponding within-subject variance σ_{l}^{2}. Clearly, larger values of the ICC (higher reliability) would be associated with a smaller WSCV (better reproducibility). The focus of this paper is on aspects of statistical inference on the difference between two correlated WSCVs. The inferential procedure depends on the multivariate normality of the measurements and is mainly likelihood based. The following setup facilitates the construction of the likelihood function.
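To make the estimator concrete, the following sketch computes the WSCV from an n × m matrix of replicated measurements, estimating the within-subject variance by the pooled within-subject (ANOVA) mean square and μ by the grand mean. The function name `wscv` and the use of Python/NumPy are our own choices for illustration; the authors' code was written in MATLAB.

```python
import numpy as np

def wscv(x):
    """Estimate the within-subject coefficient of variation theta = sigma/mu
    for an n-by-m array x of m replicate measurements on each of n subjects,
    under the one-way random effects model x_ij = mu + b_i + e_ij.

    sigma^2 (the within-subject variance) is estimated by the pooled
    within-subject mean square; mu by the grand mean."""
    x = np.asarray(x, dtype=float)
    n, m = x.shape
    mu_hat = x.mean()
    # pooled within-subject sum of squares divided by its n(m-1) degrees of freedom
    sigma2_hat = np.sum((x - x.mean(axis=1, keepdims=True)) ** 2) / (n * (m - 1))
    return np.sqrt(sigma2_hat) / mu_hat
```

For example, two subjects measured twice each, `[[9, 11], [19, 21]]`, give a pooled within-subject variance of 2 and a grand mean of 15.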
Let

X_{i} = (x_{i1},..., x_{im_{1}}, x_{i,m_{1}+1},..., x_{i,m_{1}+m_{2}})'

denote the measurements on the i^{th} subject, i = 1,2,...,n, where x_{i1},..., x_{im_{1}} are the m_{1} measurements obtained by the first method (platform), and x_{i,m_{1}+1},..., x_{i,m_{1}+m_{2}} are the m_{2} measurements obtained by the second method (platform). We assume that X_{i} ~ N(μ, Σ), where μ = (μ_{1}1'_{m_{1}}, μ_{2}1'_{m_{2}})' and

Σ = ( σ_{1}^{2}{(1 − ρ_{1})I_{m_{1}} + ρ_{1}J_{m_{1}}}   ρ_{12}σ_{1}σ_{2}J_{m_{1}×m_{2}} ; ρ_{12}σ_{1}σ_{2}J_{m_{2}×m_{1}}   σ_{2}^{2}{(1 − ρ_{2})I_{m_{2}} + ρ_{2}J_{m_{2}}} ).    (2)

In these expressions 1_{k} is a column vector with all k elements equal to 1, I_{k} is the k × k identity matrix, and J_{k} and J_{k×t} are k × k and k × t matrices with all elements equal to 1. Thus the model assumes that the m_{1} observations taken by the first platform have common mean μ_{1}, common variance σ_{1}^{2}, and common intraclass correlation ρ_{1}, whereas the m_{2} measurements taken by the second platform have common mean μ_{2}, common variance σ_{2}^{2}, and common intraclass correlation ρ_{2}. Moreover, ρ_{12} denotes the interclass correlation between any pair of measurements x_{ij} (j = 1,2,..., m_{1}) and x_{i,m_{1}+j'} (j' = 1,2,..., m_{2}), and is also assumed constant across all subjects in the population.
For the l^{th} method, the WSCV, which will be denoted θ_{l} in the remainder of the paper, is defined as

θ_{l} = σ_{l}/μ_{l}.
Our primary aim is to develop and evaluate methods of testing H_{0}:θ_{1 }= θ_{2 }taking into account dependencies induced by a positive value of ρ_{12}. We restrict our evaluation to reproducibility studies having m_{1 }= m_{2 }= m.
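For readers who wish to experiment with the model, the following hedged sketch constructs the block covariance matrix Σ from the verbal description above and draws n subjects from N(μ, Σ). The function names (`make_cov`, `simulate_subjects`) are ours, and any parameter values shown are arbitrary illustrations.

```python
import numpy as np

def make_cov(m, s1, s2, r1, r2, r12):
    """Covariance matrix Sigma for the 2m measurements on one subject:
    compound symmetry within each platform (variance s_l^2, intraclass
    correlation r_l) and constant cross-platform correlation r12."""
    S11 = s1 ** 2 * ((1 - r1) * np.eye(m) + r1 * np.ones((m, m)))
    S22 = s2 ** 2 * ((1 - r2) * np.eye(m) + r2 * np.ones((m, m)))
    S12 = r12 * s1 * s2 * np.ones((m, m))
    return np.block([[S11, S12], [S12.T, S22]])

def simulate_subjects(n, m, mu1, mu2, s1, s2, r1, r2, r12, seed=None):
    """Draw n independent subject vectors X_i ~ N(mu, Sigma)."""
    rng = np.random.default_rng(seed)
    mean = np.concatenate([np.full(m, mu1), np.full(m, mu2)])
    return rng.multivariate_normal(mean, make_cov(m, s1, s2, r1, r2, r12), size=n)
```

Note that Σ is nonsingular only for parameter values satisfying the conditions given in the Wald test subsection below, which is worth checking (e.g. via the eigenvalues of `make_cov`) before simulating.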
Methods for testing the null hypothesis
Wald test (WT)
If X_{1}, X_{2},..., X_{n} is a sample from the above multivariate normal distribution, then the log-likelihood function l, as a function of ψ = (μ_{1}, μ_{2}, σ_{1}^{2}, σ_{2}^{2}, ρ_{1}, ρ_{2}, ρ_{12}), is given by:
where,
u_{l} = 1 + (m − 1)ρ_{l}, l = 1, 2, and
From [24], the conditions {1 + (m − 1)ρ_{1}}{1 + (m − 1)ρ_{2}} > m^{2}ρ_{12}^{2} and −1/(m − 1) < ρ_{l} < 1 must be satisfied for Σ to be nonsingular, i.e. for the data to constitute a sample from a nonsingular multivariate normal distribution.
The summary statistics given in (3) are defined as:
The maximum likelihood estimates (MLE) for μ_{l} and σ_{l}^{2}, l = 1, 2, are available in closed form. Clearly, the MLE of σ_{l}^{2} exists for values of m > 1; therefore we shall assume that m > 1 throughout this paper. From [24], we obtain ρ̂_{1} and ρ̂_{2} by computing Pearson's product-moment correlation over all possible pairs of measurements that can be constructed within platforms 1 and 2 respectively, with ρ̂_{12} similarly obtained by computing this correlation over the nm^{2} cross-platform pairs (x_{ij}, x_{i,m+j'}).
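The pairwise-Pearson estimator of the intraclass correlation described above can be sketched as follows; pooling each within-subject pair in both orders makes the estimate symmetric in the two members of the pair. The cross-platform estimate is analogous, pooling the nm^{2} cross pairs. The function name is ours.

```python
import numpy as np
from itertools import combinations

def pairwise_icc(x):
    """Estimate the intraclass correlation for an n-by-m array x by
    pooling all within-subject pairs (in both orders, so the estimate
    is symmetric) and computing the ordinary Pearson correlation."""
    x = np.asarray(x, dtype=float)
    n, m = x.shape
    left, right = [], []
    for j, k in combinations(range(m), 2):
        left += [x[:, j], x[:, k]]    # pair (j, k) ...
        right += [x[:, k], x[:, j]]   # ... and its mirror (k, j)
    a = np.concatenate(left)
    b = np.concatenate(right)
    return np.corrcoef(a, b)[0, 1]
```

When every subject's replicates are identical, the pooled pairs lie on the identity line and the estimate is exactly 1, which provides a quick sanity check.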
The WT of H_{0}:θ_{1} = θ_{2} requires the evaluation of the variances of θ̂_{l}, l = 1, 2, and of their covariance. To obtain these values we use elements of Fisher's information matrix, along with the delta method [26,27]. On writing:
ψ = (ψ_{1}, ψ_{2})', ψ_{1} = (μ_{1}, μ_{2})', and ψ_{2} = (σ_{1}^{2}, σ_{2}^{2}, ρ_{1}, ρ_{2}, ρ_{12})', the Fisher's information matrix I = −E[∂^{2}l/∂ψ∂ψ'] has the following structure:
This is based on a result from [26] (page 239) indicating that I_{12} = I'_{21} = E(∂^{2}l/∂ψ_{1}∂ψ'_{2}) = 0. Therefore, from the asymptotic theory of maximum likelihood estimation we have:
And the elements of I_{22 }are given in the Appendix.
The elements of I_{22}^{−1} constitute the asymptotic variance-covariance matrix of the maximum likelihood estimators of the covariance parameters. Inverting Fisher's information matrix we get:
Applying the delta method [27], we can show, to the first order of approximation that:
The maximum likelihood estimator of θ_{l} is θ̂_{l} = σ̂_{l}/μ̂_{l}. Again, by application of the delta method, we can show, to the first order of approximation, that:
as was shown by Quan and Shih [8].
Again using the delta method we show approximately that:
From [28] we apply the large sample theory of maximum likelihood to establish that:
Z = (θ̂_{1} − θ̂_{2})/SE(θ̂_{1} − θ̂_{2})

is approximately distributed under H_{0} as a standard normal deviate, where the denominator of Z is the standard error of (θ̂_{1} − θ̂_{2}). Since this standard error contains unknown parameters, its maximum likelihood estimate is obtained by substituting θ̂_{l} for θ_{l}, ρ̂_{l} for ρ_{l}, and ρ̂_{12} for ρ_{12}. Moreover, we may construct an approximate (1 − α)100% confidence interval on (θ_{1} − θ_{2}) given as:

(θ̂_{1} − θ̂_{2}) ± z_{α/2} SE(θ̂_{1} − θ̂_{2}), where z_{α/2} is the (1 − α/2)100% cutoff point of the standard normal distribution.
Likelihood ratio test (LRT)
An LRT of H_{0} : θ_{1} = θ_{2} was developed numerically via the following algorithm:

1. Set μ_{l} = σ_{l}/θ_{l}, l = 1, 2, in Equation (3); thereafter

2. Set θ_{1} = θ_{2} = θ in (3);

3. Minimize −2L with respect to the six remaining parameters (σ_{1}, σ_{2}, ρ_{1}, ρ_{2}, ρ_{12}, θ); and

4. Subtract from this minimum the minimum of −2L as computed over all seven parameters (σ_{1}, σ_{2}, ρ_{1}, ρ_{2}, ρ_{12}, θ_{1}, θ_{2}) in the model.

It then follows from standard likelihood theory that the resulting test statistic is approximately chi-square distributed with 1 degree of freedom under H_{0}.
Score test
One of the advantages of likelihood-based inference procedures is that, in addition to the WT and the LRT, Rao's score test can also be readily developed. The motivation for it is that it can sometimes be easier to maximize the likelihood function under the null hypothesis than under the alternative hypothesis. A standard procedure for performing the score test of H_{0} : θ_{1} = θ_{2} is to set θ_{2} = θ_{1} + Δ, so that the null hypothesis is equivalent to H_{0} : Δ = 0, where Δ is unrestricted. After replacing μ_{l} by σ_{l}/θ_{l}, the log-likelihood function L no longer involves μ_{l} as a separate parameter.
Let L = L(Δ; ψ^{•}) = L(Δ; θ_{1}, σ_{1}, σ_{2}, ρ_{1}, ρ_{2}, ρ_{12}), where ψ^{•} = (θ_{1}, σ_{1}, σ_{2}, ρ_{1}, ρ_{2}, ρ_{12}) denotes the vector of nuisance parameters.
From [28] the score statistic is given by:
where
and A_{1•2} = A_{11} − A_{12}A_{22}^{−1}A_{21}. The matrices on the right-hand side of A_{1•2} are obtained from partitioning the Fisher's information matrix A, with A_{11} the entry corresponding to Δ and A_{22} the block corresponding to ψ^{•}, and with all the matrices on the right-hand side of A_{1•2} evaluated at Δ = 0. When an estimator other than the MLE is used for the nuisance parameters ψ^{•}, it has been shown that, provided the estimator is consistent, the asymptotic distribution of S is chi-square with 1 degree of freedom [29,30].
The score test has been applied in many situations and has been proven to be locally powerful. Unfortunately, the inversion of A_{1•2} is quite complicated and we cannot obtain a simple expression for S that can be easily used. Moreover, we have also found through extensive simulations that while the score test holds its levels of significance, it is less powerful than the LRT and WT across all parameter configurations. We therefore focus our subsequent discussion of power on the LRT and WT.
Regression test
Pitman [1] and Morgan [2] introduced a technique to test the equality of variances of two correlated normally distributed random variables. It is constructed simply as a test for zero correlation between the sums and differences of the paired data. Bradley and Blackwood [31] extended Pitman and Morgan's idea to a regression context that affords a simultaneous test for both the means and the variances. The test is applicable to many paired-data settings, for example, in evaluating the reproducibility of lab test results obtained from two different sources. The test could also be used in repeated measures experiments, such as in comparing the structural effects of two drugs applied to the same set of subjects. Here we generalize the results of Bradley and Blackwood to establish the simultaneous equality of means and variances of two correlated variables, implying the equality of their coefficients of variation.
Let d_{i} and s_{i} denote, respectively, the difference and the sum of the measurement summaries on the i^{th} subject from the two platforms; in the spirit of Pitman and Morgan, d_{i} = x̄_{i1} − x̄_{i2} and s_{i} = x̄_{i1} + x̄_{i2}, where x̄_{il} is the mean of the replicates on subject i by platform l. Direct application of the multivariate normal theory shows that the conditional expectation of d_{i} given s_{i} is linear [32]. That is,

E(d_{i}|s_{i}) = α + βs_{i},    (11)

where

β = Cov(d_{i}, s_{i})/Var(s_{i})    (11.a)

α = E(d_{i}) − βE(s_{i}).    (11.b)

The proof is straightforward and is therefore omitted. It can also be shown, from direct application of the multivariate normal theory, that the conditional expectation (11) does not depend on the parameter ρ_{12}.
From (11.a) and (11.b), it is clear that α = β = 0 if and only if μ_{1} = μ_{2} and σ_{1} = σ_{2} simultaneously. Therefore, testing the equality of two correlated coefficients of variation is equivalent to testing the significance of the regression equation (11). From the theory of least squares, if we define:
and EMS = (TSS − RSS)/(n − 2),
the hypothesis H_{0} : α = β = 0 is rejected when (RSS/2)/EMS exceeds F_{v,2,(n−2)}, the (1 − v)100% percentile value of the F distribution with 2 and (n − 2) degrees of freedom [32].
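As a sketch, the regression test can be carried out with ordinary least squares as follows. We follow Bradley and Blackwood's formulation, in which the joint null hypothesis on the two regression coefficients yields an F statistic with 2 and (n − 2) degrees of freedom; the function name `pm_test` is ours.

```python
import numpy as np
from scipy.stats import f as f_dist

def pm_test(y1, y2):
    """Bradley-Blackwood style regression test on paired data: regress
    d_i = y1_i - y2_i on s_i = y1_i + y2_i and test whether the intercept
    and slope are jointly zero (equal means and equal variances).
    The total sum of squares is taken about zero, since H0 fixes both
    regression coefficients at zero."""
    y1, y2 = np.asarray(y1, dtype=float), np.asarray(y2, dtype=float)
    d, s = y1 - y2, y1 + y2
    n = d.size
    Xmat = np.column_stack([np.ones(n), s])
    beta, *_ = np.linalg.lstsq(Xmat, d, rcond=None)
    sse = float(np.sum((d - Xmat @ beta) ** 2))  # residual SS, n-2 df
    tss = float(np.sum(d ** 2))                  # SS about zero under H0
    F = ((tss - sse) / 2.0) / (sse / (n - 2))
    return F, f_dist.sf(F, 2, n - 2)
```

For instance, the pairs (1, 3), (2, 2), (3, 1) have equal means and equal variances, and the fitted regression removes nothing from the total sum of squares, so F = 0.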
Results
Simulation
The theoretical properties of the test procedures discussed thus far are largely intractable in finite samples. We therefore undertook a Monte Carlo study to determine the levels of significance and powers of these tests over a wide range of parameter values. For this study we generated observations from a multivariate normal distribution with the covariance structure defined in (2). Simulations were performed using programs written in MATLAB (The MathWorks, Inc., Natick, MA).
The parameters of the simulation included the total number of subjects (n), the number of replications (m_{1} = m_{2} = m), and various values of (θ_{1}, θ_{2}, ρ_{1}, ρ_{2}, ρ_{12}). For each of 2000 independent runs of an algorithm constructed to generate observations from the multivariate normal distribution, we estimated the true level of significance and power of the LRT, Wald, score, and PM tests using a nominal significance level of 5% (two-sided) for various combinations of parameters.
Tables 1 and 2 report the empirical significance levels based on 2000 simulated datasets for the four procedures (WT, score, LRT, and PM) for sample sizes of n = 50 and n = 100, respectively. It is seen that all procedures provide satisfactory significance levels at all parameter values examined. The empirical significance levels for smaller sample sizes (n = 10, 20, and 30) were also estimated. All test procedures provided empirical levels that are very close to the 5% nominal level (data not shown).
Table 1. Empirical significance levels based on 2000 runs at nominal level 5% (two sided) for testing θ_{1 }= θ_{2 }= 0.15 using the LRT, Wald, Score and PM for n = 50 subjects and m replicates, ρ_{1 }= ρ_{2 }= ρ.
Table 2. Empirical significance levels based on 2000 runs at nominal level 5% (two sided) for testing θ_{1 }= θ_{2 }= 0.15 using the LRT, Wald, Score and PM for n = 100 subjects and m replicates, ρ_{1 }= ρ_{2 }= ρ.
Tables 3 and 4 display empirical powers based on 2000 simulated datasets for the WT and LRT for sample sizes n = 30 and 50, respectively. As alluded to earlier, the score test is excluded from Tables 3 and 4 because its simulated empirical power values were unacceptably low (as we show in Table 5). We observe that for all parameter values the WT and LRT provide almost identical values of power (Tables 3 and 4). Although the LRT shows greater power than the WT at some parameter combinations, the difference is usually less than three percentage points. We also conducted simulations to estimate the powers of the test statistics for smaller sample sizes (n = 10 and 20; data not shown). We found that for some parameter combinations the Wald and LRT provided acceptable power, especially when the distance between θ_{1} and θ_{2} was large, and showed greater power than both the score and PM tests. The power of the score test was generally very low.
Table 3. Empirical power based on 2000 runs for testing θ_{1 }= θ_{2 }using the LRT and Wald test for n = 30 subjects.
Table 4. Empirical power based on 2000 runs for testing θ_{1 }= θ_{2 }using the LRT and Wald test for n = 50 subjects.
Table 5. Empirical Power of PM, Score and Wald tests based on 2000 data sets, n = 50 subjects, m = 3 replicates.
For selected parameter values, power levels of the PM, Wald, and score tests for n = 50 subjects are given in Table 5. As already mentioned, the power of the score test is generally low. We note that the power of the Wald test is quite sensitive to the distance between θ_{1} and θ_{2}. Note also that the equality of the means and variances implies the equality of the WSCVs, but the reverse is not true. This strong assumption might explain the relatively poor performance of the PM test, particularly when the means are not well separated.
To assess the effect of non-normality on the properties of the proposed test statistics, we generated data from a lognormal distribution and evaluated the performance of the four procedures on 2000 simulated datasets. The empirical levels of the regression-based PM test were quite close to the 5% nominal level, but its power was poor. However, the likelihood-based procedures (Wald, LRT, and score) did not preserve their nominal levels for the majority of the parameter combinations (data not shown).
Applications
Gene expression data
We illustrate the proposed methodologies by analyzing data from two biomedical studies. The first dataset comprises the gene expression measurements of identical RNA preparations on two commercially available microarray platforms, namely Affymetrix (25-mer) and Amersham (30-mer) [14]. The RNA was collected from pancreatic PANC1 cells grown in a serum-rich medium ("control") and 24 h following the removal of the serum ("treatment"). Three biological replicates (B1, B2, and B3) and three technical replicates (T1, T2, and T3) for the first biological replicate (B1) were produced on each platform. Therefore, for each condition (control and treatment), five hybridizations were conducted. The dataset consists of 2009 genes identified as common across the platforms after comparing their GenBank IDs, and was normalized according to each manufacturer's standard software and normalization procedures. More details concerning this dataset can be found in the original article [14].
The results presented in this section were not restricted to the group of differentially expressed genes, and we used the "control" part of the data for both technical and biological replicates. The normalized intensity values are averaged for genes with multiple probes for a given Gene ID. Hence, we have a sample size of n = 2009 genes measured three times (m = 3) by each of the two platforms (or instruments). We have used the within gene coefficient of variation as a measure of reproducibility of a specific platform.
The results of the data analyses are summarized in Table 6. Parameter estimates for both platforms, the estimated WSCV under the null hypothesis, and a confidence interval for the difference between the two WSCVs are given in the table. We note that the correlation estimates remain the same under both hypotheses. Moreover, the intraclass correlations (ρ) are quite high. Using the benchmarks provided in [33], both platforms produce substantially reproducible gene expression levels. Clearly, this is due to the large heterogeneity among the genes in the dataset. Application of the LRT, Wald, and PM tests for the equality of two dependent WSCVs shows that the Amersham platform has a significantly lower WSCV (P < 0.001), i.e. better reproducibility, for both the technical and biological replicates.
Table 6. Microarray Gene Expression data results (n = 2009 genes, m = 3 replicates)
Analysis of computer aided tomographic scan measurements
Here we demonstrate the statistical methodologies of this paper on a much smaller dataset than the microarray gene expression example. The data are from a study using computer-aided tomographic scans (CATSCAN) of the heads of 50 psychiatric patients [20,34]. The measurements are the size of the brain ventricle relative to that of the patient's skull, given by the ventricle-brain ratio VBR = (ventricle size/brain size) × 100. For a given scan, VBR was determined from measurements of the perimeter of the patient's ventricle together with the perimeter of the inner surface of the skull. These measurements were taken either (i) from an automated pixel count (PIX) based on the images displayed on a television screen, or (ii) from a hand-held planimeter (PLAN) applied to a projection of the X-ray image. Table 7 summarizes the results. Clearly, all tests show that PIX has a significantly lower WSCV than PLAN (p < 0.001), that is, better reproducibility.
Table 7. Analysis of computeraided tomographic scan data on 50 patients via PIX or PLAN with two replicates
Discussion
A comparison between the reproducibility of two measuring instruments using the same set of subjects leads naturally to a comparison of two dependent indices. In this paper, several procedures are developed for testing the equality of two dependent within-subject coefficients of variation computed from the same sample of subjects. We proposed two approaches: one likelihood based (the LRT, Wald, and score tests), and the other a regression-based approach (an extension of Pitman-Morgan). We assessed the powers and the empirical significance levels of these methods via extensive Monte Carlo simulations. It is shown that the relatively simple Wald test (WT) is as powerful as the likelihood ratio test (LRT) and that both have consistently greater power than the score test. A simple procedure based on results due to Pitman [1] and Morgan [2] is also developed; it holds its nominal levels and on some occasions is as powerful as the likelihood-based tests.
We illustrated the proposed methodologies with analyses of data from two biomedical studies. The majority of microarray reproducibility and cross-platform agreement studies use Pearson's correlation as an index of reproducibility and agreement, which is not an appropriate measure of reproducibility. Because of the large heterogeneity among the genes in the dataset, the intraclass correlation coefficient would also be an inappropriate index of the reproducibility of a platform, since highly heterogeneous populations artificially produce a high reliability index. Therefore, the WSCV should be used as an index of reproducibility. In addition, the methodology presented in this paper overcomes the difficulty noted by Tan et al. [14], in which the authors state that "Dependence between the datasets would confound any inferences we could make about the differences in correlations. ... determination whether differences in correlation were statistically significant could not be made". In this paper, we have used the within-gene coefficient of variation as a measure of the reproducibility of a specific platform. Therefore, a comparison across platforms leads naturally to a comparison of two dependent within-subject coefficients of variation.
Two issues need to be discussed in this section. The first is related to the nature of the data to be analyzed while the other is related to situations when the assumed underlying model generating the data deviates from the normal distribution.
First, a frequently occurring question in the planning of biomedical investigations is whether to measure the response or trait of interest on a continuous scale (e.g. gene expressions; systolic blood pressures) or a dichotomous scale (e.g. highly expressed vs. lowly expressed genes; hypertensive vs. normotensive). In the case of two measuring devices and two dichotomous responses, the most commonly used measure of test-retest reliability or agreement is the kappa coefficient introduced in [35]. Donner and Eliasziw [36] and, more recently, Shoukri and Donner [37] cautioned against dichotomizing traits measured on continuous scales. They demonstrated that the loss of efficiency in estimating the reliability coefficient can be severe. The conclusion is that for naturally dichotomous traits (e.g. affected vs. not affected) one can use kappa to assess the test-retest reliability, while for continuous traits the methods presented in this paper would be more appropriate.
Second, it should be noted that the inference procedures discussed in this paper (except the PM test) are likelihood based, and their statistical properties may not hold in small samples. The difficulty is that the sampling distribution of a test statistic is then unknown. Alternatively, one may use the bootstrap to estimate the sampling distributions of the test statistics. When the data are hierarchical in nature, with variance-covariance matrix Σ as shown in (2), one may use a model-based approach to generate bootstrap samples [38]; this is achieved by sampling subjects with replacement, estimating the coefficients of variation on each resample, and hence obtaining their empirical sampling distributions. There is already a rich class of bootstrap methods for clustered data in the literature, but there is an absence of detailed theoretical results on the properties of these methods [39]. Gaining insight into bootstrapping clustered data for all these methods and drawing comparisons to our proposed likelihood-based approach warrants serious investigation and is beyond the scope of this paper.
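A subject-level (cluster) bootstrap of the difference in WSCVs along these lines might be sketched as follows; resampling whole rows keeps the within-subject and cross-platform correlation structure intact. The function name and defaults are ours.

```python
import numpy as np

def bootstrap_wscv_diff(X, m, B=2000, seed=0):
    """Subject-level bootstrap of theta1_hat - theta2_hat. X is n-by-2m:
    columns 0..m-1 from platform 1, columns m..2m-1 from platform 2.
    Resampling whole rows (subjects) with replacement preserves the
    within-subject and cross-platform correlation."""
    X = np.asarray(X, dtype=float)
    n = X.shape[0]
    rng = np.random.default_rng(seed)

    def wscv(block):
        # pooled within-subject variance over the block's replicates
        sigma2 = np.sum((block - block.mean(axis=1, keepdims=True)) ** 2) \
                 / (block.shape[0] * (block.shape[1] - 1))
        return np.sqrt(sigma2) / block.mean()

    diffs = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, n, size=n)  # resample subjects with replacement
        diffs[b] = wscv(X[idx, :m]) - wscv(X[idx, m:])
    return diffs
```

A percentile confidence interval for the difference can then be read off the resampled values, e.g. `np.percentile(diffs, [2.5, 97.5])`.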
Conclusion
Comparison of reproducibility or reliability of measurement devices or methods on the same set of subjects comes down to comparison of dependent reliability or reproducibility parameters. Testing the equality of two dependent WSCVs has not previously been dealt with in the statistical literature. The presented methodology overcomes the difficulty noted by data analysts that the issue of dependence, when ignored, would confound inference on measures of reliability or reproducibility. It should also be emphasized that, when comparing reliability indices among platforms, the ICC is not an appropriate measure: because its magnitude depends on the degree of heterogeneity among the subjects (here, the genes), it can be artificially inflated. We therefore recommend the WSCV in similar settings.
The LRT and WT procedures presented in Section 2 may also be extended in a straightforward manner to compare more than two platforms (methods, labs, or measurement devices). A further advantage of the LRT in this context is that it may easily be extended to deal with the case of an unequal number of replicates for each platform.
The codes developed (in MATLAB) can be used to do power calculations for planning a reproducibility study when comparing two methods (or devices), and can be obtained on request from the authors.
APPENDIX
Competing interests
The authors declare that they have no competing interests.
Authors' contributions
MMS conceived of the study problem and derived the analytical results. DC conducted the simulations and analyzed the data. All authors contributed to the writing of the manuscript, and approved its final format.
Acknowledgements
The first three authors would like to thank the research centre administration of the King Faisal Specialist Hospital and Research Centre for their support. Dr. Donner acknowledges the support made to his research by The Natural Sciences and Engineering Research Council of Canada (NSERC).
References

Morgan W: A test for the significance of the difference between two variances in a sample from bivariate population.

Gupta RC, Ma S: Testing the equality of coefficients of variation in k Testing normal populations.
Communications in StatisticsTheory and Methods 1996, 25:115132. Publisher Full Text

Fung WK, Tsang TS: A simulation study comparing tests for the equality of coefficients of variation.
Statistics in Medicine 1998, 17:2003-2014.

Tian L: Inferences on the common coefficient of variation.
Statistics in Medicine 2005, 24(14):2213-2220.

Weerahandi S: Exact Statistical Methods for Data Analysis. New York: Springer; 1995.

Quan H, Shih W: Response to Letter to the Editor.
Biometrics 2000, 56:301-303.

Quan H, Shih W: Assessing reproducibility by the withinsubject coefficient of variation with random effects models.
Biometrics 1996, 52:1195-1203.

Giraudeau B, Ravaud P, Chastang C: Comments on Quan and Shih's "Assessing Reproducibility by the Within-Subject Coefficient of Variation With Random Effects Models".
Biometrics 2000, 56:301-303.

Atkinson G, Neville A: Comment on the use of concordance correlation to assess the agreement between two variables.

Lin LI, Chinchilli V: Rejoinder to the letter to the Editor from Atkinson and Neville.
Biometrics 1997, 53(2):777-778.

Shi L, Tong W, Fang H, Scherf U, Han J, Puri RK, Frueh FW, Goodsaid FM, Guo L, Su Z, Han T, Fuscoe JC, Xu ZA, Patterson TA, Hong H, Xie Q, Perkins RG, Chen JJ, Casciano DA: Cross-platform comparability of microarray technology: intra-platform consistency and appropriate data analysis procedures are essential.
BMC Bioinformatics 2005, 6(Suppl 2):S12.

Irizarry RA, Warren D, Spencer F, Kim IF, Biswal S, Frank BC, Gabrielson E, Garcia JGN, Geoghegan J, Germino G, Griffin C, Hilmer SC, Hoffman E, Jedlicka AE, Kawasaki E, Martínez-Murillo F, Morsberger L, Lee H, Petersen D, Quackenbush J, Scott A, Wilson M, Yang Y, Ye SQ, Yu W: Multiple-laboratory comparison of microarray platforms.
Nature Methods 2005, 2(5):345-350.

Tan PK, Downey TJ, Spitznagel EL Jr, Xu P, Fu D, Dmitrov DS, Lempicki RA, Raaka BM, Cam MC: Evaluation of gene expression measurements from commercial microarray platforms.
Nucleic Acids Res 2003, 31:5676-5684.

Kuo WP, Jenssen TK, Butte AJ, Ohno-Machado L, Kohane IS: Analysis of matched mRNA measurements from two different microarray technologies.
Bioinformatics 2002, 18:405-412.

Yauk CL, Berndt ML, Williams A, Douglas GR: Comprehensive comparison of six microarray technologies.
Nucleic Acids Res 2004, 32(15):e124. doi:10.1093/nar/gnh123

Jarvinen AK, Hautaniemi S, Edgren H, Auvinen P, Saarela J, Kallioniemi OP, Monni O: Are data from different gene expression microarray platforms comparable?
Genomics 2004, 83:1164-1168.

Wang H, He X, Band M, Wilson C, Liu L: A study of inter-lab and inter-platform agreement of DNA microarray data.
BMC Genomics 2005, 6:71.

Lin L: A concordance correlation coefficient to evaluate reproducibility.
Biometrics 1989, 45:255-268.

Dunn G: Design and Analysis of Reliability Studies.
Statistical Methods in Medical Research 1992, 1:123-157.

Donner A, Zou G: Testing the equality of dependent intraclass correlation coefficients.

Shoukri M, El-Kum N, Walter SD: Interval estimation and optimal design for the within-subject coefficient of variation for continuous and binary variables.
BMC Medical Research Methodology 2006, 6:24. doi:10.1186/1471-2288-6-24.

Fleiss J: The Design and Analysis of Clinical Experiments. New York: J Wiley; 1986.

Donner A, Bull S: Inferences concerning a common intraclass correlation coefficient.
Biometrics 1983, 39:771-775.

Bilodeau M, Brenner D: Theory of Multivariate Statistics. New York: Springer; 1999.

Searle SR, Casella G, McCulloch CE: Variance Components. New York: Wiley-Interscience; 1992.

Stuart A, Ord K: Advanced Theory of Statistics. Volume 1. 5th edition. London: Griffin; 1987:324.

Cox DR, Hinkley DV: Theoretical Statistics. London: Chapman and Hall; 1974.

Neyman J, Scott E: On the use of C(α) optimal tests of composite hypotheses.
Bulletin of the International Statistical Institute, Proceedings of the 35th Session 1966, 41:477497.

Neyman J: Optimal asymptotic tests of composite hypotheses. In Probability and Statistics: The Harald Cramér Volume. Edited by Grenander U. New York: Wiley; 1959:213-234.

Bradley E, Blackwood L: Comparing paired data: A simultaneous test for means and variances.
The American Statistician 1989, 43:234-235.

Draper N, Smith H: Applied Regression Analysis. 2nd edition. Wiley-Interscience; 1981.

Landis R, Koch G: The measurement of observer agreement for categorical data.
Biometrics 1977, 33:159-174.

Turner SW, Toone BK, Brett-Jones JR: Computerized tomographic scan in early schizophrenia: preliminary findings.
Psychological Medicine 1986, 16:219-225.

Donner A, Eliasziw M: Statistical implications for the choice between a dichotomous or continuous trait in studies of interobserver agreement.
Biometrics 1994, 50:550-777.

Shoukri MM, Donner A: Efficiency considerations in the analysis of interobserver agreement.
Biostatistics 2001, 2(3):323-336.

Davison AC, Hinkley DV: Bootstrap Methods and Their Application. Cambridge: Cambridge University Press; 1997.

Ukoumunne OC, Davison AC, Gulliford MC, Chinn S: Nonparametric bootstrap confidence intervals for the intraclass correlation coefficient.
Statistics in Medicine 2003, 22:3805-3821.
Prepublication history
The prepublication history for this paper can be accessed here: