
Comparison of two dependent within subject coefficients of variation to evaluate the reproducibility of measurement devices

Abstract

Background

The within-subject coefficient of variation and intra-class correlation coefficient are commonly used to assess the reliability or reproducibility of interval-scale measurements. Comparison of reproducibility or reliability of measurement devices or methods on the same set of subjects comes down to comparison of dependent reliability or reproducibility parameters.

Methods

In this paper, we develop several procedures for testing the equality of two dependent within-subject coefficients of variation computed from the same sample of subjects, a problem which, to the best of our knowledge, has not yet been dealt with in the statistical literature. The Wald, likelihood ratio, and score tests are developed. A simple regression procedure based on results due to Pitman and Morgan is also constructed. Furthermore, we evaluate the statistical properties of these methods via extensive Monte Carlo simulations. The methodologies are illustrated on two data sets. The first consists of microarray gene expressions measured by two platforms, the Affymetrix and the Amersham. Because microarray experiments produce expressions for a large number of genes, one would expect the statistical tests to be asymptotically equivalent. To explore the behaviour of the tests in small or moderate sample sizes, we also illustrate the methodologies on data from computer-aided tomographic scans of 50 patients.

Results

It is shown that the relatively simple Wald test (WT) is as powerful as the likelihood ratio test (LRT) and that both have consistently greater power than the score test. The regression test holds its empirical levels and on some occasions is as powerful as the WT and the LRT.

Conclusion

A comparison between the reproducibility of two measuring instruments using the same set of subjects leads naturally to a comparison of two correlated indices. The presented methodology overcomes the difficulty noted by data analysts that dependence between datasets would confound any inferences one could make about the differences in measures of reliability and reproducibility. The statistical tests presented in this paper have good properties in terms of statistical power.


Background

An extensive literature has been developed on procedures for testing the equality of two or more independent coefficients of variation as measures of reproducibility [3–5]. This work shows that likelihood-based methods such as the likelihood ratio (LR) test, the score test, and tests based on the method of generalized statistics developed by Weerahandi [6] provide efficient procedures for comparing coefficients of variation (CV) in univariate normal populations or from independent samples. However, there are situations where comparing CVs from related samples should be considered. A typical situation is when two instruments are used to measure the same set of subjects, and each subject is repeatedly measured by the same instrument. We explain in the Methods section why the within-subject coefficient of variation (WSCV) is a more appropriate measure of reproducibility than the CV. Many authors use the terms reliability and reproducibility interchangeably [7–9]; however, we believe they are conceptually different. Reliability is the degree of closeness of repeated observations on the same subject under the same experimental conditions, so the instrument is always the same. The intra-class correlation coefficient (ICC) is commonly used as a measure of reliability. It is calculated as the ratio of the between-subject variance to the total variance. Therefore, the larger the heterogeneity among the subjects, for a lower or equal random error, the easier it is to differentiate among subjects; in other words, the ICC measures how distinguishable the subjects are. On the other hand, reproducibility determines the degree of closeness of repeated observations made on the same subject either by the same instrument or by different instruments. There is a wide debate among statisticians and psychometricians about the choice of appropriate measures of reliability and reproducibility; we refer the interested reader to [10, 11]. The main focus of our paper is on the reproducibility parameter.

An important application from molecular biology research in which correlated/dependent reproducibility coefficients are compared arises when microarray technologies are compared in terms of the reproducibility of gene expression measurements. DNA microarrays are powerful technologies that make it possible to study genome-wide gene expression and are extensively used in biological research. As the technology evolves rapidly, a number of different platforms have become available, which makes it challenging for researchers to know which technology is best suited for their needs. Various studies have directly compared the performance of one platform with another in terms of cross-platform comparability and agreement of gene expression results. However, the results of these studies are conflicting: some demonstrate concordance, others discordance, between technologies [12–17]. Thus one needs to take into consideration the accuracy and reproducibility of different types of microarrays when allocating laboratory resources for future experiments. The key factors for selecting an appropriate platform are (1) intra-assay reproducibility, and (2) the degree of cross-platform agreement [18]. Concordance among microarray platforms would allow researchers to directly compare their measurements and perform meta-analyses.

Most microarray reliability or reproducibility and cross-platform studies use Pearson's correlation as an index of reproducibility or agreement. However, it has long been recognized that procedures such as the paired t-test and Pearson's correlation are not appropriate tools for measuring agreement between measuring devices [19, 20]. Rather, indices such as the intra-class correlation coefficient [21] and the within-subject coefficient of variation should be used as measures of reproducibility. It has also been demonstrated that the within-subject coefficient of variation is very useful in assessing instrument reproducibility [8, 22].

The main focus of this paper is to develop several procedures for testing the equality of two dependent within-subject coefficients of variation computed from the same sample of subjects, a problem which, to the best of our knowledge, has not been dealt with in the statistical literature, and to evaluate the statistical properties of these methods via extensive Monte Carlo simulation. We propose two approaches: one is likelihood based (LRT, Wald, and score tests), and the other is a regression-based approach referred to as the PM test. After evaluating the statistical properties (power and empirical level of significance) of these tests using Monte Carlo simulation, the methodology is illustrated on data from two biomedical studies.

Methods

Likelihood based methodology

Suppose that we are interested in comparing the reproducibility of two instruments. Let x_{ijl} be the jth measurement of the ith subject by the lth instrument, j = 1, 2, ..., m_l, i = 1, 2, ..., n, and l = 1, 2. To evaluate the WSCV we consider the one-way random effects model

x_{ijl} = μ_l + b_i + e_{ijl}

where μ_l is the mean value of measurements made by the lth instrument, the b_i are independent random subject effects with b_i ~ N(0, σ_b²), and the e_{ijl} are independent N(0, σ_l²) errors. Many authors have used the intra-class correlation coefficient (ICC), ρ_l, defined by the ratio ρ_l = σ_b²/(σ_b² + σ_l²), as a measure of reproducibility/reliability [18, 23]. Quan and Shih [8] argued that ρ_l is study-population based since it involves the between-subject variation, meaning that the more heterogeneous the population, the larger ρ_l. Alternatively, they proposed the within-subject coefficient of variation (WSCV), θ_l = σ_l/μ_l, as a measure of reproducibility. It determines the degree of closeness of repeated measurements taken on the same subject either by the same instrument or on different occasions under the same conditions. Clearly, the smaller the WSCV, the better the reproducibility. We distinguish the WSCV from the coefficient of variation CV_l = (σ_b² + σ_l²)^(1/2)/μ_l, since CV_l involves σ_b² in the numerator and, like ρ_l, is population based. Therefore, more heterogeneity in the population would result in a larger value of CV_l. For that reason we focus our work on the WSCV rather than the CV. We also note that there is an inverse relationship between the ICC (ρ_l) and the corresponding within-subject variance σ_l².
Clearly, larger values of ICC (higher reliability) would be associated with smaller WSCV (better reproducibility). The focus of this paper is on aspects of statistical inference on the difference between two correlated WSCV. The inferential procedure depends on the multivariate normality of the measurements and is mainly likelihood based. The following set-up is to facilitate the construction of the likelihood function.
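As a small numerical illustration of this distinction (the values of μ_l, σ_b and σ_l below are hypothetical and chosen only for illustration), increasing the between-subject spread raises the ICC and the CV but leaves the WSCV unchanged:

```python
# Hypothetical values: mu_l = 10, sigma_l = 1 (within-subject SD), varying sigma_b.
mu_l, sigma_l = 10.0, 1.0
for sigma_b in (1.0, 2.0, 4.0):
    icc = sigma_b**2 / (sigma_b**2 + sigma_l**2)      # rho_l = sigma_b^2 / (sigma_b^2 + sigma_l^2)
    wscv = sigma_l / mu_l                             # theta_l = sigma_l / mu_l
    cv = (sigma_b**2 + sigma_l**2) ** 0.5 / mu_l      # CV_l = (sigma_b^2 + sigma_l^2)^(1/2) / mu_l
    print(f"sigma_b = {sigma_b}:  ICC = {icc:.2f}  WSCV = {wscv:.2f}  CV = {cv:.2f}")
```

Only the WSCV is unaffected by the heterogeneity of the subjects.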

Let

X_i = (X_{i1}, X_{i2}, ..., X_{i m₁}, X_{i, m₁+1}, X_{i, m₁+2}, ..., X_{i, m₁+m₂})′

denote the measurements on the ith subject, i = 1, 2, ..., n, where X_{i1}, X_{i2}, ..., X_{i m₁} are the m₁ measurements obtained by the first method (platform) and X_{i, m₁+1}, X_{i, m₁+2}, ..., X_{i, m₁+m₂} are the m₂ measurements obtained by the second method (platform). We assume that X_i ~ N(μ, Σ), where μᵀ = (μ₁1ᵀ_{m₁}, μ₂1ᵀ_{m₂}) and

Σ = [ σ₁² I_{m₁} + (ρ₁/(1 − ρ₁)) σ₁² J_{m₁}        ρ₁₂ σ₁ σ₂ J_{m₁×m₂}
      ρ₁₂ σ₁ σ₂ J_{m₂×m₁}                          σ₂² I_{m₂} + (ρ₂/(1 − ρ₂)) σ₂² J_{m₂} ]
(2)

In these expressions 1_k is a column vector with all k elements equal to 1, I_k is the k × k identity matrix, and J_k and J_{k×t} are k × k and k × t matrices with all elements equal to 1. Thus the model assumes that the m₁ observations taken by the first platform have common mean μ₁, common variance σ₁², and common intra-class correlation ρ₁, whereas the m₂ measurements taken by the second platform have common mean μ₂, common variance σ₂², and common intra-class correlation ρ₂. Moreover, ρ₁₂ denotes the interclass correlation between any pair of measurements x_{ij} (j = 1, 2, ..., m₁) and x_{i, m₁+t} (t = 1, 2, ..., m₂), and is also assumed constant across all subjects in the population.

For the lth method, the WSCV, which will be denoted by θ_l in the remainder of the paper, is defined as

θ_l = σ_l/μ_l,   l = 1, 2.

Our primary aim is to develop and evaluate methods of testing H₀: θ₁ = θ₂ taking into account dependencies induced by a positive value of ρ₁₂. We restrict our evaluation to reproducibility studies having m₁ = m₂ = m.

Methods for testing the null hypothesis

Wald test (WT)

If X₁, X₂, ..., X_n is a sample from the above multivariate normal distribution, then the log-likelihood function L, as a function of ψ = (μ₁, μ₂, σ₁², σ₂², ρ₁, ρ₂, ρ₁₂), is given by:

−2L = Q + nm log(σ₁² σ₂²) − n log((1 − ρ₁)(1 − ρ₂)) + n log w
(3)

where,

w = u₁u₂ − m²ρ₁₂²,

u_l = 1 + (m − 1)ρ_l,   l = 1, 2, and

Q = S₁²/σ₁² + [m(1 − ρ₁)u₂/(wσ₁²)] Σ_{i=1}^{n} (x̄_{i1} − μ₁)² + S₂²/σ₂² + [m(1 − ρ₂)u₁/(wσ₂²)] Σ_{i=1}^{n} (x̄_{i2} − μ₂)² − [2m²ρ₁₂/(wσ₁σ₂((1 − ρ₁)(1 − ρ₂))^{1/2})] Σ_{i=1}^{n} (x̄_{i1} − μ₁)(x̄_{i2} − μ₂)

From [24], the conditions {1 + (m − 1)ρ₁}{1 + (m − 1)ρ₂} > m²ρ₁₂² and −1/(m − 1) < ρ_l < 1 must be satisfied for the sample to arise from a non-singular multivariate normal distribution.

The summary statistics given in (3) are defined as:

x̄_{ij} = Σ_{k=1}^{m} x_{ijk}/m,   i = 1, 2, ..., n; j = 1, 2,
S_j² = Σ_{i=1}^{n} Σ_{k=1}^{m} (x_{ijk} − x̄_{ij})²

The maximum likelihood estimates (MLEs) of μ_l and σ_l² are given respectively by μ̂_l = x̄_l and σ̂_l² = S_l²/[n(m − 1)], where x̄_l = (1/n) Σ_{i=1}^{n} x̄_{il} and l = 1, 2. Clearly, σ̂_l² exists for values of m > 1; therefore we shall assume that m > 1 throughout this paper. From [24], we obtain ρ̂₁ and ρ̂₂ by computing Pearson's product-moment correlation over all possible pairs of measurements that can be constructed within platforms 1 and 2, respectively, with ρ̂₁₂ similarly obtained by computing this correlation over the nm² pairs (x_{ij}, x_{i, m+t}).
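The estimates described above are straightforward to compute once the data are arranged as an n × 2m matrix whose first m columns come from platform 1. The sketch below (our own layout and variable names; the simulated numbers are placeholders, not data from the paper, and the pairing convention for the correlations is one common choice) shows one way to obtain μ̂_l, σ̂_l², ρ̂₁, ρ̂₂, ρ̂₁₂ and the resulting θ̂_l:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 50, 3
X = rng.normal(10.0, 1.0, size=(n, 2 * m))           # placeholder data, not the paper's

X1, X2 = X[:, :m], X[:, m:]

# Subject means and within-subject sums of squares (x-bar_ij and S_j^2)
xbar1, xbar2 = X1.mean(axis=1), X2.mean(axis=1)
S1_sq = ((X1 - xbar1[:, None]) ** 2).sum()
S2_sq = ((X2 - xbar2[:, None]) ** 2).sum()

# MLEs: mu-hat_l = grand mean, sigma-hat_l^2 = S_l^2 / (n(m-1))
mu1_hat, mu2_hat = xbar1.mean(), xbar2.mean()
sig1_sq, sig2_sq = S1_sq / (n * (m - 1)), S2_sq / (n * (m - 1))

def intraclass(Y):
    # Pearson correlation over all within-platform pairs of replicates (both orders,
    # so the estimate is symmetric in the replicates).
    a, b = [], []
    for j in range(Y.shape[1]):
        for k in range(Y.shape[1]):
            if j != k:
                a.append(Y[:, j]); b.append(Y[:, k])
    return np.corrcoef(np.concatenate(a), np.concatenate(b))[0, 1]

rho1_hat, rho2_hat = intraclass(X1), intraclass(X2)

# Interclass correlation: Pearson correlation over the n*m^2 cross-platform pairs
a = np.repeat(X1, m, axis=1).ravel()                  # pairs (x_ij, x_{i, m+t})
b = np.tile(X2, (1, m)).ravel()
rho12_hat = np.corrcoef(a, b)[0, 1]

theta1_hat = np.sqrt(sig1_sq) / mu1_hat               # WSCV estimates
theta2_hat = np.sqrt(sig2_sq) / mu2_hat
```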

The WT of H₀: θ₁ = θ₂ requires the evaluation of var(θ̂_l), l = 1, 2, and cov(θ̂₁, θ̂₂). To obtain these values we use elements of Fisher's information matrix, along with the delta method [26, 27]. On writing:

ψ = (ψ₁, ψ₂)′, ψ₁ = (μ₁, μ₂)′, and ψ₂ = (σ₁², σ₂², ρ₁, ρ₂, ρ₁₂)′, the Fisher information matrix I = −E[∂²L/∂ψ∂ψ′] has the following structure:

I = [ I₁₁   O
      O     I₂₂ ].
(4)

This is based on a result from [26] (page 239) indicating that I₁₂ = I₂₁′ = −E(∂²L/∂ψ₁∂ψ₂′) = 0. Therefore, from the asymptotic theory of maximum likelihood estimation we have:

I₁₁⁻¹ = [ var(μ̂₁)          cov(μ̂₁, μ̂₂)
          cov(μ̂₁, μ̂₂)      var(μ̂₂) ]

and the elements of I₂₂ are given in the Appendix.

The elements of I₂₂⁻¹ form the asymptotic variance-covariance matrix of the maximum likelihood estimators of the covariance parameters. Inverting Fisher's information matrices we get:

var(μ̂_l) = σ_l²[1 + (m − 1)ρ_l] / [nm(1 − ρ_l)].
(5)

Applying the delta method [27], we can show, to the first order of approximation that:

var(σ̂_l) ≈ σ_l²/[2n(m − 1)],   l = 1, 2.
(6)

The maximum likelihood estimator of θ_l is θ̂_l = σ̂_l/μ̂_l. Again, by application of the delta method, we can show to the first order of approximation that:

var(θ̂_l) ≈ θ_l⁴[1 + (m − 1)ρ_l]/[nm(1 − ρ_l)] + θ_l²/[2n(m − 1)],
(7)

as was shown by Quan and Shih [8].

Again using the delta method we show approximately that:

cov(θ̂₁, θ̂₂) ≈ 2θ₁²θ₂²ρ₁₂ / [n((1 − ρ₁)(1 − ρ₂))^{1/2}].
(8)

From [28] we apply the large sample theory of maximum likelihood to establish that:

Z = (θ̂₁ − θ̂₂) / [var(θ̂₁) + var(θ̂₂) − 2cov(θ̂₁, θ̂₂)]^{1/2}
(9)

is approximately distributed under H₀ as a standard normal deviate. The denominator of Z is the standard error of θ̂₁ − θ̂₂ and is denoted by SE(θ̂₁ − θ̂₂). Since this standard error contains unknown parameters, its maximum likelihood estimate ŜE(θ̂₁ − θ̂₂) is obtained by substituting θ̂_l for θ_l, ρ̂_l for ρ_l, and ρ̂₁₂ for ρ₁₂. Moreover, we may construct an approximate (1 − α)100% confidence interval on (θ₁ − θ₂) given as:

θ̂₁ − θ̂₂ ± z_{α/2} ŜE(θ̂₁ − θ̂₂), where z_{α/2} is the (1 − α/2)100% cut-off point of the standard normal distribution.
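Equations (5)-(9) translate directly into code. In the sketch below the plug-in estimates θ̂₁, θ̂₂, ρ̂₁, ρ̂₂ and ρ̂₁₂ are assumed to be available already (the numerical values shown are placeholders), and a 95% interval is used for illustration:

```python
import math

n, m = 50, 3
theta1, theta2 = 0.15, 0.18          # placeholder estimates of the two WSCVs
rho1, rho2, rho12 = 0.80, 0.75, 0.60

def var_theta(theta, rho):
    # Equation (7): approximate variance of theta-hat_l
    return theta**4 * (1 + (m - 1) * rho) / (n * m * (1 - rho)) + theta**2 / (2 * n * (m - 1))

# Equation (8): approximate covariance of the two WSCV estimates
cov12 = 2 * theta1**2 * theta2**2 * rho12 / (n * math.sqrt((1 - rho1) * (1 - rho2)))

se = math.sqrt(var_theta(theta1, rho1) + var_theta(theta2, rho2) - 2 * cov12)
z = (theta1 - theta2) / se                                   # Equation (9), compared with N(0, 1)
ci = (theta1 - theta2 - 1.96 * se, theta1 - theta2 + 1.96 * se)   # 95% interval on theta_1 - theta_2
print(z, ci)
```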

Likelihood ratio test (LRT)

An LRT of H₀: θ₁ = θ₂ was developed numerically and computed by adopting the following algorithm:

1- Set μ_l = σ_l/θ_l, l = 1, 2, in Equation (3); thereafter,

2- Set θ₁ = θ₂ = θ in (3);

3- Minimize the resulting expression with respect to all six parameters (σ₁, σ₂, ρ₁, ρ₂, ρ₁₂, θ); and

4- Subtract from this minimum the minimum of −2L computed over all seven parameters (σ₁, σ₂, ρ₁, ρ₂, ρ₁₂, θ₁, θ₂) of the unrestricted model.

It then follows from standard likelihood theory that the resulting test statistic is approximately chi-square distributed with 1 degree of freedom under H₀.
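A minimal sketch of this numerical LRT is given below, assuming SciPy is available; neg2L transcribes −2L from Equation (3) with μ_l replaced by σ_l/θ_l, and the subject means and within-subject sums of squares are taken to have been computed as in the earlier sketch. Function names, starting values and optimizer settings are our own illustrative choices:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

def neg2L(p, xbar1, xbar2, S1sq, S2sq, m):
    # p = (sigma1, sigma2, rho1, rho2, rho12, theta1, theta2)
    sig1, sig2, rho1, rho2, rho12, th1, th2 = p
    n = xbar1.size
    u1, u2 = 1 + (m - 1) * rho1, 1 + (m - 1) * rho2
    w = u1 * u2 - m**2 * rho12**2
    if (w <= 0 or min(sig1, sig2, th1, th2) <= 0
            or max(rho1, rho2) >= 1 or min(rho1, rho2) <= -1 / (m - 1)):
        return 1e12                                   # crude penalty outside the admissible region
    mu1, mu2 = sig1 / th1, sig2 / th2                 # mu_l = sigma_l / theta_l
    d1, d2 = xbar1 - mu1, xbar2 - mu2
    Q = (S1sq / sig1**2
         + m * (1 - rho1) * u2 / (w * sig1**2) * np.sum(d1**2)
         + S2sq / sig2**2
         + m * (1 - rho2) * u1 / (w * sig2**2) * np.sum(d2**2)
         - 2 * m**2 * rho12 / (w * sig1 * sig2 * np.sqrt((1 - rho1) * (1 - rho2))) * np.sum(d1 * d2))
    return Q + n * m * np.log(sig1**2 * sig2**2) - n * np.log((1 - rho1) * (1 - rho2)) + n * np.log(w)

def lrt(xbar1, xbar2, S1sq, S2sq, m, start):
    # start = (sig1, sig2, rho1, rho2, rho12, theta1, theta2), e.g. the plug-in estimates
    args = (xbar1, xbar2, S1sq, S2sq, m)
    full = minimize(neg2L, start, args=args, method="Nelder-Mead")        # 7 free parameters
    restricted = lambda q: neg2L(np.r_[q[:5], q[5], q[5]], *args)         # theta1 = theta2 = theta
    null = minimize(restricted, np.r_[start[:5], start[5]], method="Nelder-Mead")
    stat = null.fun - full.fun                        # approx. chi-square with 1 df under H0
    return stat, chi2.sf(stat, df=1)
```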

Score test

One of the advantages of likelihood-based inference is that, in addition to the WT and the LRT, Rao's score test can also be readily developed. The motivation for it is that it can sometimes be easier to maximize the likelihood function under the null hypothesis than under the alternative hypothesis. A standard procedure for performing the score test of H₀: θ₁ = θ₂ is to set θ₂ = θ₁ + Δ, so that the null hypothesis is equivalent to H₀: Δ = 0, where Δ is unrestricted. Replacing μ_l by σ_l/θ_l, the log-likelihood function L is then independent of μ_l.

Let L = L(Δ; ψ*) = L(Δ; θ₁, σ₁, σ₂, ρ₁, ρ₂, ρ₁₂), and let l̇₁ = ∂L/∂Δ and l̇₂ = ∂L/∂ψ*.

From [28] the score statistic is given by:

S = l̇₁ᵀ A₁•₂⁻¹ l̇₁,

where

l̇₁ = ∂L/∂Δ |_{Δ=0} = [nm/(ŵθ̂₁²)] [ μ̂₁(1 − ρ̂₂)(θ̂₁ − θ̂₂)/(θ̂₁θ̂₂) ]
(10)

and A₁•₂ = A₁₁ − A₁₂A₂₂⁻¹A₂₁. The matrices on the right-hand side of A₁•₂ are obtained by partitioning the Fisher information matrix A so that A = (A₁₁ A₁₂; A₂₁ A₂₂), where A₁₁ = E(−∂²L/∂Δ²), A₁₂ = A₂₁ᵀ = E(−∂²L/∂Δ∂ψ*), and A₂₂ = E(−∂²L/∂ψ*∂ψ*ᵀ), with all the matrices on the right-hand side of A₁•₂ evaluated at Δ = 0. When an estimator other than the MLE is used for the nuisance parameters ψ*, provided that the estimator ψ̂* is √n-consistent, it has been shown that the asymptotic distribution of S is that of a chi-square with 1 degree of freedom [29, 30].

The score test has been applied in many situations and has proven to be locally powerful. Unfortunately, the inversion of A₁•₂ is quite complicated and we cannot obtain a simple expression for S that can be easily used. Moreover, we have also found through extensive simulations that while the score test holds its levels of significance, it is less powerful than the LRT and WT across all parameter configurations. We therefore focus our subsequent discussion of power on the LRT and WT.

Regression test

Pitman [1] and Morgan [2] introduced a technique for testing the equality of the variances of two correlated, normally distributed random variables. It is constructed simply to test for zero correlation between the sums and differences of the paired data. Bradley and Blackwood [31] extended Pitman and Morgan's idea to a regression context that affords a simultaneous test for both the means and the variances. The test is applicable to many paired-data settings, for example in evaluating the reproducibility of lab test results obtained from two different sources. The test can also be used in repeated measures experiments, such as in comparing the structural effects of two drugs applied to the same set of subjects. Here we generalize the results of Bradley and Blackwood to test the simultaneous equality of the means and variances of two correlated variables, which implies the equality of their coefficients of variation.

Let X̄_{ij} = Σ_{k=1}^{m} X_{ijk}/m, and define d_i = X̄_{i1} − X̄_{i2} and s_i = X̄_{i1} + X̄_{i2}.

Direct application of the multivariate normal theory shows that the conditional expectation of d i on s i is linear [32]. That is

E(d_i | s_i) = α + βs_i,

where

α = (μ₁ − μ₂) − (μ₁ + μ₂)(σ₁² − σ₂²)k
(11.a)
β = (σ₁² − σ₂²)k
(11.b)

where

k⁻¹ = σ₁²(1 − ρ₁)⁻¹(1 + (2m − 1)ρ₁) + σ₂²(1 − ρ₂)⁻¹(1 + (2m − 1)ρ₂) is strictly positive.

The proof is straightforward and is therefore omitted.

It can be shown, by direct application of multivariate normal theory, that the conditional expectation (11) is linear and does not depend on the parameter ρ₁₂.

From (11.a) and (11.b), it is clear that α = β = 0 if and only if μ₁ = μ₂ and σ₁ = σ₂ simultaneously. Therefore, testing the equality of the two correlated coefficients of variation is equivalent to testing the significance of the regression equation (11). From the theory of least squares, if we define:

TSS = Σ_{i=1}^{n} (d_i − d̄)²,   RSS = β̂² Σ_{i=1}^{n} (s_i − s̄)²,   and EMS = (TSS − RSS)/(n − 2),

the hypothesis H₀: α = β = 0 is rejected when RSS/EMS exceeds F_{v, 1, (n−2)}, the (1 − v)100% percentile value of the F-distribution with 1 and (n − 2) degrees of freedom [32].
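The PM test uses only the subject means. The following sketch transcribes the quantities defined above (d_i, s_i, TSS, RSS, EMS and the F ratio); the function name and the n × m data layout are our own assumptions:

```python
import numpy as np
from scipy.stats import f as f_dist

def pm_test(X1, X2):
    # X1, X2: (n x m) arrays of replicates from platforms 1 and 2 for the same n subjects.
    n = X1.shape[0]
    d = X1.mean(axis=1) - X2.mean(axis=1)                 # d_i = xbar_i1 - xbar_i2
    s = X1.mean(axis=1) + X2.mean(axis=1)                 # s_i = xbar_i1 + xbar_i2
    beta_hat = np.sum((s - s.mean()) * (d - d.mean())) / np.sum((s - s.mean()) ** 2)
    TSS = np.sum((d - d.mean()) ** 2)
    RSS = beta_hat**2 * np.sum((s - s.mean()) ** 2)
    EMS = (TSS - RSS) / (n - 2)
    F = RSS / EMS
    return F, f_dist.sf(F, 1, n - 2)                      # compared with F(1, n-2) as in the text
```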

Results

Simulation

The theoretical properties of the test procedures discussed thus far are largely intractable in finite samples. We therefore conducted a Monte Carlo study to determine the levels of significance and powers of these tests over a wide range of parameter values. For this study we generated observations from a multivariate normal distribution with the covariance structure defined in (2). Simulations were performed using programs written in MATLAB (The MathWorks, Inc., Natick, MA).

The parameters of the simulation included the total number of subjects (n), the number of replications (m₁ = m₂ = m), and various values of (θ₁, θ₂, ρ₁, ρ₂, ρ₁₂). For each of 2000 independent runs of an algorithm constructed to generate observations from a multivariate normal distribution, we estimated the true level of significance and power of the LRT, Wald, score, and PM tests using a nominal level of significance of 5% (two sided) for various combinations of parameters.
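A sketch of one such simulated dataset is given below; it builds the covariance matrix of Equation (2) block by block and draws n subject vectors from the corresponding multivariate normal distribution. The particular parameter values, and the choice σ₁ = σ₂ = 1 with μ_l = σ_l/θ_l, are illustrative assumptions rather than the exact settings used in the paper:

```python
import numpy as np

def simulate(n=50, m=3, theta=(0.15, 0.15), sigma=(1.0, 1.0),
             rho=(0.8, 0.8), rho12=0.6, seed=0):
    s1, s2 = sigma
    r1, r2 = rho
    # The condition {1+(m-1)rho1}{1+(m-1)rho2} > m^2 rho12^2 quoted from [24] must hold.
    assert (1 + (m - 1) * r1) * (1 + (m - 1) * r2) > m**2 * rho12**2
    mu = np.concatenate([np.full(m, s1 / theta[0]), np.full(m, s2 / theta[1])])
    B11 = s1**2 * (np.eye(m) + r1 / (1 - r1) * np.ones((m, m)))   # platform 1 block of (2)
    B22 = s2**2 * (np.eye(m) + r2 / (1 - r2) * np.ones((m, m)))   # platform 2 block of (2)
    B12 = rho12 * s1 * s2 * np.ones((m, m))                       # cross-platform block of (2)
    Sigma = np.block([[B11, B12], [B12.T, B22]])
    rng = np.random.default_rng(seed)
    # Returns an (n x 2m) matrix: columns 1..m are platform 1, columns m+1..2m platform 2.
    return rng.multivariate_normal(mu, Sigma, size=n)

X = simulate()
```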

Tables 1 and 2 report the empirical significance levels based on 2000 simulated datasets for four (WT, Score, LRT and PM) procedures for sample size of n = 50 and n = 100, respectively. It is seen that all procedures provide satisfactory significance levels at all parameter values examined. The empirical significance levels for smaller sample sizes (n = 10, 20, and 30) were also estimated. All test procedures provided empirical levels that are very close to the 5% nominal level (data not shown).

Table 1 Empirical significance levels based on 2000 runs at nominal level 5% (two sided) for testing θ 1 = θ 2 = 0.15 using the LRT, Wald, Score and PM for n = 50 subjects and m replicates, ρ1 = ρ2 = ρ.
Table 2 Empirical significance levels based on 2000 runs at nominal level 5% (two sided) for testing θ 1 = θ 2 = 0.15 using the LRT, Wald, Score and PM for n = 100 subjects and m replicates, ρ1 = ρ2 = ρ.

Tables 3 and 4 display empirical powers based on 2000 simulated datasets for the WT and LRT for sample sizes n = 30 and 50, respectively. As alluded to earlier, the score test is excluded from the power Tables 3 and 4 because its simulated empirical power values were unacceptably low (as we show in Table 5). We observe that for all parameter values the WT and LRT provide almost identical values of power (Tables 3 and 4); although the LRT shows greater power than the WT at some parameter combinations, the difference is usually less than three percentage points. We also conducted simulations to estimate the powers of the test statistics for smaller sample sizes (n = 10 and 20) (data not shown). We found that for some parameter combinations the Wald and LRT provided acceptable power, especially when the distance between θ₁ and θ₂ is large, and showed greater power than both the score and PM tests. The power of the score test was generally very low.

Table 3 Empirical power based on 2000 runs for testing θ 1 = θ 2 using the LRT and Wald test for n = 30 subjects.
Table 4 Empirical power based on 2000 runs for testing θ1 = θ2 using the LRT and Wald test for n = 50 subjects.
Table 5 Empirical Power of PM, Score and Wald tests based on 2000 data sets, n = 50 subjects, m = 3 replicates.

For selected parameter values, power levels of the PM, Wald, and score tests for n = 50 subjects are given in Table 5. As already mentioned, the power of the score test is generally low. We note that the power of the Wald test is quite sensitive to the distance between θ₁ and θ₂. We also note that equality of the means and variances implies equality of the WSCVs, but the reverse is not true. This stronger requirement might explain the relatively poor performance of the PM test, particularly when the means are not well separated.

To assess the effect of non-normality on the properties of the proposed test statistics we generated data from a log-normal distribution and evaluated the performance of the four procedures on 2000 simulated datasets. The empirical levels of the regression-based PM test were quite close to the 5% nominal level, but its power was poor. The likelihood-based procedures (Wald, LRT, and score) did not preserve their nominal levels for the majority of the parameter combinations (data not shown).

Applications

Gene expression data

We illustrate the proposed methodologies by analyzing data from two biomedical studies. In the first data set we illustrate the methodology on the gene expression measurements of identical RNA preparations obtained on two commercially available microarray platforms, namely Affymetrix (25-mer) and Amersham (30-mer) [14]. The RNA was collected from pancreatic PANC-1 cells grown in a serum-rich medium ("control") and 24 h following the removal of the serum ("treatment"). Three biological replicates (B1, B2, and B3) and three technical replicates (T1, T2, and T3) for the first biological replicate (B1) were produced on each platform; therefore, for each condition (control and treatment) five hybridizations were conducted. The dataset consists of 2009 genes that were identified as common across the platforms after comparing their GenBank IDs, and was normalized according to the manufacturers' standard software and normalization procedures. More details concerning this dataset can be found in the original article [14].

The analyses presented in this section were not restricted to the group of differentially expressed genes, and we used the "control" part of the data for both technical and biological replicates. The normalized intensity values were averaged for genes with multiple probes for a given GenBank ID. Hence, we have a sample of n = 2009 genes, each measured three times (m = 3) on each of the two platforms (or instruments). We used the within-gene coefficient of variation as the measure of reproducibility of a specific platform.
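For readers who wish to reproduce this kind of summary, the sketch below gives one simple moment-type estimate of the within-gene coefficient of variation from an n-by-m matrix of normalized intensities (rows = genes, columns = replicates on one platform). It is illustrative only, not necessarily the likelihood-based estimator of Section 2, and the input names used in the usage note are hypothetical.

```matlab
% Illustrative moment-type estimator of the within-subject (within-gene) CV.
function theta = wscv_hat(Y)
    s2_within = mean(var(Y, 0, 2));   % average within-gene variance across rows
    mu_hat    = mean(Y(:));           % grand mean intensity
    theta     = sqrt(s2_within) / mu_hat;
end
```

Applying it to the two n = 2009 by m = 3 intensity matrices, say `wscv_hat(Y_affy)` and `wscv_hat(Y_amersham)`, yields the pair of dependent WSCV estimates whose equality is assessed by the tests summarized in Table 6.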

The results of the data analyses are summarized in Table 6, which gives the parameter estimates for both platforms, the estimated WSCV under the null hypothesis, and a confidence interval for the difference between the two WSCVs. We note that the correlation estimates remain the same under both hypotheses and that the intraclass correlations (ρ) are quite high. By the benchmarks provided in [33], both platforms produce substantially reproducible gene expression levels; clearly, this is due to the large heterogeneity among the genes in the data set. Application of the LRT, Wald, and PM tests for the equality of two dependent WSCVs shows that Amersham has a significantly lower WSCV (P < 0.001), i.e., better reproducibility, for both the technical and biological replicates.

Table 6 Microarray Gene Expression data results (n = 2009 genes, m = 3 replicates)

Analysis of computer aided tomographic scan measurements

Here we demonstrate the statistical methodologies of this paper on a much smaller data set than the microarray gene expression example. The data are from a study of computer-aided tomographic scans (CAT scans) of the heads of 50 psychiatric patients [20, 34]. The measurement is the size of the brain ventricle relative to that of the patient's skull, given by the ventricle-brain ratio VBR = (ventricle size/brain size) × 100. For a given scan, the VBR was determined from measurements of the perimeter of the patient's ventricle together with the perimeter of the inner surface of the skull, taken either (i) from an automated pixel count (PIX) based on the images displayed on a television screen, or (ii) with a hand-held planimeter (PLAN) on a projection of the X-ray image. Table 7 summarizes the results. All tests show that PIX has a significantly lower WSCV than PLAN (p < 0.001), that is, better reproducibility.

Table 7 Analysis of computer-aided tomographic scan data on 50 patients via PIX or PLAN with two replicates

Discussion

A comparison between the reproducibility of two measuring instruments using the same set of subjects leads naturally to a comparison of two dependent indices. In this paper, several procedures are developed for testing the equality of two dependent within-subject coefficients of variation computed from the same sample of subjects. We proposed two approaches: one likelihood based (LRT, Wald, and Score tests) and one regression based (an extension of the Pitman-Morgan test). We assessed the powers and empirical significance levels of these methods via extensive Monte Carlo simulations. The relatively simple Wald test (WT) is as powerful as the likelihood ratio test (LRT), and both have consistently greater power than the Score test. The simple procedure based on results due to Pitman [1] and Morgan [2] holds its empirical levels and, for some parameter configurations, is as powerful as the likelihood-based tests.

We illustrated the proposed methodologies with analyses of data from two biomedical studies. The majority of microarray reproducibility and cross-platform agreement studies use Pearson's correlation as an index of reproducibility and agreement, which is not an appropriate measure of reproducibility. Because of the large heterogeneity among the genes in the data set, the intra-class correlation coefficient would also be an inappropriate index, since highly heterogeneous populations artificially produce a high reliability index. Therefore, the WSCV should be used as the index of reproducibility. In addition, the methodology presented in this paper overcomes the difficulty noted by Tan et al. [14], who state that "Dependence between the datasets would confound any inferences we could make about the differences in correlations. ... determination whether differences in correlation were statistically significant could not be made". We have used the within-gene coefficient of variation as the measure of reproducibility of a specific platform, so a comparison across platforms leads naturally to a comparison of two dependent within-subject coefficients of variation.

Two further issues deserve discussion. The first concerns the nature of the data to be analyzed; the second concerns situations in which the assumed model generating the data deviates from the normal distribution.

First, a frequently occurring question in the planning of biomedical investigations is whether to measure the response or trait of interest on a continuous scale (e.g. gene expression, systolic blood pressure) or a dichotomous scale (e.g. highly expressed vs. lowly expressed genes, hypertensive vs. normotensive). In the case of two measuring devices and two dichotomous responses, the most commonly used measure of test-retest reliability or agreement is the kappa coefficient introduced in [35]. Donner and Eliasziw [36], and more recently Shoukri and Donner [37], cautioned against dichotomizing traits measured on a continuous scale: they demonstrated that the resulting loss of efficiency in estimating the reliability coefficient can be severe. The conclusion is that for naturally dichotomous traits (e.g. affected vs. not affected) one can use kappa to assess test-retest reliability, while for continuous traits the methods presented in this paper are more appropriate.

Second, it should be noted that the inference procedures discussed in this paper (except the PM test) are likelihood based, and their statistical properties may not hold in small samples because the exact sampling distributions of the test statistics are unknown. Alternatively, one may use the bootstrap to estimate the sampling distributions of the test statistics. When the data are hierarchical in nature with variance-covariance matrix Σ as in (2), one may use a model-based approach to generate bootstrap samples [38]: subjects are sampled with replacement, the within-subject coefficients of variation are estimated on each bootstrap sample, and their empirical sampling distributions are obtained. There is already a rich class of bootstrap methods for clustered data in the literature, but detailed theoretical results on their properties are lacking [39]. Gaining insight into bootstrapping clustered data for all of these methods, and comparing them with our proposed likelihood-based approach, warrants serious investigation and is beyond the scope of this paper.
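A minimal sketch of the subject-level (cluster) bootstrap just described is given below. It reuses the illustrative `wscv_hat` estimator sketched in the Applications section and is offered only as a starting point, not as a validated alternative to the likelihood-based tests; `B` denotes the number of bootstrap resamples.

```matlab
% Minimal sketch of a cluster bootstrap for the difference of two dependent
% within-subject CVs: whole subjects are resampled with replacement.
function [d_hat, ci] = boot_wscv_diff(Y1, Y2, B)
    n = size(Y1, 1);
    d_hat  = wscv_hat(Y1) - wscv_hat(Y2);      % observed difference
    d_boot = zeros(B, 1);
    for b = 1:B
        idx = randi(n, n, 1);                  % resample subjects (clusters)
        d_boot(b) = wscv_hat(Y1(idx, :)) - wscv_hat(Y2(idx, :));
    end
    d_sorted = sort(d_boot);                   % 95% percentile bootstrap interval
    ci = [d_sorted(max(1, floor(0.025 * B))), d_sorted(ceil(0.975 * B))];
end
```

A percentile interval for the difference that excludes zero would then suggest a genuine difference in reproducibility, although, as noted above, the theoretical properties of such intervals for clustered data remain largely unexplored [39].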

Conclusion

Comparison of the reproducibility or reliability of measurement devices or methods on the same set of subjects comes down to a comparison of dependent reliability or reproducibility parameters. Testing the equality of two dependent WSCVs has not previously been dealt with in the statistical literature. The presented methodology overcomes the difficulty noted by data analysts that dependence, when ignored, confounds inference on measures of reliability or reproducibility. It should also be emphasized that when comparing reliability indices across platforms, the ICC is not an appropriate measure: because its magnitude depends on the degree of heterogeneity among the subjects, a highly heterogeneous sample (such as the genes in our example) inflates it. We therefore recommend the WSCV in similar settings.

The LRT and WT procedures presented in Section 2 may also be extended in a straightforward manner to compare more than two platforms (methods, labs, or measurement devices). A further advantage of the LRT in this context is that it may easily be extended to deal with the case of an unequal number of replicates for each platform.

The code we developed (in MATLAB) can be used to perform power calculations when planning a reproducibility study that compares two methods (or devices), and is available from the authors on request.

APPENDIX

Elements of Fisher's information matrix (m1 = m2 = m)

$$
\begin{aligned}
i_{33} &= E\!\left(\frac{\partial^{2} l}{\partial \sigma_1^{2}\,\partial \sigma_1^{2}}\right) = \frac{n}{4\sigma_1^{4}}\left[2m + \frac{m^{2}}{w}\,\rho_{12}^{2}\right], \\
i_{34} &= E\!\left(\frac{\partial^{2} l}{\partial \sigma_1^{2}\,\partial \sigma_2^{2}}\right) = -\frac{n m^{2}}{4\sigma_1^{2}\sigma_2^{2} w}\,\rho_{12}^{2}, \\
i_{35} &= E\!\left(\frac{\partial^{2} l}{\partial \sigma_1^{2}\,\partial \rho_1}\right) = -\frac{n(m-1)}{2\sigma_1^{2}}\left[\frac{1}{1-\rho_1} - \frac{1+(m-1)\rho_2}{w}\right], \\
i_{36} &= i_{45} = 0, \\
i_{37} &= E\!\left(\frac{\partial^{2} l}{\partial \sigma_1^{2}\,\partial \rho_{12}}\right) = -\frac{n m^{2}}{2 w \sigma_1^{2}}\,\rho_{12}, \\
i_{44} &= E\!\left(\frac{\partial^{2} l}{\partial \sigma_2^{2}\,\partial \sigma_2^{2}}\right) = \frac{n}{4\sigma_2^{4}}\left[2m + \frac{m^{2}}{w}\,\rho_{12}^{2}\right], \\
i_{46} &= E\!\left(\frac{\partial^{2} l}{\partial \sigma_2^{2}\,\partial \rho_2}\right) = -\frac{n(m-1)}{2\sigma_2^{2}}\left[\frac{1}{1-\rho_2} - \frac{1+(m-1)\rho_1}{w}\right], \\
i_{47} &= E\!\left(\frac{\partial^{2} l}{\partial \sigma_2^{2}\,\partial \rho_{12}}\right) = -\frac{n m^{2}\rho_{12}}{2 w \sigma_2^{2}}, \\
i_{55} &= E\!\left(\frac{\partial^{2} l}{\partial \rho_1^{2}}\right) = \frac{n(m-1)}{2}\left[\frac{1}{(1-\rho_1)^{2}} + \frac{(m-1)\bigl(1+(m-1)\rho_2\bigr)^{2}}{w^{2}}\right], \\
i_{56} &= E\!\left(\frac{\partial^{2} l}{\partial \rho_1\,\partial \rho_2}\right) = \frac{n(m-1)^{2} m^{2}\rho_{12}^{2}}{2 w^{2}}, \\
i_{57} &= E\!\left(\frac{\partial^{2} l}{\partial \rho_1\,\partial \rho_{12}}\right) = -\frac{n(m-1) m^{2}\rho_{12}}{w}\bigl[1+(m-1)\rho_2\bigr], \\
i_{66} &= E\!\left(\frac{\partial^{2} l}{\partial \rho_2^{2}}\right) = \frac{n(m-1)}{2}\left[\frac{1}{(1-\rho_2)^{2}} + \frac{(m-1)\bigl(1+(m-1)\rho_1\bigr)^{2}}{w^{2}}\right], \\
i_{67} &= E\!\left(\frac{\partial^{2} l}{\partial \rho_2\,\partial \rho_{12}}\right) = -\frac{n(m-1) m^{2}\rho_{12}}{w}\bigl[1+(m-1)\rho_1\bigr], \\
i_{77} &= E\!\left(\frac{\partial^{2} l}{\partial \rho_{12}^{2}}\right) = \frac{n m^{2}}{w}\left[1 + \frac{2\rho_{12}^{2} m^{2}}{w^{2}}\right].
\end{aligned}
$$

The matrix $I_{22}$ is therefore given by (symmetric; entries below the diagonal are omitted):

$$
I_{22} = \begin{bmatrix}
i_{33} & i_{34} & i_{35} & 0 & i_{37} \\
 & i_{44} & 0 & i_{46} & i_{47} \\
 & & i_{55} & i_{56} & i_{57} \\
 & & & i_{66} & i_{67} \\
 & & & & i_{77}
\end{bmatrix}
$$
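For completeness, the sketch below assembles $I_{22}$ numerically from these expressions for m1 = m2 = m. The quantity w, which is defined earlier in the paper in terms of ρ1, ρ2, ρ12, and m, is passed in as an argument rather than recomputed, and the function name is ours, not the authors'.

```matlab
% Minimal sketch assembling the 5-by-5 block I22 from the appendix formulas.
% s1sq and s2sq denote sigma_1^2 and sigma_2^2; w is supplied by the caller.
function I22 = fisher_I22(n, m, s1sq, s2sq, rho1, rho2, rho12, w)
    i33 = n/(4*s1sq^2) * (2*m + m^2*rho12^2/w);
    i34 = -n*m^2*rho12^2 / (4*s1sq*s2sq*w);
    i35 = -n*(m-1)/(2*s1sq) * (1/(1-rho1) - (1+(m-1)*rho2)/w);
    i37 = -n*m^2*rho12 / (2*w*s1sq);
    i44 = n/(4*s2sq^2) * (2*m + m^2*rho12^2/w);
    i46 = -n*(m-1)/(2*s2sq) * (1/(1-rho2) - (1+(m-1)*rho1)/w);
    i47 = -n*m^2*rho12 / (2*w*s2sq);
    i55 = n*(m-1)/2 * (1/(1-rho1)^2 + (m-1)*(1+(m-1)*rho2)^2/w^2);
    i56 = n*(m-1)^2*m^2*rho12^2 / (2*w^2);
    i57 = -n*(m-1)*m^2*rho12*(1+(m-1)*rho2) / w;
    i66 = n*(m-1)/2 * (1/(1-rho2)^2 + (m-1)*(1+(m-1)*rho1)^2/w^2);
    i67 = -n*(m-1)*m^2*rho12*(1+(m-1)*rho1) / w;
    i77 = n*m^2/w * (1 + 2*rho12^2*m^2/w^2);
    U = [i33 i34 i35 0   i37;
         0   i44 0   i46 i47;
         0   0   i55 i56 i57;
         0   0   0   i66 i67;
         0   0   0   0   i77];
    I22 = U + triu(U, 1)';   % fill the lower triangle by symmetry
end
```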

References

  1. Pitman EJG: A note on normal correlation. Biometrika. 1939, 31: 9-12.

  2. Morgan W: A test for the significance of the difference between two variances in a sample from a bivariate population. Biometrika. 1939, 31: 13-19.

  3. Gupta RC, Ma S: Testing the equality of coefficients of variation in k normal populations. Communications in Statistics - Theory and Methods. 1996, 25: 115-132. 10.1080/03610929608831683.

  4. Fung WK, Tsang TS: A simulation study comparing tests for the equality of coefficients of variation. Statistics in Medicine. 1998, 17: 2003-2014. 10.1002/(SICI)1097-0258(19980915)17:17<2003::AID-SIM889>3.0.CO;2-I.

  5. Tian L: Inferences on the common coefficient of variation. Stat Med. 2005, 24 (14): 2213-2220. 10.1002/sim.2088.

  6. Weerahandi S: Exact statistical methods for data analysis. 1995, Springer: New York

  7. Quan H, Shih W: Response to Letter to the Editor. Biometrics. 2000, 56: 301-303. 10.1111/j.0006-341X.2000.00301.x.

  8. Quan H, Shih W: Assessing reproducibility by the within-subject coefficient of variation with random effects models. Biometrics. 1996, 52: 1195-1203. 10.2307/2532835.

  9. Giraudeau B, Ravaud P, Chastang C: Comments on Quan and Shih's Assessing Reproducibility by the Within-Subject Coefficient of Variation With Random Effects Models. Biometrics. 2000, 56: 301-303. 10.1111/j.0006-341X.2000.00301.x.

  10. Atkinson G, Neville A: Comment on the use of concordance correlation to assess the agreement between two variables. Biometrics. 1997, 53 (2): 775-777.

  11. Lin LI, Chinchilli V: Rejoinder to the letter to the Editor from Atkinson and Neville. Biometrics. 1997, 53 (2): 777-778. 10.2307/2533947.

  12. Shi L, Tong W, Fang H, Scherf U, Han J, Puri RK, Frueh FW, Goodsaid FM, Guo L, Su Z, Han T, Fuscoe JC, Xu ZA, Patterson TA, Hong H, Xie Q, Perkins RG, Chen JJ, Casciano DA: Cross-platform comparability of microarray technology: intra-platform consistency and appropriate data analysis procedures are essential. BMC Bioinformatics. 2005, 6 (Suppl 2): S12-10.1186/1471-2105-6-S2-S12.

  13. Irizarry RA, Warren D, Spencer F, Kim IF, Biswal S, Frank BC, Gabrielson E, Garcia JGN, Geoghegan J, Germino G, Griffin C, Hilmer SC, Hoffman E, Jedlicka AE, Kawasaki E, Martínez-Murillo F, Morsberger L, Lee H, Petersen D, Quackenbush J, Scott A, Wilson M, Yang Y, Ye SQ, Yu W: Multiple-laboratory comparison of microarray platforms. Nature Methods. 2005, 2 (5): 345-350. 10.1038/nmeth756.

  14. Tan PK, Downey TJ, Spitznagel EL, Xu P, Fu D, Dmitrov DS, Lempicki RA, Raaka BM, Cam MC: Evaluation of gene expression measurements from commercial microarray platforms. Nucleic Acids Res. 2003, 31: 5676-5684. 10.1093/nar/gkg763.

  15. Kuo WP, Jenssen TK, Butte AJ, Ohno-Machado L, Kohane IS: Analysis of matched mRNA measurements from two different microarray technologies. Bioinformatics. 2002, 18: 405-412. 10.1093/bioinformatics/18.3.405.

  16. Yauk CL, Berndt ML, Williams A, Douglas GR: Comprehensive comparison of six microarray technologies. Nucleic Acids Res. 2004, 32 (15): e124. 10.1093/nar/gnh123.

  17. Jarvinen A-K, Hautaniemi S, Edgren H, Auvinen P, Saarela J, Kallioniemi OP, Monni O: Are data from different gene expression microarray platforms comparable?. Genomics. 2004, 83: 1164-1168. 10.1016/j.ygeno.2004.01.004.

  18. Wang H, He X, Band M, Wilson C, Liu L: A study of inter-lab and inter-platform agreement of DNA microarray data. BMC Genomics. 2005, 6: 71-10.1186/1471-2164-6-71.

  19. Lin L: A concordance correlation coefficient to evaluate reproducibility. Biometrics. 1989, 45: 255-268. 10.2307/2532051.

  20. Dunn G: Design and Analysis of Reliability Studies. Statistical Methods in Medical Research. 1992, 1: 123-157. 10.1177/096228029200100202.

  21. Donner A, Zou G: Testing the equality of dependent intraclass correlation coefficients. The Statistician. 2002, 51 (part 3): 367-379.

  22. Shoukri M, El-Kum N, Walter SD: Interval estimation and optimal design for the within-subject coefficient of variation for continuous and binary variables. BMC Medical Research Methodology. 2006, 6: 24. 10.1186/1471-2288-6-24.

  23. Fleiss J: The Design and Analysis of Clinical Experiments. 1986, J Wiley, New York

  24. Donner A, Bull S: Inferences concerning a common intraclass correlation coefficient. Biometrics. 1983, 39: 771-775. 10.2307/2531107.

  25. Bilodeau M, Brenner D: Theory of Multivariate Statistics. 1999, New York: Springer

  26. Searle SR, Casella G, McCulloch CE: Variance Components. 1992, Wiley-Interscience

  27. Stuart A, Ord K: Advanced Theory of Statistics. 1987, London: Griffin, 1: 324-5

  28. Cox DR, Hinkley DV: Theoretical Statistics. 1974, Chapman and Hall: London

  29. Neyman J, Scott E: On the use of C(α) optimal tests of composite hypotheses. Bulletin of the International Statistical Institute, Proceedings of the 35th Session. 1966, 41: 477-497.

  30. Neyman J: Optimal asymptotic tests of composite hypotheses. Probability and Statistics: The Harald Cramér Volume. Edited by: Grenander U. 1959, Wiley: New York, 213-234.

  31. Bradley E, Blackwood L: Comparing paired data: A simultaneous test for means and variances. The American Statistician. 1989, 43: 234-235. 10.2307/2685368.

  32. Draper N, Smith H: Applied Regression Analysis. 1981, Wiley-Interscience, 2nd edition

  33. Landis R, Koch G: The measurement of observer agreement for categorical data. Biometrics. 1977, 33: 159-174. 10.2307/2529310.

  34. Turner SW, Toone BK, Brett-Jones JR: Computerized tomographic scan in early schizophrenia- preliminary findings. Psychological Medicine. 1986, 16: 219-225.

  35. Cohen J: A coefficient of agreement for nominal scales. Educational and Psychological Measurement. 1960, 20: 27-46.

  36. Donner A, Eliasziw M: Statistical implications for the choice between a dichotomous or continuous trait in studies of inter-observer agreement. Biometrics. 1994, 50: 550-777. 10.2307/2533400.

  37. Shoukri MM, Donner A: Efficiency considerations in the analysis of inter-observer agreement. Biostatistics. 2001, 2 (3): 323-336. 10.1093/biostatistics/2.3.323.

  38. Davison AC, Hinkley D: Bootstrap Methods and Their Application. 1997, Cambridge: Cambridge University Press

  39. Ukoumunne OC, Davison AC, Gulliford MC, Chinn S: Non-parametric bootstrap confidence intervals for the intra-class correlation coefficient. Statistics in Medicine. 2003, 22: 3805-3821. 10.1002/sim.1643.


Acknowledgements

The first three authors would like to thank the research centre administration of the King Faisal Specialist Hospital and Research Centre for their support. Dr. Donner acknowledges the support made to his research by The Natural Sciences and Engineering Research Council of Canada (NSERC).

Author information

Corresponding author

Correspondence to Mohamed M Shoukri.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

MMS conceived of the study problem and derived the analytical results. DC conducted the simulations and analyzed the data. All authors contributed to the writing of the manuscript, and approved its final format.

Mohamed M Shoukri, Dilek Colak contributed equally to this work.

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

About this article

Cite this article

Shoukri, M.M., Colak, D., Kaya, N. et al. Comparison of two dependent within subject coefficients of variation to evaluate the reproducibility of measurement devices. BMC Med Res Methodol 8, 24 (2008). https://doi.org/10.1186/1471-2288-8-24
