Department Xenética, CIBUS Campus Sur, Universidade de Santiago de Compostela, Santiago de Compostela, Galicia 15782, Spain

Abstract

Background

Comparing the covariation patterns of populations or species is a basic step in the evolutionary analysis of quantitative traits. Here I propose a new, simple method to make this comparison in two population samples that is based on comparing the variance explained in each sample by the eigenvectors of its own covariance matrix with that explained by the covariance matrix eigenvectors of the other sample. The rationale of this procedure is that the matrix eigenvectors of two similar samples would explain similar amounts of variance in the two samples. I use computer simulation and morphological covariance matrices from the two morphs in a marine snail hybrid zone to show how the proposed procedure can be used to measure the contribution of the matrices' orientation and shape to the overall differentiation.

Results

I show how this procedure can detect even modest differences between matrices calculated with moderately sized samples, and how it can be used as the basis for more detailed analyses of the nature of these differences.

Conclusions

The new procedure constitutes a useful resource for the comparison of covariance matrices. It could fill the gap between procedures resulting in a single, overall measure of differentiation and analytical methods based on multiple model comparison that do not provide such a measure.

Background

Covariance matrices are key tools in the study of the genetics and evolution of quantitative traits. The **G** matrix, containing the additive genetic variances and covariances for a set of characters, summarizes the genetic architecture of traits and determines their short-term response to multivariate selection, along with the constraints this response will face. The more easily estimated matrix of phenotypic variances and covariances, **P**, can be used as a surrogate for **G**, especially in the case of high-heritability morphological characters. Much attention has also been paid to the behaviour of **G** in the presence of selection and drift.

Several methods for the comparison of covariance matrices are available and have been reviewed elsewhere.

Other, simpler procedures provide single overall measures of matrix differentiation.

Among the limitations of CPCA are, first, that it is based on the assumption of multivariate normality, and second, that it results in categorical rather than continuously varying measures of matrix similarity.

In the present work I propose a new, simple and distribution-free procedure for the exploration of differences between covariance matrices that, in addition to providing a single, continuously varying measure of matrix differentiation, makes it possible to analyse this measure in terms of the contributions of differences in matrix orientation and shape. I use both computer simulation and **P** matrices corresponding to snail morphological measures to compare this procedure with some widely used alternatives. I show that the new procedure has power similar to or better than that of the simpler methods, and how it can be used as the basis for more detailed analyses of the nature of the differences found.

Pairwise matrix comparison

The rationale for the comparison procedure is that, when the covariance matrices of two data samples are similar, the eigenvectors obtained in a principal component analysis of either of them will explain similar amounts of variation in both samples. The degree of similarity can be measured by calculating in each sample the individuals' values and variances for the eigenvectors obtained in the other sample. Given that **D**_{1} and **D**_{2} are the matrices with the characters' measures in the two samples, and **X**_{1} and **X**_{2} the matrices containing in their columns the eigenvectors of these samples' covariance matrices, the variances of the columns of the products **D**_{1}**X**_{1} and **D**_{2}**X**_{2} are the corresponding eigenvalues, i.e., the amounts of variance explained by the original eigenvectors, and those of **D**_{1}**X**_{2} and **D**_{2}**X**_{1}, the amounts of variance explained by the eigenvectors from the compared sample. Thus, for each of the n eigenvectors we obtain four values, v_{i11}, v_{i12}, v_{i21} and v_{i22}, where v_{i11} is the amount of sample 1 total variance explained by eigenvector i from sample 1, v_{i12} the amount of sample 1 total variance explained when applying eigenvector i from sample 2, and so on. These n sets of four values are the basic items to measure the similarity in covariance between samples. I define three sums:

S1 = 2 Σ_{i} [(v_{i11} − v_{i21})² + (v_{i22} − v_{i12})²]

S2 = Σ_{i} [(v_{i11} + v_{i22}) − (v_{i12} + v_{i21})]²

S3 = Σ_{i} [(v_{i11} + v_{i12}) − (v_{i21} + v_{i22})]²

where S1 is a general measure of differentiation depending on the ability of the eigenvectors from each sample to explain the variation in the other sample; S2 is a measure of the contribution to S1 of between-matrix differences in orientation (i.e., differences in orientation between eigenvectors in the same ordinal position in the two matrices), and S3 that of differences in shape (i.e., differences in the proportion of total variance explained by eigenvectors in the same ordinal position in the two matrices). It can be shown (Appendix 1) that S1 = S2 + S3. The figure below illustrates the behaviour of the three sums in six hypothetical two-variable situations.

**Contributions (S1_{1}, S2_{1}, S3_{1}) of the first eigenvectors of two sample matrices to the three sums used to measure the differentiation between these matrices in six hypothetical two-variable situations differing in matrices' shape and orientation.** The ellipse axes' lengths in the graphics represent the magnitude of the eigenvalues and the orientation of the eigenvectors in the two samples. The straight lines mark the first eigenvectors. The tables in the middle column contain the variances explained by the first eigenvectors obtained in each sample when calculated in the two data sets. Details about the generation of the matrices used are given in Appendix 2.

The S statistics are easier to compare between studies if they are made to vary between zero and one by expressing them relative to their maximum possible value. This maximum occurs in the extreme situation in which a single eigenvector explains all the variation in each of the compared samples and the eigenvectors of the two samples are orthogonal. In that case, S1 = 2(2V_{1}² + 2V_{2}²), where V_{1} and V_{2} are the total variances of the two samples. When the variances explained by each eigenvector are expressed as proportions of the total variance, so that they sum to one within each sample, the maximum possible value of S1 is eight. In the computer simulations and the real data example shown in this article, the explained variances are expressed as proportions, and the S1, S2 and S3 statistics are divided by eight so that they vary between zero and one.
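As a concrete sketch, the v_{ijk} values and the normalized S statistics can be computed in a few lines of numpy. This is my own illustrative implementation of the procedure as described (proportions of total variance, scaling by the maximum of eight; function and variable names are mine):

```python
import numpy as np

def s_statistics(D1, D2):
    """S1, S2 and S3 for two data matrices (rows = individuals,
    columns = traits), using proportions of total variance and the
    0-1 scaling described in the text."""
    # Columns of X1/X2: eigenvectors of each sample's covariance
    # matrix, reordered from largest to smallest eigenvalue.
    X1 = np.linalg.eigh(np.cov(D1, rowvar=False))[1][:, ::-1]
    X2 = np.linalg.eigh(np.cov(D2, rowvar=False))[1][:, ::-1]
    # v[(j, k)][i]: proportion of sample j's variance explained by
    # eigenvector i of sample k (the v_ijk of the text).
    v = {}
    for j, D in ((1, D1), (2, D2)):
        total = np.trace(np.cov(D, rowvar=False))
        for k, X in ((1, X1), (2, X2)):
            v[j, k] = np.var(D @ X, axis=0, ddof=1) / total
    s1 = 2 * np.sum((v[1, 1] - v[2, 1]) ** 2 + (v[2, 2] - v[1, 2]) ** 2)
    s2 = np.sum(((v[1, 1] + v[2, 2]) - (v[1, 2] + v[2, 1])) ** 2)
    s3 = np.sum(((v[1, 1] + v[1, 2]) - (v[2, 1] + v[2, 2])) ** 2)
    return s1 / 8, s2 / 8, s3 / 8
```

By construction the returned values satisfy S1 = S2 + S3, and two samples with identical covariance structure give values near zero.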

Figure

**S1, S2 and S3 statistics values (y axis; black, grey and white points respectively; they are slightly displaced for clarity) in five matrix-shape differences and six relative orientations of the matrices' first eigenvectors, from zero to 90° (x axis).** **A**) two matrices with very different shape, one with eigenvalues equal to 95 and 5% of total variance and the other with eigenvalues equal to 55 and 45% of total variance; **B**) two same-shape "elongate" matrices, both with eigenvalues explaining 95 and 5% of total variance; **C**) two same-shape "rounded" matrices, both with eigenvalues explaining 55 and 45% of total variance; **D**) two "elongate" matrices with slightly different shapes, one with eigenvalues explaining 95 and 5% of total variance and the other, 90 and 10% of total variance; **E**) two "rounded" matrices with slightly different shapes, one with eigenvalues explaining 60 and 40% of total variance and the other, 55 and 45% of total variance. Matrices are schematically represented at left in a zero degrees relative orientation, with ellipses' axes equal to the matrices' eigenvalues. Note that the scale varies between plots.

Results

I contrasted the results obtained with the proposed procedure with those from other widely used ones, namely CPCA and two simpler procedures providing single measures of matrix differentiation: one, the Random Skewers, based on products with test vectors, and the other, the T method, based on the comparison of matrix elements (see Methods). I followed two approaches. First, I studied the procedures' power and Type I error through computer simulations that considered covariance matrices differing in shape, orientation or both, in different numbers of variables and sample size situations. Second, I compared their ability to detect differences between covariance matrices of shell measures from different morphs and populations of the seashore snail *Littorina saxatilis*.

Computer simulation

Figure

**Examples of population samples used in the simulations (two variables case, size = 100): (a) from the reference population, (b) from the population resulting in a covariance matrix with changed orientation, (c) from the population resulting in a covariance matrix with changed shape, and (d) from the population resulting in a covariance matrix with both orientation and shape changed.** Each axis in the graphs corresponds to one of the two variables.

**Proportions (from 0 to 100%; the lower and upper dotted lines mark the 5 and 95% levels respectively) of simulation replicates in which a difference between covariance matrices was found by the S1, S2, S3 (black, grey and white circles), RS (rhombs) and T method (squares) in comparisons involving matrices of samples taken from the same reference population (reference), or one from the reference population and another from a population resulting in matrices with altered orientation (orientation), altered shape (shape), or both (orientation + shape), in situations involving 2, 4 or 7 variables and sample sizes of 25, 50 or 100 individuals.** The sign positions in each sample size were slightly displaced to improve clarity. The top of each graph shows the results of CPC-based comparisons of two samples taken at random from each of the two populations considered in that graph (E: equal; P: proportional; C: CPC result, meaning that all eigenvectors were common but the matrices were not proportional, i.e., same orientation but differences in shape; U: unrelated matrices). Note: the CPC program considers the possibility that only a subset of eigenvectors is in common, but that result was never found in these simulations.

Figure

The eigenvectors and eigenvalues of the six samples' covariance matrices are shown in the table below.

| Sample | EG1 | EG2 | EG3 | EG4 | EG5 | EG6 | EG7 |
|---|---|---|---|---|---|---|---|
| Rb loc 1 | 0.349 | 0.178 | 0.046 | 0.428 | −0.049 | −0.596 | 0.551 |
|  | 0.297 | 0.464 | 0.231 | 0.492 | 0.139 | 0.614 | −0.064 |
|  | 0.323 | 0.350 | 0.209 | −0.711 | 0.396 | 0.008 | 0.260 |
|  | 0.365 | 0.010 | −0.033 | −0.260 | −0.870 | 0.170 | 0.047 |
|  | 0.427 | −0.068 | −0.867 | 0.011 | 0.219 | 0.107 | −0.049 |
|  | 0.499 | −0.765 | 0.354 | 0.041 | 0.127 | 0.146 | 0.039 |
|  | 0.347 | 0.176 | 0.152 | 0.011 | 0.026 | −0.454 | −0.790 |
| % variance | **85.48** | **11.14** | **2.48** | **0.42** | **0.30** | **0.11** | **0.07** |
| Rb loc 2 | 0.349 | 0.125 | 0.133 | 0.432 | −0.335 | −0.568 | 0.473 |
|  | 0.306 | 0.305 | 0.323 | 0.464 | 0.235 | 0.648 | 0.122 |
|  | 0.314 | 0.226 | 0.339 | −0.567 | 0.551 | −0.259 | 0.208 |
|  | 0.377 | 0.124 | 0.137 | −0.490 | −0.700 | 0.304 | −0.032 |
|  | 0.447 | 0.250 | −0.843 | −0.020 | 0.154 | 0.050 | 0.027 |
|  | 0.471 | −0.865 | 0.023 | 0.027 | 0.125 | 0.098 | 0.058 |
|  | 0.357 | 0.185 | 0.127 | 0.005 | −0.001 | −0.748 | −0.512 |
| % variance | **84.24** | **11.00** | **3.66** | **0.50** | **0.29** | **0.20** | **0.11** |
| Rb loc 3 | 0.362 | 0.143 | 0.124 | 0.481 | −0.094 | −0.599 | 0.483 |
|  | 0.306 | 0.411 | 0.334 | 0.406 | 0.020 | 0.678 | 0.004 |
|  | 0.317 | 0.284 | 0.209 | −0.616 | 0.567 | −0.068 | 0.264 |
|  | 0.363 | 0.099 | 0.060 | −0.464 | −0.797 | 0.048 | 0.049 |
|  | 0.417 | 0.131 | −0.881 | 0.062 | 0.111 | 0.126 | −0.008 |
|  | 0.489 | −0.828 | 0.152 | 0.037 | 0.124 | 0.180 | 0.050 |
|  | 0.360 | 0.129 | 0.161 | 0.060 | 0.085 | −0.354 | −0.832 |
| % variance | **83.44** | **12.85** | **2.20** | **0.93** | **0.40** | **0.13** | **0.05** |
| Su loc 1 | 0.325 | 0.176 | 0.147 | 0.272 | 0.196 | −0.375 | 0.767 |
|  | 0.245 | 0.291 | 0.395 | 0.414 | 0.379 | 0.585 | −0.204 |
|  | 0.288 | 0.309 | 0.346 | −0.198 | −0.788 | 0.178 | 0.100 |
|  | 0.360 | 0.120 | 0.063 | −0.817 | 0.426 | 0.055 | 0.016 |
|  | 0.493 | −0.838 | 0.200 | 0.076 | −0.071 | 0.056 | −0.037 |
|  | 0.523 | 0.154 | −0.796 | 0.152 | −0.099 | 0.184 | −0.043 |
|  | 0.324 | 0.221 | 0.158 | 0.138 | 0.014 | −0.668 | −0.597 |
| % variance | **82.97** | **10.07** | **5.76** | **0.62** | **0.40** | **0.11** | **0.06** |
| Su loc 2 | 0.343 | 0.101 | 0.186 | 0.305 | 0.2812 | −0.529 | 0.620 |
|  | 0.277 | 0.128 | 0.474 | 0.356 | 0.224 | 0.711 | 0.014 |
|  | 0.281 | 0.181 | 0.365 | −0.119 | −0.853 | −0.059 | 0.100 |
|  | 0.365 | 0.108 | 0.145 | −0.848 | 0.327 | 0.062 | 0.060 |
|  | 0.469 | −0.866 | −0.126 | 0.039 | −0.094 | 0.066 | 0.000 |
|  | 0.506 | 0.408 | −0.723 | 0.125 | −0.086 | 0.176 | −0.001 |
|  | 0.343 | 0.114 | 0.217 | 0.170 | 0.144 | −0.414 | −0.775 |
| % variance | **76.48** | **15.88** | **6.06** | **0.80** | **0.54** | **0.17** | **0.08** |
| Su loc 3 | 0.338 | 0.124 | 0.204 | 0.308 | −0.222 | −0.378 | −0.736 |
|  | 0.255 | 0.192 | 0.451 | 0.459 | −0.202 | 0.634 | 0.202 |
|  | 0.282 | 0.200 | 0.356 | −0.344 | 0.767 | 0.124 | −0.176 |
|  | 0.322 | 0.126 | 0.152 | −0.737 | −0.555 | 0.070 | 0.035 |
|  | 0.425 | −0.899 | 0.073 | 0.015 | 0.051 | 0.044 | 0.031 |
|  | 0.594 | 0.237 | −0.739 | 0.097 | 0.089 | 0.161 | 0.038 |
|  | 0.322 | 0.166 | 0.231 | 0.153 | 0.044 | −0.638 | 0.618 |
| % variance | **87.32** | **6.74** | **5.15** | **0.38** | **0.27** | **0.08** | **0.06** |

The rows within each sample's block show each eigenvector's coefficients for the seven variables measured in that sample; the final row of each block (bold) gives the corresponding eigenvalues as percentages of the total variance in the sample.

**CPC results and bootstrap distributions for five statistics used to compare *Littorina saxatilis* data covariance matrices in comparisons within sample (grey-lined boxplots; 1 to 3, Rbs; 4 to 6, Sus), between locations within morph (black-lined boxplots; 7 to 9, between Rbs; 10 to 12, between Sus) and between morphs (grey-filled boxplots; 13 to 18, between morphs of different locations; 19 to 21, between morphs of the same location).** The T% values were divided by 100 to make them comparable with the other statistics. The CPC box shows the number of common principal components; U: unrelated. No CPC analysis was done for the comparisons within samples. Plots do not include outliers. Circles mark the observed values for the statistics. No observed values are printed in the case of within-sample comparisons (i.e., of matrices with themselves) because they were always equal to one for the RS and equal to zero for the other statistics. Comparison codes: 1, Rb1-Rb1; 2, Rb2-Rb2; 3, Rb3-Rb3; 4, Su1-Su1; 5, Su2-Su2; 6, Su3-Su3; 7, Rb1-Rb2; 8, Rb1-Rb3; 9, Rb2-Rb3; 10, Su1-Su2; 11, Su1-Su3; 12, Su2-Su3; 13, Rb1-Su2; 14, Rb2-Su1; 15, Rb1-Su3; 16, Rb3-Su1; 17, Rb2-Su3; 18, Rb3-Su2; 19, Rb1-Su1; 20, Rb2-Su2; 21, Rb3-Su3.

At least in the particular example analysed here, differences related with matrix shape had the largest weight in the overall measure of differentiation S1, as the comparison results profiles of S1 and S3 were the most similar. The statistic S3 found large differences both between morphs and within the Su morph. The largest differences for S3 always corresponded to comparisons involving the matrix of the Sus from location 2, i.e., comparisons 10, 12, 13, 18 and 20 in the figure above.

**Representation of the contribution (vertical axes) of each of the seven eigenvector pairs (1 to 7 from left to right in the horizontal axis) to the S1 (black points), S2 (solid lines) and S3 (dashed lines) statistics in each comparison between samples.** All graphs are drawn to the same scale (minimum 0, maximum 0.0058) to ease comparison.

The statistic S2 found the most striking contrast between kinds of comparisons, which can be interpreted through the proportions of variance explained by the reciprocal eigenvectors (the v_{i12} and v_{i21} of the S expressions, see above). It can be seen that the differentiation between the Rb and Su matrices is related to a reversal in the variances explained by the second and third eigenvectors. In both morphs, the third eigenvector from the reciprocal morph explains more variation than the second reciprocal eigenvector. The figure shows also that the lowest differentiation was in comparisons involving sample Su3. Again, inspection of the eigenvector table above helps to interpret this result.

**Proportions of the total variance (vertical axis; log-transformed for clarity of representation) of each sample explained by the eigenvectors (1 to 7 from left to right in the horizontal axis) obtained in the analysis of the reciprocal sample in each between-samples comparison.** The grey circles mark the increases in variance explained by higher order reciprocal eigenvectors. The asterisks correspond to bootstrap tests of the change in proportion of variance explained (average of the two reciprocal comparisons) by successive eigenvectors. They mark changes in which the 97.5 percentile of the bootstrapped distribution was negative (i.e., the third reciprocal eigenvector explained more variance than the second).

The overall agreement between CPCA and the other procedures found in the simulations was lost in the analysis of the *Littorina saxatilis* data.

Discussion

The S statistics constitute sensitive tools for the detection of differences between covariance matrices, as both the simulations and the *Littorina saxatilis* example show.

The computer simulations shown in Figure

Since the S statistics introduced here simply measure what proportion of variation exists in a given sample along the axes of variation defined by the eigenvectors of the compared sample, they are similar to the RS and T% statistics in that they neither compare nor depend on the matrices' sizes. They focus instead on the more interesting differences in matrix shape and orientation. In any case, S statistics-based comparisons could use raw covariance components instead of proportions as in the example shown, so that the results would depend on between-matrix size differences. However, in that case the S statistics would not be able to separate the effect of size from those of other sources of differentiation between matrices. Similarly, the basic version of the T method compares raw matrix elements, so that its results also reflect differences in matrix size.

Calculating the amount of variance explained by a set of eigenvectors in a given dataset is straightforward in the case of datasets containing the phenotypic measures used to obtain **P** matrices. In the case of **G** matrices, the comparison would have to be based on additive genetic value estimates for individuals or families.

Since the proposed procedure is limited to two-sample comparisons, it cannot be used to make higher order analyses of the divergence among several populations. The correlations among statistics in the case of divergence in matrix orientation included a non-significant value; in the case of divergence in matrix shape they were 0.568, 0.281 and 0.554; and in the case of divergence in both orientation and shape, 0.454, 0.572 and 0.341 (all correlations P < 0.001 except when indicated).

The S1 statistic is not dependent on the eigenvectors' ordering per se, because it is based on comparisons within eigenvector, i.e., on the difference between the amounts of variance explained by one eigenvector from one sample in the original and the reciprocal samples. These differences do not change with eigenvector order. But S2 changes when the order of eigenvectors in one of the samples is reversed (see the formulas) because this would be considered a change in matrix orientation. If the reversal in the eigenvectors' importance were complete, so that there were no changes in overall shape, S3 would remain unaffected (see the second row of the first figure).

Conclusions

The S-statistics procedure provides a simple and continuously varying overall measure of differentiation that is distribution-free and interpretable in terms of changes in matrix orientation and shape. In addition, it makes it easy to study the contribution of the different eigenvectors to the statistics' values, which can provide further details on the nature of the differentiation, as was the case in the *Littorina saxatilis* example.

Methods

Compared procedures

The random skewers (RS) procedure was proposed by Cheverud. When two covariance matrices **A** and **B** are similar, the magnitude and direction of their responses to the same selection vector **s**_{i} will be similar. The correlation between the two response vectors **As**_{i} and **Bs**_{i} is calculated as

r_{i} = (**As**_{i} · **Bs**_{i}) / (||**As**_{i}|| ||**Bs**_{i}||)

and the measure of similarity between matrices as the average correlation for all vectors.
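A minimal numpy sketch of this procedure follows; the number of skewers, the use of unit-length random vectors, and all names are my own choices, and the vector correlation is computed as the cosine between the two responses:

```python
import numpy as np

def random_skewers(A, B, n_vectors=1000, rng=None):
    """Average correlation between the responses of covariance matrices
    A and B to the same random selection vectors."""
    rng = np.random.default_rng(rng)
    p = A.shape[0]
    s = rng.normal(size=(n_vectors, p))
    s /= np.linalg.norm(s, axis=1, keepdims=True)   # unit-length skewers
    ra, rb = s @ A.T, s @ B.T                       # responses A s_i, B s_i
    cors = np.sum(ra * rb, axis=1) / (
        np.linalg.norm(ra, axis=1) * np.linalg.norm(rb, axis=1))
    return cors.mean()
```

Identical (or proportional) matrices give an average correlation of one, since the cosine is insensitive to matrix size.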

In the T method, the differentiation between two matrices is measured from the absolute differences between their corresponding elements,

T = Σ_{i} |M_{i1} − M_{i2}|

where M_{i1} and M_{i2} are such elements in the two matrices, and T% expresses this quantity as a percentage relative to the magnitude of the matrix elements.
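A sketch of this element-wise comparison is below. The sum of absolute differences follows the description above, but the exact denominator used for T% in the original method is not given here, so the normalization below (the average summed magnitude of the two matrices' elements) is only an illustrative assumption:

```python
import numpy as np

def t_statistic(A, B):
    """Sum of absolute differences between corresponding matrix elements."""
    return np.sum(np.abs(A - B))

def t_percent(A, B):
    """T expressed as a percentage of the average summed magnitude of the
    two matrices' elements (an assumed normalization, possibly differing
    from the original T%)."""
    denom = 0.5 * (np.sum(np.abs(A)) + np.sum(np.abs(B)))
    return 100 * t_statistic(A, B) / denom
```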

Finally, I used the CPC (Common Principal Components) software of Phillips and Arnold.

Simulations

The simulations compared pairs of samples of individuals differing in the shape and orientation of their covariance matrices for the measured variables. All variables considered in the simulations had two normally-distributed components: one (c) common to all the variables in the sample, and another (s_{i}) specific to each variable.

In each sample, matrix orientation was controlled by the relative contribution (fixed within sample) of the common component c to each variable's value, and matrix shape, by the c and s_{i} variances. Four kinds of sample matrix comparisons were made: between samples taken at random from the same population, between samples from populations whose covariance matrices differed in orientation, whose matrices differed in shape, and whose matrices differed in both orientation and shape. One sample was taken at random from each of the two populations compared in each simulation case, and their covariance matrices and comparison statistics calculated. The observed value of each statistic was compared with the distribution obtained by comparing 50 pairs of resamples of the same size taken from the first sample, and with that obtained by comparing 50 pairs of resamples of the same size taken from the second sample. If the observed value was greater than these 100 resampled values, I concluded that the statistic found differences between the two samples' matrices. This process was repeated 1000 times for each simulation case. I considered three sample sizes (25, 50 and 100) and three numbers of variables (2, 4 and 7, the last being the number of variables in the *Littorina saxatilis* data set).
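The resampling test described above can be sketched as follows (stat is any scalar matrix-comparison statistic, e.g. S1; the function and its name are mine):

```python
import numpy as np

def matrices_differ(D1, D2, stat, n_rep=50, rng=None):
    """Declare a difference when the observed statistic exceeds all the
    values obtained by comparing pairs of bootstrap resamples drawn
    within each of the two original samples (100 comparisons in all)."""
    rng = np.random.default_rng(rng)
    observed = stat(D1, D2)
    resampled = []
    for D in (D1, D2):
        n = D.shape[0]
        for _ in range(n_rep):
            a = D[rng.integers(0, n, size=n)]   # bootstrap resample 1
            b = D[rng.integers(0, n, size=n)]   # bootstrap resample 2
            resampled.append(stat(a, b))
    return observed > max(resampled)
```

Because the resample pairs are drawn within single samples, their statistic values reflect only sampling noise, giving the null distribution against which the observed value is judged.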

I assayed the proposed matrix comparison method on six sets of shell morphology data from the two morphs (Rb and Su) of the marine snail *Littorina saxatilis*.

**Back and opercular view of shells of the lower-shore Su (left) and upper-shore Rb (right) morphs of *Littorina saxatilis* from the Galician coasts.** The seven measures used are shown on the Su shell. Note: Rb snails are on average larger than Sus; shells of similar sizes were chosen to ease comparison in the image.

Appendix

Appendix 1

The v_{ijk} values for eigenvector pair i can be arranged in a two-by-two table:

|  | **A** eigenvector | **B** eigenvector |
|---|---|---|
| Sample A | v_{i11} | v_{i12} |
| Sample B | v_{i21} | v_{i22} |

The contribution of pair i to S2 is the squared difference between the table's diagonals:

S2_{i} = [(v_{i11} + v_{i22}) − (v_{i12} + v_{i21})]²

and its contribution to S3, the squared difference between the table's rows:

S3_{i} = [(v_{i11} + v_{i12}) − (v_{i21} + v_{i22})]²

So that, expanding the squares,

S2_{i} + S3_{i} = 2[(v_{i11} − v_{i21})² + (v_{i22} − v_{i12})²] = S1_{i}

and therefore S1 = S2 + S3.
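The decomposition can also be checked mechanically. The sketch below (names mine) verifies, for arbitrary v values, that twice the sum of squared column differences (S1_{i}) equals the squared diagonal difference (S2_{i}) plus the squared row difference (S3_{i}):

```python
import random

# Check S1_i = S2_i + S3_i for arbitrary v values, with
# S1_i = 2[(v11 - v21)^2 + (v22 - v12)^2].
random.seed(0)
for _ in range(1000):
    v11, v12, v21, v22 = (random.random() for _ in range(4))
    s1 = 2 * ((v11 - v21) ** 2 + (v22 - v12) ** 2)
    s2 = ((v11 + v22) - (v12 + v21)) ** 2
    s3 = ((v11 + v12) - (v21 + v22)) ** 2
    assert abs(s1 - (s2 + s3)) < 1e-9
```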

Appendix 2

The covariance matrices used to draw the plots in Figure 2 corresponded to pairs of variables whose variances and covariances were chosen to produce the required eigenvalue proportions and eigenvector orientations.

The angles between eigenvector sets were determined by the

In this two-variables case, the variance explained by eigenvector i of sample k in the data of sample j can be obtained as the quadratic form

v_{ijk} = **e**_{ik}′ **C**_{j} **e**_{ik}

where **e**_{ik} is eigenvector i obtained in sample k and **C**_{j} is the covariance matrix of sample j.
Appendix 3

Summary of cases

The following tables show the values for the variances of the specific (s_{i}) and common (c) components used to generate the data for each variable in each case.

Four variables

Seven variables

**Detailed list of cases**

List of parameter sets used in every simulated case and the resulting covariance matrices, eigenvectors and eigenvalues. The expected compositions of the eigenvectors were obtained via eigenvector analyses, applying the R function eigen to very large samples (of the order of 10^{6} individuals). Note that for the four- and seven-variable cases it was not possible to obtain a constant set of eigenvector coefficients (beyond the first eigenvector) even for such large samples. In any case, the S statistics recognized their equivalence despite differences in eigenvectors' coefficients (see the S3 and S2 values in the second and third rows, respectively, of the corresponding figure).

In the two variables case we had:

Reference sample:

where sqrt is the square root, s_{1} and s_{2} had distributions N(0, 0.2), and c, N(0, 0.8). The expected covariance matrix was:

1.0 0.8
0.8 1.0

The expected eigenvectors had coefficients (columns):

0.707 −0.707
0.707 0.707

and the expected eigenvalues were: 1.8 and 0.2.

Compared sample with altered orientation:

where s_{1} and s_{2} had distributions N(0, 0.2), and c, N(0, 0.8). The expected covariance matrix was:

The expected eigenvectors had coefficients (columns):

and the expected eigenvalues were: 1.8 and 0.2.

Compared sample with altered shape:

where s_{1} and s_{2} had distributions N(0, 0.5), and c, N(0, 0.5). The expected covariance matrix was:

1.0 0.5
0.5 1.0

The expected eigenvectors had coefficients (columns):

and the expected eigenvalues were: 1.5 and 0.5.

Compared sample with both orientation and shape altered:

where s_{1} and s_{2} had distributions N(0, 0.5), and c, N(0, 0.5). The expected covariance matrix was:

The expected eigenvectors had coefficients (columns):

and the expected eigenvalues were: 1.5 and 0.5.

The expected total variance in all two-variable samples was 2.
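The two-variable reference recipe above can be checked numerically. The sketch below assumes the common component is drawn directly from N(0, 0.8) (drawing a unit-variance factor and scaling by sqrt(0.8) would be equivalent for the covariance structure), and recovers the expected covariance matrix and eigenvalues:

```python
import numpy as np

# Reference two-variable population: each variable is the sum of a common
# component c ~ N(0, 0.8) and a variable-specific component s_i ~ N(0, 0.2),
# so the expected covariance matrix is [[1.0, 0.8], [0.8, 1.0]].
rng = np.random.default_rng(1)
n = 200_000
c = rng.normal(0.0, np.sqrt(0.8), size=n)
s = rng.normal(0.0, np.sqrt(0.2), size=(2, n))
D = (c + s).T                                        # n x 2 data matrix
C = np.cov(D, rowvar=False)
eigenvalues = np.sort(np.linalg.eigvalsh(C))[::-1]   # expected: ~1.8 and ~0.2
```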

In the four variables case, we had:

Reference sample:

where s_{1}, s_{2}, s_{3} and s_{4} had distributions N(0, 0.2), and c, N(0, 0.8). The expected covariance matrix was:

1.0 0.8 0.8 0.8
0.8 1.0 0.8 0.8
0.8 0.8 1.0 0.8
0.8 0.8 0.8 1.0

The expected eigenvectors had coefficients (columns):

and the expected eigenvalues were: 3.4, 0.2, 0.2, 0.2.

Compared sample with altered orientation:

where s_{1}, s_{2}, s_{3} and s_{4} had distributions N(0, 0.2), and c, N(0, 0.8). The expected covariance matrix was:

The expected eigenvectors had coefficients (columns):

and the expected eigenvalues were: 3.4, 0.2, 0.2, 0.2.

Compared sample with altered shape:

where s_{1}, s_{2}, s_{3} and s_{4} had distributions N(0, 0.4), and c, N(0, 0.6). The expected covariance matrix was:

1.0 0.6 0.6 0.6
0.6 1.0 0.6 0.6
0.6 0.6 1.0 0.6
0.6 0.6 0.6 1.0

The expected eigenvectors had coefficients (columns):

and the expected eigenvalues were: 2.8, 0.4, 0.4, 0.4.

Compared sample with both orientation and shape altered:

where s_{1}, s_{2}, s_{3} and s_{4} had distributions N(0, 0.4), and c, N(0, 0.6). The expected covariance matrix was:

The expected eigenvectors had coefficients (columns):

and the expected eigenvalues were: 2.8, 0.4, 0.4, 0.4.

The expected total variance in all four-variable samples was 4.

In the seven variables case, we had:

Reference sample:

where s_{1}, s_{2}, s_{3}, s_{4}, s_{5}, s_{6} and s_{7} had distributions N(0, 0.2), and c, N(0, 0.8). The expected covariance matrix had 1.0 in the diagonal and 0.8 elsewhere.

The expected eigenvectors had coefficients (columns):

and the expected eigenvalues were: 5.8, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2.

Compared sample with altered orientation:

where s_{1}, s_{2}, s_{3}, s_{4}, s_{5}, s_{6} and s_{7} had distributions N(0, 0.2), and c, N(0, 0.8). The expected covariance matrix was:

The expected eigenvectors had coefficients (columns):

and the expected eigenvalues were: 5.8, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2.

Compared sample with altered shape:

where s_{1}, s_{2}, s_{3}, s_{4}, s_{5}, s_{6} and s_{7} had distributions N(0, 0.4), and c, N(0, 0.6). The expected covariance matrix had 1.0 in the diagonal and 0.6 elsewhere.

The expected eigenvectors had coefficients (columns):

and the expected eigenvalues were: 4.6, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4.

Compared sample with both orientation and shape altered:

where s_{1}, s_{2}, s_{3}, s_{4}, s_{5}, s_{6} and s_{7} had distributions N(0, 0.4), and c, N(0, 0.6). The expected covariance matrix was:

The expected eigenvectors had coefficients (columns):

and the expected eigenvalues were: 4.6, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4.

The expected total variance in all seven-variable samples was 7.

Competing interests

The author declares that he has no competing interests.

Acknowledgements

I thank Raquel Cruz, Javier Mosquera and Carlos Vilas for allowing me to use the data from our previous experiments, which had been funded by Spain's DGICYT grant PB94-0649, and Paul Hohenlohe and David Houle for helpful criticism of the manuscript. This work was funded by the Ministerio de Ciencia y Tecnología (CGL2009-13278-C02) and Fondos Feder.