Shrinkage regression-based methods for microarray missing value imputation

Abstract

Background

Missing values commonly occur in microarray data, which usually contain more than 5% missing values with up to 90% of genes affected. Inaccurate missing value estimation reduces the power of downstream microarray data analyses. Many types of methods have been developed to estimate missing values. Among them, the regression-based methods are very popular and have been shown to perform better than the other types of methods on many testing microarray datasets.

Results

To further improve the performance of the regression-based methods, we propose shrinkage regression-based methods. Our methods take advantage of the correlation structure in the microarray data and select similar genes for the target gene by Pearson correlation coefficients. In addition, our methods incorporate the least squares principle, utilize a shrinkage estimation approach to adjust the coefficients of the regression model, and then use the new coefficients to estimate missing values. Simulation results show that the proposed methods provide more accurate missing value estimation on six testing microarray datasets than the existing regression-based methods do.

Conclusions

Imputation of missing values is a very important aspect of microarray data analyses because most of the downstream analyses require a complete dataset. Therefore, exploring accurate and efficient methods for estimating missing values has become an essential issue. Since our proposed shrinkage regression-based methods can provide accurate missing value estimation, they are competitive alternatives to the existing regression-based methods.

Background

Nowadays the microarray technique has become an important and useful tool in functional genomics research. This high-throughput technique allows the characterization of the gene expression of the whole genome by measuring the relative transcript levels of thousands of genes in various experimental conditions or time points [1]. Microarray data analyses have been widely used to investigate various biological processes such as the cell cycle [2-8] and the stress response [9, 10].

Although the microarray technology has been developed for more than a decade, typical microarray data still contain more than 5% missing values with up to 90% of genes affected [11]. Missing values can arise for various reasons, including technological failures, administrative errors, insufficient resolution, image corruption, and dust or scratches on the slide [12]. As many downstream analysis methods (such as gene clustering, disease classification and gene network reconstruction) require complete datasets, missing value estimation becomes an important pre-processing step in microarray data analysis [11-13].

The missing values in a microarray dataset are traditionally handled by repeating the microarray experiments or simply replacing the missing values with zero or the row average (the average expression over the experimental conditions). Because these approaches are either time-consuming or lead to serious estimation errors, more advanced imputation methods are needed to solve the missing value problem. In 2001, Troyanskaya et al. published the first two missing value imputation algorithms, based on the k-nearest neighbors (kNNimpute) and the singular value decomposition (SVDimpute) [12]. Since then, many missing value imputation methods have been proposed, such as Bayesian principal component analysis (BPCA) [14], Gaussian mixture clustering imputation (GMCimpute) [11], conditional ordered list imputation [15], and random-forest-based imputation [16].

Among the existing missing value imputation methods, the regression-based methods are very popular and comprise many algorithms, including least squares imputation (LSimpute) [17], local least squares imputation (LLSimpute) [18], sequential local least squares imputation (SLLSimpute) [19], and iterated local least squares imputation (ILLSimpute) [13]. LSimpute estimates the missing values in the target gene by a weighted average of the k estimates from the k most similar genes, where each estimate is obtained by constructing a single regression model of the target gene on one similar gene. LLSimpute represents the target gene as a linear combination of k similar genes by a multiple regression model and uses the regression coefficients to estimate the missing values. SLLSimpute modifies LLSimpute by estimating the missing values sequentially, starting from the gene containing the fewest missing values, and partially reusing these estimated values. ILLSimpute modifies LLSimpute by defining the similar genes not as a fixed number k of neighbors but as the genes whose distances from the target gene are below a distance threshold, and then runs LLSimpute iteratively.

In this study, we focus on the regression-based methods because these methods have been shown to perform better than the other existing methods on many testing microarray datasets [20, 21]. To further improve the performance of the regression-based methods, we propose shrinkage regression-based methods, which use a shrinkage estimator in place of the least squares estimator for the regression coefficients in the regression model. Shrinkage estimators such as the James-Stein estimator have been shown to dominate the least squares estimator in many statistical models [22, 23]. By adopting our new regression coefficients in the regression-based methods, we show that an improvement in missing value estimation can be achieved on six testing microarray datasets.

Methods

In this study, we propose using the well-known shrinkage estimation approach to improve three existing regression-based methods (LLSimpute [18], SLLSimpute [19], and ILLSimpute [13]) for missing value estimation. We call our proposed methods the shrinkage regression-based methods (see Figure 1). In the following subsections, we first introduce the shrinkage estimation approach and then describe the proposed shrinkage LLSimpute, shrinkage SLLSimpute, and shrinkage ILLSimpute.

Figure 1. The shrinkage regression-based methods.

Shrinkage estimation approach

One of the shrinkage estimators, the James-Stein estimator, for the normal distribution is introduced here. Suppose that $Y_1, Y_2, \ldots, Y_k$ are independent normal random variables with a common known variance but unknown and different means. Let $Y_i \sim N(\theta_i, \sigma^2)$ and $Y = (Y_1, \ldots, Y_k)$. Then we have $Y \sim N(\theta, \sigma^2 I)$, where $\theta = (\theta_1, \ldots, \theta_k)$ and $I$ is the $k \times k$ identity matrix. Let $d(Y) = (d_1(Y), \ldots, d_k(Y))$ be an estimator of $\theta$. Under the squared error loss function

$$L\big(\theta, d(Y)\big) = \sum_{i=1}^{k} \big(\theta_i - d_i(Y)\big)^2 = \|\theta - d(Y)\|^2 \qquad (1)$$

we are interested in finding estimators of $\theta$ such that the mean squared error $E_Y[L(\theta, d(Y))]$ is minimized. An intuitive estimator of $\theta$ is $Y$ itself (i.e. $\hat{\theta}_i = Y_i$, $i = 1, \ldots, k$). However, Stein [22] showed that when $k \ge 3$, there exist other estimators with smaller mean squared error than the intuitive estimator $Y$. For $k \ge 3$, under the squared error loss, the intuitive estimator $Y$ is dominated by the estimator

$$\hat{\theta}^{JS} = \left(1 - \frac{k-2}{S_Y^2}\right) Y \qquad (2)$$

where $S_Y^2 = \sum_{i=1}^{k} Y_i^2$ [23]. The estimator in (2) is called the James-Stein estimator in the literature [23]. With the form in (2), the James-Stein estimator of $\theta_i$ is

$$\hat{\theta}_i^{JS} = \left(1 - \frac{k-2}{S_Y^2}\right) Y_i \qquad (3)$$

It is worth noting that the estimator of $\theta_i$ in (3) depends not only on the random variable $Y_i$ but also, through the term $S_Y^2$, on the other variables $Y_1, \ldots, Y_{i-1}, Y_{i+1}, \ldots, Y_k$. On the contrary, the intuitive estimator $\hat{\theta}_i = Y_i$ uses only $Y_i$ to estimate $\theta_i$. It has been shown that estimators using the other variables' information provide more accurate estimation of $\theta$ than the intuitive estimator does [22]. In fact, besides the estimator in (3), the estimators of the form

$$\hat{\theta}_i^{JS} = \left(1 - \frac{c}{S_Y^2}\right) Y_i \qquad (4)$$

all have uniformly smaller mean squared error than the intuitive estimator $Y_i$ for $k \ge 3$ and $0 < c < 2(k-2)$. Among all the estimators of the form in (4), the estimator in (3) has the minimum mean squared error. The shrinkage estimation approach has also been shown to perform well in interval estimation [24, 25]. Based on the James-Stein estimator in (3), we developed our shrinkage regression-based imputation methods.
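
To make the formulas concrete, here is a minimal numerical sketch of the James-Stein estimate in equation (2); it is our illustration (Python/NumPy, with the unit-variance case $\sigma^2 = 1$ assumed), not code from the original study.

```python
import numpy as np

def james_stein(y):
    """James-Stein estimate of the mean vector theta from one observation
    y ~ N(theta, I), following equation (2) with sigma^2 = 1 (assumed).
    Dominates the intuitive estimator y itself only when k >= 3."""
    y = np.asarray(y, dtype=float)
    k = y.size
    s_y2 = np.sum(y ** 2)            # S_Y^2 = sum_i Y_i^2, as in (2)
    return (1.0 - (k - 2) / s_y2) * y

# Toy usage: the estimate shrinks every coordinate of y toward zero.
rng = np.random.default_rng(0)
theta = np.array([2.0, -1.0, 0.5, 3.0])
y = theta + rng.standard_normal(theta.size)
print(james_stein(y))                # a slightly shrunken version of y
```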

Notations

In a typical microarray data matrix, the rows are the genes under investigation and the columns are the experimental conditions or time points. The microarray data matrix is obtained by performing a series of experiments on the same set of genes. We use $G \in \mathbb{R}^{m \times n}$ to represent a microarray data matrix with $m$ genes and $n$ experiments, and assume $m \gg n$, which is true for microarray data. In the matrix $G$, a row $g_i^T \in \mathbb{R}^{1 \times n}$ represents the expressions of the $i$-th gene in the $n$ experiments:

$$G = \begin{pmatrix} g_1^T \\ \vdots \\ g_m^T \end{pmatrix} \in \mathbb{R}^{m \times n} \qquad (5)$$

where $g_i^T$ denotes the transpose of the column vector $g_i$. If there is a missing value in the $l$-th position of the $i$-th gene, we denote it by $\alpha$, i.e. $G_{i,l} = g_{il} = \alpha$.

Shrinkage local least squares imputation (Shrinkage LLSimpute)

In the LLSimpute method [18], a target gene with missing values is represented as a linear combination of $k$ similar genes. Rather than using all genes in the dataset, only the $k$ genes with the highest similarity to the target gene are used. The procedure of selecting the $k$ similar genes is as follows. Suppose that the target gene is the first gene and has a missing value $\alpha$ in the first position, i.e. $\alpha = g_{11}$ in the matrix $G \in \mathbb{R}^{m \times n}$. The Pearson correlation coefficient is used to find the $k$ similar genes. These $k$ similar genes are called the $k$-nearest neighbor genes; they have the $k$ largest absolute values of the Pearson correlation coefficients with the target gene. The Pearson correlation coefficient $r_{1j}$ between the target gene and the $j$-th gene is defined as

$$r_{1j} = \frac{1}{n-2} \sum_{t=2}^{n} \left(\frac{g_{1t} - \bar{g}_1}{\sigma_1}\right) \left(\frac{g_{jt} - \bar{g}_j}{\sigma_j}\right) \qquad (6)$$

where $\bar{g}_j$ and $\sigma_j$ denote the average and the sample standard deviation of the vector $(g_{j2}, \ldots, g_{jn})$. When computing the correlation coefficients, $g_{j1}$ is not used because it corresponds to the position of the missing value in the target gene. Based on the selected $k$-nearest neighbor genes, a matrix $A \in \mathbb{R}^{k \times (n-1)}$ and two vectors $b \in \mathbb{R}^{k \times 1}$ and $w \in \mathbb{R}^{(n-1) \times 1}$ can be formed as follows:

$$\begin{pmatrix} g_1^T \\ g_{s_1}^T \\ \vdots \\ g_{s_k}^T \end{pmatrix} = \begin{pmatrix} \alpha & w^T \\ b & A \end{pmatrix} = \begin{pmatrix} \alpha & w_1 & w_2 & \cdots & w_{n-1} \\ b_1 & A_{1,1} & A_{1,2} & \cdots & A_{1,n-1} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ b_k & A_{k,1} & A_{k,2} & \cdots & A_{k,n-1} \end{pmatrix}$$

where $\alpha$ is the missing value in the target gene $g_1$ and $g_{s_1}, \ldots, g_{s_k}$ are the $k$-nearest neighbor genes of the target gene $g_1$. Each row of the matrix $A$ consists of the last $n-1$ elements of one $k$-nearest neighbor gene $g_{s_i}$, $1 \le i \le k$. The elements of the vector $b$ are the first elements of these $k$ neighbor genes, and the elements of the vector $w$ are the last $n-1$ elements of the target gene $g_1$. With the matrix $A$ and the vectors $b$ and $w$, the least squares problem in LLSimpute is formulated as

$$\min_x \|A^T x - w\|^2 \qquad (7)$$

Solving the above problem, the least squares regression coefficients $\hat{x} \in \mathbb{R}^{k \times 1}$ are obtained as

$$\hat{x} \equiv (\hat{x}_1, \hat{x}_2, \ldots, \hat{x}_k)^T = (A A^T)^{-1} A w \qquad (8)$$

In LLSimpute, the missing value is then estimated by

$$\alpha = b^T \hat{x} = \hat{x}_1 b_1 + \hat{x}_2 b_2 + \cdots + \hat{x}_k b_k \qquad (9)$$

In this study, we improve the performance of LLSimpute by adjusting the regression coefficients in (8). Our shrinkage LLSimpute combines the LLSimpute method with the shrinkage estimator to impute the missing values: it replaces the regression coefficient estimator $\hat{x}$ in (8) by a shrinkage estimator and then uses the new estimator to estimate the missing value $\alpha$ in (9). However, we found that applying the existing shrinkage estimator in (3) did not always improve the performance of LLSimpute. Therefore, we tested different forms of shrinkage coefficient estimators and arrived at a feasible coefficient estimator that improves the LLSimpute method. We propose using the shrinkage regression coefficients

$$\hat{x}_i^{JS} = \left(1 - \frac{(k-2)\,\sigma^2}{\tilde{n}\, S^2}\right) \hat{x}_i \qquad (10)$$

to replace the conventional coefficients in (8), where $\sigma^2$ is the variance of the coefficients $(\hat{x}_1, \hat{x}_2, \ldots, \hat{x}_k)$, $S$ is the norm of the coefficients (i.e. $S^2 = \sum_{i=1}^{k} \hat{x}_i^2$), $k$ is the number of rows of the matrix $A$, and $\tilde{n}$ is the number of columns of the matrix $A$, which equals $n-1$ in this case. Finally, the missing value is estimated as

$$\alpha = b^T \hat{x}^{JS} = \hat{x}_1^{JS} b_1 + \hat{x}_2^{JS} b_2 + \cdots + \hat{x}_k^{JS} b_k \qquad (11)$$

where $\hat{x}^{JS} = (\hat{x}_1^{JS}, \ldots, \hat{x}_k^{JS})^T$.
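
As an illustration of the whole procedure, the sketch below estimates one missing entry of a target gene following equations (6) through (11). It is a minimal rendering under our own assumptions: the function name `shrinkage_lls_entry` is hypothetical, NumPy is our choice of tool, and `np.corrcoef` stands in for the Pearson coefficient of equation (6); any other missing entries of the target gene are simply dropped from the observed positions.

```python
import numpy as np

def shrinkage_lls_entry(target, pool, l, k):
    """Estimate the missing entry target[l] of a target gene from the
    complete genes in `pool` (2-D array, no missing values), following
    equations (6)-(11). Other missing entries of the target (np.nan)
    are excluded from the observed positions."""
    n = target.size
    obs = (np.arange(n) != l) & ~np.isnan(target)   # observed positions
    w = target[obs]                                  # vector w
    # Equation (6): Pearson correlation over the observed positions.
    r = np.array([np.corrcoef(w, g[obs])[0, 1] for g in pool])
    idx = np.argsort(-np.abs(r))[:k]                 # k-nearest neighbor genes
    A = pool[idx][:, obs]                            # k x (n-1) matrix A
    b = pool[idx][:, l]                              # column of the missing entry
    # Equation (8): least squares solution of min ||A^T x - w||^2.
    x_hat, *_ = np.linalg.lstsq(A.T, w, rcond=None)
    # Equation (10): shrink the regression coefficients.
    n_tilde = A.shape[1]                             # number of columns of A
    sigma2 = np.var(x_hat)                           # variance of the coefficients
    s2 = np.sum(x_hat ** 2)                          # squared norm S^2
    x_js = (1.0 - (k - 2) * sigma2 / (n_tilde * s2)) * x_hat
    # Equation (11): estimate the missing value.
    return float(b @ x_js)
```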

Shrinkage sequential local least squares imputation (Shrinkage SLLSimpute)

LLSimpute does not use the information of genes with missing values, since the existence of missing values hinders the use of the other observed values of those genes. The SLLSimpute method instead estimates the missing values sequentially, starting from the gene containing the fewest missing values, and partially reuses these estimated values. The details of SLLSimpute [19] are as follows. First, the microarray matrix $G \in \mathbb{R}^{m \times n}$ is divided into two submatrices: a complete matrix $G_1 \in \mathbb{R}^{m_1 \times n}$ consisting of genes without missing values and an incomplete matrix $G_2 \in \mathbb{R}^{(m - m_1) \times n}$ consisting of genes with missing values. In the incomplete matrix $G_2$, the genes are sorted by their missing rates, so that the first gene has the smallest missing rate and the last gene the largest. The missing rate is calculated by

$$r_i = \frac{c_i}{n} \qquad (12)$$

where $c_i$ is the number of missing values in the $i$-th gene. The imputation is executed sequentially from the first gene of $G_2$: the gene with the smallest missing rate is selected as the target gene first. LLSimpute is then applied to estimate the missing values in the target gene, finding the $k$-nearest neighbor genes in the complete matrix $G_1$ and using the formula in (9). After all the missing values in the target gene are filled, the gene is moved to $G_1$. The second gene of $G_2$ is then selected as the target gene and the same process is repeated. By moving the genes whose missing values have been imputed to the complete matrix, previous target genes with imputed values can be utilized in the missing value estimation of subsequent target genes. However, too many missing values in a gene result in large estimation errors, and reusing a gene with too many imputed values reduces the imputation performance. Therefore, only the genes with missing rates less than a threshold $r_0$ are reused, where $r_0$ is set to the average missing rate of all genes containing missing values, i.e.,

$$r_0 = \frac{\sum_{i=1}^{m - m_1} c_i}{(m - m_1) \times n} \qquad (13)$$

By a similar argument as for the shrinkage LLSimpute, we apply the shrinkage estimator to SLLSimpute. The shrinkage SLLSimpute adjusts the coefficients of the regression model by the formula in (10) and uses the formula in (11) to estimate the missing values, as sketched below.
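
A condensed sketch of this sequential schedule, reusing the hypothetical `shrinkage_lls_entry` from the previous sketch and encoding missing values as `np.nan` (our convention, not the authors'), might look as follows.

```python
import numpy as np

def shrinkage_slls(G, k):
    """Sequential shrinkage imputation of a matrix G whose missing entries
    are np.nan. Genes are imputed in order of increasing missing rate
    (equation (12)); an imputed gene rejoins the complete pool only if its
    missing rate is below the threshold r0 of equation (13)."""
    G = G.copy()
    m, n = G.shape
    counts = np.isnan(G).sum(axis=1)           # c_i, missing values per gene
    rates = counts / n                         # equation (12)
    incomplete = np.where(counts > 0)[0]
    r0 = counts[incomplete].sum() / (incomplete.size * n)   # equation (13)
    pool_idx = list(np.where(counts == 0)[0])  # genes of G1 (complete)
    for i in sorted(incomplete, key=lambda j: rates[j]):
        pool = G[pool_idx]
        for l in np.where(np.isnan(G[i]))[0]:
            G[i, l] = shrinkage_lls_entry(G[i], pool, l, k)
        if rates[i] < r0:                      # reuse only low-missing genes
            pool_idx.append(i)
    return G
```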

Shrinkage iterated local least squares imputation (Shrinkage ILLSimpute)

The LLSimpute and SLLSimpute methods select $k$-nearest neighbor genes for a target gene, where $k$ is a fixed number. The ILLSimpute method [13], in contrast, does not fix the number of similar genes selected. Instead, it defines the similar genes as the genes whose distances to the target gene are less than a distance threshold $\delta$. The rationale for using a distance threshold rather than a fixed number of similar genes is that some of the $k$-nearest neighbor genes may already be far away from, and thus not very similar to, the target gene.

The procedure of ILLSimpute is as follows. In the first iteration, the missing values of each target gene are filled with the row average. Then the distance threshold $\delta$ is used to select the similar genes of each target gene, and the LLSimpute method is used to estimate the missing values of each target gene. In each later iteration, ILLSimpute uses the imputed results from the previous iteration to reselect the similar genes of each target gene (using the same distance threshold) and applies LLSimpute to re-estimate the missing values.

By a similar argument as for the shrinkage LLSimpute, we apply the shrinkage estimator to ILLSimpute. The shrinkage ILLSimpute adjusts the coefficients of the regression model by the formula in (10) and uses the formula in (11) to estimate the missing values; a sketch of the iterated loop follows.
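
The schematic sketch below is our illustration, not the authors' exact implementation: the Euclidean distance, the fixed iteration count, and the reuse of `shrinkage_lls_entry` are all assumptions.

```python
import numpy as np

def shrinkage_ills(G, delta, iters=3):
    """Iterated shrinkage imputation: fill missing entries (np.nan) with row
    averages, then repeatedly reselect each target gene's similar genes
    within Euclidean distance delta and re-estimate its missing entries."""
    G = G.copy()
    mask = np.isnan(G)                         # remember the missing positions
    fill = np.nanmean(G, axis=1, keepdims=True)
    G = np.where(mask, fill, G)                # first iteration: row averages
    targets = np.where(mask.any(axis=1))[0]
    for _ in range(iters):
        for i in targets:
            d = np.linalg.norm(G - G[i], axis=1)    # distances to target gene
            neighbors = np.where((d < delta) & (np.arange(len(G)) != i))[0]
            if neighbors.size < 3:             # too few similar genes; skip
                continue
            for l in np.where(mask[i])[0]:
                G[i, l] = shrinkage_lls_entry(G[i], G[neighbors], l,
                                              k=neighbors.size)
    return G
```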

Results and Discussion

We conducted several experiments to compare the performances of our shrinkage regression-based methods and the original regression-based methods under different scenarios. In the first subsection, we introduce the benchmark datasets. In the second subsection, we describe how we measure the performance of the various imputation methods. In the following three subsections, we report the comparison results for different numbers of similar genes used, different missing rates, and different noise levels. Finally, we further compare the performances of our shrinkage regression-based methods and three existing non-regression-based methods.

Datasets

Considering the effects of dataset selection and the type of microarray experiment on the performance of an imputation method, six representative datasets (three non-time series and three time series) were used in our simulations: Ogawa's data from a study of phosphate accumulation and polyphosphate metabolism (denoted Ogawa, non-time series) [26], Bohen's follicular lymphoma data (denoted BohenSH, non-time series) [27], data from a lymphoma study (denoted Lymphoma, non-time series) [28], data from Brauer's experiments on the physiological response to glucose limitation in batch and steady-state yeast cultures (denoted Brauer05, time series) [29], and Shapira's oxidative stress data (denoted Shapira04A and Shapira04B, time series) [30]. We divided Shapira's data into two datasets because the authors used one kind of oxidative chemical in the Shapira04A experiment and another kind in the Shapira04B experiment. These six microarray datasets served as benchmarks in the numerical experiments comparing our shrinkage regression-based methods with the original regression-based methods. Each dataset was processed by deleting the genes with missing values to generate a complete data matrix; the details of these datasets are listed in Table 1.

Table 1. Benchmark datasets.

The performance measure

A common criterion for comparing the performances of different imputation methods is the normalized root mean squared error (NRMSE) [11-13, 17-19]. From a microarray dataset we obtain an original data matrix $M_0$ with $m$ genes and $n$ experiments, and then construct a complete matrix $M_1 \in \mathbb{R}^{m_1 \times n}$ ($m_1 \le m$) by deleting the genes with missing values. After the complete data matrix $M_1$ is established, we randomly select a specific percentage of its elements and regard them as missing values. We then estimate the missing values using the various imputation methods and compare their performances using the NRMSE:

$$\mathrm{NRMSE} = \frac{\sqrt{\mathrm{mean}\big[(y_{\mathrm{guess}} - y_{\mathrm{ans}})^2\big]}}{\mathrm{std}(y_{\mathrm{ans}})} \qquad (14)$$

where $y_{\mathrm{guess}}$ and $y_{\mathrm{ans}}$ are vectors whose elements are, respectively, the values estimated by an imputation method and the known answers for all missing entries.
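
In code, the criterion is a one-liner; the following is our rendering of equation (14).

```python
import numpy as np

def nrmse(y_guess, y_ans):
    """Normalized root mean squared error, equation (14): the RMSE of the
    imputed values divided by the standard deviation of the true values."""
    y_guess = np.asarray(y_guess, dtype=float)
    y_ans = np.asarray(y_ans, dtype=float)
    return float(np.sqrt(np.mean((y_guess - y_ans) ** 2)) / np.std(y_ans))
```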

Performance comparison for different k values

A parameter $k$, the number of similar genes used, has to be determined before using two of the regression-based methods (LLSimpute and SLLSimpute). Since the performance of both algorithms is known to be affected by the $k$ value used and different microarray datasets may have different optimal $k$ values [18, 19], we tested several possible $k$ values (50, 100, 150, 200, 250 and 300) on the six benchmark datasets. Table 2 lists the optimal $k$ values for LLSimpute and SLLSimpute on each of the six benchmark datasets. The third regression-based method, ILLSimpute, does not have the parameter $k$ and was therefore not considered in this numerical experiment.

Table 2. The optimal k value for each benchmark dataset.

For each of the six benchmark datasets, we also compared the performances of the proposed shrinkage regression-based methods and the original regression-based methods for several possible $k$ values (50, 100, 150, 200, 250 and 300). In these numerical experiments, the missing rate for each benchmark dataset was set to 5%. That is, for each dataset, we randomly removed 5% of the entries of the complete matrix to generate a matrix with missing values and then estimated the missing values using the shrinkage and the original regression-based methods. The same procedure was run for five independent rounds, and the average NRMSE over these five simulations was used to compare the imputation methods; a sketch of this protocol follows.
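
The masking-and-scoring protocol can be sketched as follows; the function name `average_nrmse` and the uniform random masking are our illustrative assumptions, with `impute` standing for any of the imputation methods above and `nrmse` taken from the earlier sketch.

```python
import numpy as np

def average_nrmse(impute, M1, rate=0.05, rounds=5, seed=0):
    """Hide `rate` of the entries of the complete matrix M1 at random,
    impute them with `impute` (a function mapping a matrix with np.nan
    entries to a completed matrix), and average the NRMSE over `rounds`
    independent repetitions, as in the experiments reported here."""
    rng = np.random.default_rng(seed)
    scores = []
    for _ in range(rounds):
        mask = rng.random(M1.shape) < rate     # ~rate of entries become missing
        M = np.where(mask, np.nan, M1)
        M_hat = impute(M)
        scores.append(nrmse(M_hat[mask], M1[mask]))
    return float(np.mean(scores))

# e.g. average_nrmse(lambda M: shrinkage_slls(M, k=150), M1)
```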

As shown in Figure 2, the proposed shrinkage LLSimpute outperforms LLSimpute for all k values and all benchmark datasets. Similarly, the proposed shrinkage SLLSimpute outperforms SLLSimpute for all k values and all benchmark datasets (see Figure 3). The simulation results suggest that utilizing a shrinkage estimation approach to adjust the coefficients of the regression model can improve the performances of the original regression-based methods.

Figure 2. Performance comparison between shrinkage LLS (shr_LLS) and LLS for different k values.

Figure 3. Performance comparison between shrinkage SLLS (shr_SLLS) and SLLS for different k values.

Performance comparison for different missing rates

In real applications, different microarray data may have different missing rates to be imputed, so it is informative to know how an imputation method performs across missing rates. Therefore, we compared the performances of the shrinkage regression-based methods and the original regression-based methods on microarray data with different missing rates (1%, 5%, 10%, 15% and 20%). That is, for each of the six benchmark datasets, we randomly removed x% (x = 1, 5, 10, 15 or 20) of the entries of the complete matrix to generate a matrix with missing values, and then estimated the missing values using the shrinkage and the original regression-based methods. The same procedure was run for five independent rounds, and the average NRMSE over these five simulations was used to compare the imputation methods. The optimal k value used for each benchmark dataset is listed in Table 2.

Figure 4 shows that the proposed shrinkage LLSimpute outperforms LLSimpute for all missing rates and all benchmark datasets. Figure 5 shows that the proposed shrinkage SLLSimpute outperforms SLLSimpute for all missing rates and all benchmark datasets. Figure 6 shows that the proposed shrinkage ILLSimpute outperforms ILLSimpute for all missing rates and all benchmark datasets. The simulation results suggest that utilizing a shrinkage estimation approach to adjust the coefficients of the regression model can improve the performances of the original regression-based methods.

Figure 4. Performance comparison between shrinkage LLS (shr_LLS) and LLS for different missing rates.

Figure 5. Performance comparison between shrinkage SLLS (shr_SLLS) and SLLS for different missing rates.

Figure 6. Performance comparison between shrinkage ILLS (shr_ILLS) and ILLS for different missing rates.

Performance comparison for different noise levels

In real applications, different microarray data may contain different levels of noise, so it is informative to know how an imputation method performs at different noise levels inherent in the data. Therefore, we compared the performances of the shrinkage regression-based methods and the original regression-based methods on microarray data with different noise levels. For each of the six benchmark datasets, we added Gaussian noise at different levels, with standard deviations ranging from 0 to 0.25 in steps of 0.05. The missing rate for each benchmark dataset was set to 5% and the optimal k value for each dataset was taken from Table 2. That is, for each dataset (after adding Gaussian noise), we randomly removed 5% of the entries of the complete matrix to generate a matrix with missing values, and then estimated the missing values using the shrinkage and the original regression-based methods. The same procedure was run for five independent rounds, and the average NRMSE over these five simulations was used to compare the imputation methods; the perturbation step is sketched below.
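
The noise experiment only changes the input matrix before masking; a minimal sketch of the perturbation step under our assumptions:

```python
import numpy as np

def with_noise(M1, sd, seed=0):
    """Return a copy of the complete matrix M1 perturbed by additive
    Gaussian noise with standard deviation sd, as in the noise experiments."""
    rng = np.random.default_rng(seed)
    return M1 + sd * rng.standard_normal(M1.shape)

# Sweep the noise levels and rerun the 5%-missing evaluation at each level:
# for sd in np.arange(0.0, 0.30, 0.05):
#     average_nrmse(lambda M: shrinkage_slls(M, k=150), with_noise(M1, sd))
```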

Figure 7 shows that the proposed shrinkage LLSimpute outperforms LLSimpute for all noise levels and all benchmark datasets. Figure 8 shows that the proposed shrinkage SLLSimpute outperforms SLLSimpute for all noise levels and all benchmark datasets. Figure 9 shows that the proposed shrinkage ILLSimpute outperforms ILLSimpute for all noise levels and all benchmark datasets. The simulation results suggest that utilizing a shrinkage estimation approach to adjust the coefficients of the regression model can improve the performances of the original regression-based methods.

Figure 7. Performance comparison between shrinkage LLS (shr_LLS) and LLS for different noise levels.

Figure 8. Performance comparison between shrinkage SLLS (shr_SLLS) and SLLS for different noise levels.

Figure 9. Performance comparison between shrinkage ILLS (shr_ILLS) and ILLS for different noise levels.

Performance comparison with three existing non-regression-based methods

We have shown that our shrinkage regression-based methods perform better than the existing regression-based methods. Still, it would be interesting to know whether our shrinkage regression-based methods also provide more accurate missing value imputation than the existing non-regression-based methods do. Therefore, we compared the performances of our shrinkage regression-based methods and three existing non-regression-based methods (kNNimpute [12], SVDimpute [12], and BPCA [14]) on the six benchmark microarray datasets. As shown in Figures 10, 11 and 12, the proposed shrinkage regression-based methods outperform these three non-regression-based methods for almost all missing rates and all benchmark datasets. Taken together, our shrinkage regression-based methods are competitive alternatives to the existing methods for microarray missing value imputation.

Figure 10. Performance comparison between shrinkage LLS (shr_LLS) and three non-regression-based methods for different missing rates.

Figure 11. Performance comparison between shrinkage SLLS (shr_SLLS) and three non-regression-based methods for different missing rates.

Figure 12. Performance comparison between shrinkage ILLS (shr_ILLS) and three non-regression-based methods for different missing rates.

Conclusions

Imputation of missing values is a very important aspect of microarray data analyses because most downstream analyses require a complete dataset. Therefore, exploring accurate and efficient methods for estimating missing values has become an essential issue. In this study, regression-based methods combined with a shrinkage estimation approach are proposed to estimate missing values in microarray data. Our methods take advantage of the correlation structure existing in the microarray data and select similar genes for the target gene by Pearson correlation coefficients. In addition, our methods incorporate the least squares principle, utilize a shrinkage estimation approach to adjust the coefficients of the regression model, and apply the new coefficients to estimate missing values. Simulation results show that the proposed shrinkage regression-based methods provide more accurate missing value estimation for various types of datasets than the original regression-based methods do. Since our proposed approach can be applied to modify any kind of regression-based method and provides accurate missing value estimation, our methods are competitive alternatives to the existing regression-based methods.

References

1. Schena M, Shalon D, Davis R, Brown P: Quantitative monitoring of gene expression patterns with a complementary DNA microarray. Science. 1995, 270: 467-470. 10.1126/science.270.5235.467.

2. Wu W, Li W, Chen B: Computational reconstruction of transcriptional regulatory modules of the yeast cell cycle. BMC Bioinformatics. 2006, 7: 421. 10.1186/1471-2105-7-421.

3. Rowicka M, Kudlicki A, Tu B, Otwinowski Z: High-resolution timing of cell cycle-regulated gene expression. Proc Natl Acad Sci USA. 2007, 104: 16892-16897. 10.1073/pnas.0706022104.

4. Wu W, Li W, Chen B: Identifying regulatory targets of cell cycle transcription factors using gene expression and ChIP-chip data. BMC Bioinformatics. 2007, 8: 188. 10.1186/1471-2105-8-188.

5. Futschik M, Herzel H: Are we overestimating the number of cell-cycling genes? The impact of background models on time-series analysis. Bioinformatics. 2008, 24: 1063-1069. 10.1093/bioinformatics/btn072.

6. Wu W, Li W: Systematic identification of yeast cell cycle transcription factors using multiple data sources. BMC Bioinformatics. 2008, 9: 522. 10.1186/1471-2105-9-522.

7. Siegal-Gaskins D, Ash J, Crosson S: Model-based deconvolution of cell cycle time-series data reveals gene expression details at high resolution. PLoS Comput Biol. 2009, 5: e1000460. 10.1371/journal.pcbi.1000460.

8. Wang H, Wang Y, Wu W: Yeast cell cycle transcription factors identification by variable selection criteria. Gene. 2011, 485: 172-176. 10.1016/j.gene.2011.06.001.

9. Gasch A, Spellman P, Kao C, Carmel-Harel O, Eisen M, Storz G, Botstein D, Brown P: Genomic expression programs in the response of yeast cells to environmental changes. Mol Biol Cell. 2000, 11: 4241-4257. 10.1091/mbc.11.12.4241.

10. Wu W, Li W: Identifying gene regulatory modules of heat shock response in yeast. BMC Genomics. 2008, 9: 439. 10.1186/1471-2164-9-439.

11. Ouyang M, Welsh W, Georgopoulos P: Gaussian mixture clustering and imputation of microarray data. Bioinformatics. 2004, 20: 917-923. 10.1093/bioinformatics/bth007.

12. Troyanskaya O, Cantor M, Sherlock G, Brown P, Hastie T, Tibshirani R, Botstein D, Altman R: Missing value estimation methods for DNA microarrays. Bioinformatics. 2001, 17: 520-525. 10.1093/bioinformatics/17.6.520.

13. Cai Z, Heydari M, Lin G: Iterated local least squares microarray missing value imputation. J Bioinform Comput Biol. 2006, 4: 935-957. 10.1142/S0219720006002302.

14. Oba S, Sato M, Takemasa I, Monden M, Matsubara K, Ishii S: A Bayesian missing value estimation method for gene expression profile data. Bioinformatics. 2003, 19: 2088-2096. 10.1093/bioinformatics/btg287.

15. Yu T, Peng H, Sun W: Incorporating nonlinear relationships in microarray missing value imputation. IEEE/ACM Trans Comput Biol Bioinform. 2011, 8: 723-731.

16. Stekhoven D, Bühlmann P: MissForest-non-parametric missing value imputation for mixed-type data. Bioinformatics. 2012, 28: 112-118. 10.1093/bioinformatics/btr597.

17. Bø T, Dysvik B, Jonassen I: LSimpute: accurate estimation of missing values in microarray data with least squares methods. Nucleic Acids Res. 2004, 32: e34. 10.1093/nar/gnh026.

18. Kim H, Golub G, Park H: Missing value estimation for DNA microarray gene expression data: local least squares imputation. Bioinformatics. 2005, 21: 187-198. 10.1093/bioinformatics/bth499.

19. Zhang X, Song X, Wang H, Zhang H: Sequential local least squares imputation estimating missing value of microarray data. Comput Biol Med. 2008, 38: 1112-1120. 10.1016/j.compbiomed.2008.08.006.

20. Celton M, Malpertuy A, Lelandais G, de Brevern A: Comparative analysis of missing value imputation methods to improve clustering and interpretation of microarray experiments. BMC Genomics. 2010, 11: 15. 10.1186/1471-2164-11-15.

21. Brock G, Shaffer J, Blakesley R, Lotz M, Tseng G: Which missing value imputation method to use in expression profiles: a comparative study and two selection schemes. BMC Bioinformatics. 2008, 9: 12. 10.1186/1471-2105-9-12.

22. Stein C: Inadmissibility of the usual estimator for the mean of a multivariate normal distribution. Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability. 1956, 1: 197-206.

23. James W, Stein C: Estimation with quadratic loss. Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability. 1961, 1: 361-379.

24. Wang H: Brown's paradox in the estimated confidence approach. The Annals of Statistics. 1999, 27: 610-626. 10.1214/aos/1018031210.

25. Wang H: Improved confidence estimators for the multivariate normal confidence set. Statistica Sinica. 2000, 10: 659-664.

26. Ogawa N, DeRisi J, Brown P: New components of a system for phosphate accumulation and polyphosphate metabolism in Saccharomyces cerevisiae revealed by genomic expression analysis. Molecular Biology of the Cell. 2000, 11: 4309-4321. 10.1091/mbc.11.12.4309.

27. Bohen S, Troyanskaya O, Alter O, Warnke R, Botstein D, Brown P, Levy R: Variation in gene expression patterns in follicular lymphoma and the response to rituximab. Proc Natl Acad Sci USA. 2003, 100: 1926-1930. 10.1073/pnas.0437875100.

28. Alizadeh A, Eisen M, Davis R, Ma C, Lossos I, Rosenwald A, Boldrick J, Sabet H, Tran T, Yu X, Powell J, Yang L, Marti G, Moore T, Hudson J, Lu L, Lewis D, Tibshirani R, Sherlock G, Chan W, Greiner T, Weisenburger D, Armitage J, Warnke R, Levy R, Wilson W, Grever M, Byrd J, Botstein D, Brown P, Staudt L: Distinct types of diffuse large B-cell lymphoma identified by gene expression profiling. Nature. 2000, 403: 503-511. 10.1038/35000501.

29. Brauer M, Saldanha A, Dolinski K, Botstein D: Homeostatic adjustment and metabolic remodeling in glucose-limited yeast cultures. Mol Biol Cell. 2005, 16: 2503-2517. 10.1091/mbc.E04-11-0968.

30. Shapira M, Segal E, Botstein D: Disruption of yeast forkhead-associated cell cycle transcription by oxidative stress. Mol Biol Cell. 2004, 15: 5659-5669. 10.1091/mbc.E04-04-0340.

Acknowledgements

This study was supported by the National Cheng Kung University and Taiwan National Science Council NSC 99-2628-B-006-015-MY3 and NSC 101-2118-M-009-006-MY2.

Declarations

The full funding for the publication fee came from Taiwan National Science Council and College of Electrical Engineering and Computer Science, National Cheng Kung University.

This article has been published as part of BMC Systems Biology Volume 7 Supplement 6, 2013: Selected articles from the 24th International Conference on Genome Informatics (GIW2013). The full contents of the supplement are available online at http://www.biomedcentral.com/bmcsystbiol/supplements/7/S6.

Author information

Corresponding author

Correspondence to Wei-Sheng Wu.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

WSW conceived the research topic and provided essential guidance. HW developed the algorithm. CCC did all the simulations. HW, CCC, YCW, and WSW wrote the manuscript. All authors have read and approved the final manuscript.

Rights and permissions

Open Access This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver (https://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

Cite this article

Wang, H., Chiu, CC., Wu, YC. et al. Shrinkage regression-based methods for microarray missing value imputation. BMC Syst Biol 7 (Suppl 6), S11 (2013). https://doi.org/10.1186/1752-0509-7-S6-S11
