Abstract
Background
The calcium-imaging technique allows us to record movies of brain activity in the antennal lobe of the fruitfly Drosophila melanogaster, a brain compartment dedicated to processing information about odors. Signal processing, e.g. with source separation techniques, can be slow on the large movie datasets.
Method
We have developed an approximate Principal Component Analysis (PCA) for fast dimensionality reduction. The method samples relevant pixels from the movies, such that PCA can be performed on a smaller matrix. Utilising a priori knowledge about the nature of the data, we minimise the risk of missing important pixels.
Results
Our method allows for fast approximate computation of PCA with adaptive resolution and running time. Utilising a priori knowledge about the data enables us to concentrate more biological signals in a small pixel sample than a general sampling method based on vector norms.
Conclusions
Fast dimensionality reduction with approximate PCA removes a computational bottleneck and leads to running time improvements for subsequent algorithms. Once in PCA space, we can efficiently perform source separation, e.g to detect biological signals in the movies or to remove artifacts.
Introduction
The fruitfly Drosophila melanogaster is a model organism for research on olfaction, the sense of smell. Calcium imaging, i.e. microscopy with fluorescent calcium-sensitive dyes as reporters of brain activity, allows us to answer questions on how information about odors is processed in the fruitfly's brain [1].
The datasets we consider are in vivo calcium-imaging movies recorded from the antennal lobe (AL). Here, information from the odor receptors on the antennae is integrated, processed and then relayed to higher-order brain regions. In the AL, each odor smelled by the fly is represented as a spatiotemporal pattern of brain activity (see schematic in Figure 1). The coding units of the AL are the so-called glomeruli that exhibit differential responses to odorants. The combined response of all the ca. 50 glomeruli in a single fruitfly AL forms an odor-specific pattern [2].
Figure 1. Odor coding. An odor molecule is encoded as a pattern of glomerulus responses in the ALs of the fruitfly brain. The green and yellow glomeruli remain inactive (not shown), whereas the blue and magenta glomeruli respond to the odor presentations (black bars mark two pulses of 1s each) with differential strength. Left and right ALs, which receive input from the left and right antennae, are mirror-symmetric and contain the same types of glomeruli.
A major objective of biological research in this field is to map the Drosophila olfactome, i.e. odor representation and similarity as sensed by Drosophila. Odor response patterns recorded so far are available in the DoOR database [3].
In terms of data analysis, our goal is to extract glomerular signals and patterns from calcium-imaging movies. Ideally, we would like to do this in a fast and memory-efficient way, keeping in mind that the size of the movies is going to increase further in the future due to the advent of high-resolution and three-dimensional 2-photon microscopy [4].
Here, we process imaging movies from the Drosophila AL with Independent Component Analysis (ICA) [5]. Source separation with ICA has proven helpful in the analysis of brain imaging data [6-8], and can be employed to "find" glomeruli in calcium-imaging movies, i.e. to separate their signals from noise and artifacts [7].
ICA algorithms are typically performed after decorrelation and dimensionality reduction with a Principal Component Analysis (PCA) [9,10], delegating the main computational load to the PCA preprocessing step [6,7,11,12]. While PCA is generally feasible from a computational point of view, the standard approach to PCA by Singular Value Decomposition (SVD) [13] of the data matrix scales quadratically with the number of columns (or rows), and can be slow on the large movie files.
We thus propose an approximate solution to PCA that, while being substantially faster than exact PCA, keeps biological detail intact. Apart from our specific ICA application, fast dimensionality reduction is also of general utility for computations on imaging movies.
How do we achieve a high-quality approximation to PCA? The key observation is that, after processing, we usually deem only a small fraction of the pixels relevant, while many others do not report a biological signal. Following a feature selection paradigm [14], we could, at some computational expense, optimise a small set of most relevant pixels as input for PCA.
Instead, we propose to quickly select not few but many pixels (out of many more), and we do so by investing a small amount of time into computing pixel sampling probabilities that allow us to pick relevant pixels preferentially. Evaluation of a pixel's relevance relies on a priori knowledge about the nature of the biological sources: signals from neighbouring pixels in the regions of interest, the glomeruli, are correlated.
We proceed as follows: In the methods section, we first introduce our notation and summarise prior work. We then consider a general framework for approximate SVD and modify it for our approximate PCA that is explicitly designed for the imaging movies. In the results section, we provide a technical evaluation with respect to speed and accuracy of the results, as well as practical examples for the fast analysis of Drosophila imaging data with approximate PCA followed by ICA.
Methods
Preliminaries
Notation
PCA [9,10] provides the following low-rank approximation to a data matrix A based on orthogonal basis vectors, the "lines of closest fit to systems of points in space" [9], so-called principal components:

A ≈ A_k = T S,  with T ∈ ℝ^{m×k}, S ∈ ℝ^{k×n}   (1)
For our purposes, A is the calcium-imaging movie with m time-points and n pixels (images flattened into vectors). Consequently, the rank-k approximation A_k consists of a matrix T with a temporal interpretation (distribution of loadings, time-series) and a matrix S with a spatial interpretation (principal component images). Regarding notation, we refer to the jth column of A as A_{·j}, and denote the element at the intersection of the ith row and the jth column as A_{i,j}. When we refer to column selection from matrix A, we select pixels, or, more precisely, pixel time-series vectors of length m.
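As an illustration of this layout, image frames can be flattened into the movie matrix A as follows (a minimal Python sketch with made-up frame values; the actual implementation was in Java):

```python
def movie_matrix(frames):
    """Flatten m image frames (each height x width) into an m x n matrix:
    one row per time point, one column per pixel (row-major order)."""
    return [[v for row in frame for v in row] for frame in frames]

# two hypothetical 2x2 frames -> 2 time points x 4 pixels
frames = [[[0.1, 0.2], [0.3, 0.4]],
          [[0.5, 0.6], [0.7, 0.8]]]
A = movie_matrix(frames)
```

Each column of A is then one pixel time-series of length m, as used throughout.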
Computing PCA and features for PCA
PCA can be computed by a singular value decomposition (SVD): A = UΣV^T [13]. SVD is a minimiser of ‖A − A_k‖_Fr, i.e. the error incurred by a rank-k approximation A_k to matrix A with respect to the Frobenius norm. When the data is centered, which we can assume as our algorithms require one pass over the matrix prior to PCA, the top-k right singular vectors in V correspond to the top-k principal components [15]. The usual approach is to compute the SVD with full dimensionality in V, which is then truncated to the top-k singular vectors with the highest singular values. In contrast, NIPALS-style PCA [16,17] (see Algorithm 3) computes only the top-k components. Another approach to PCA is the eigenvalue decomposition of the covariance matrix [10].
Regarding feature selection for PCA, Jolliffe [18,19] provided evidence that many variables can be discarded without significantly affecting the results of PCA. Several methods based on clustering or multiple correlation were tested in these studies aimed at selecting few nonredundant features in a PCA context. Similar, more recent work was performed by Mao [20] and Li [21].
A paper on feature selection for PCA by Boutsidis et al. [14] guarantees an error bound for the approximate solution to PCA based on a subset of the columns of matrix A. While conceptually related to the randomised framework discussed below, running time is in fact slightly above that of PCA, the objective being not speedup but identifying representative columns for data analysis.
Source separation with ICA
On imaging movies, source separation with ICA can be cast into the same notation as PCA (1). Where PCA relies on orthogonal, i.e. uncorrelated, basis vectors, the goal of ICA [5] is to find statistically independent basis vectors, i.e. independent time-series in T, or independent images in S. ICA falls into the category of "blind source separation" (BSS). It tries to unmix signal sources, such as glomerular signals, artifacts and noise, mostly blind with respect to the nature of both signals and mixing process, based solely on a statistical model. The model assumption behind ICA is that the sources are (approximately) independent and (for all but one source) non-Gaussian.
ICA can detect the glomerular sources in calcium-imaging movies [7] and therefore serves as an application example: it is useful to compute ICA on such movies, and we can solve the unmixing problem much more efficiently if we first perform fast dimensionality reduction with approximate PCA. We employ one of the most common ICA algorithms, the fixed-point iteration fastICA [5,22].
Monte Carlo approximate SVD
Here, we rely on a Monte Carlo-type approximate SVD proposed by Drineas et al. [23,24]. Randomly selecting c columns from A into C ∈ ℝ^{m×c}, we can achieve an approximation to the sample covariance of A with an error of ‖AA^T − CC^T‖_Fr.
In [24], the following relationship between the optimal rank-k approximation A_k from the SVD of A and the approximation H_k := SVD(C) was shown:

‖A − H_k H_k^T A‖²_Fr ≤ ‖A − A_k‖²_Fr + 2√k · ‖AA^T − CC^T‖_Fr   (2)

The error of the approximate SVD of A thus depends on the optimal rank-k approximation A_k from exact SVD plus the difference in covariance structure due to column sampling. The factor 2√k reveals that the error bound is tighter for small k, implicating that, if larger k are desired, we should attempt to reduce the error ‖AA^T − CC^T‖_Fr, e.g. by selecting more columns.
The main result of [24] was that, given appropriate sampling of c columns from A, the expected additional error with respect to the Frobenius norm of A is bounded by ε:

E[‖A − H_k H_k^T A‖²_Fr] ≤ ‖A − A_k‖²_Fr + ε ‖A‖²_Fr   (3)
This result holds for column sampling probabilities p_j that are not uniform, but depend on the euclidean column norms ‖A_{·j}‖:

p_j = ‖A_{·j}‖² / ‖A‖²_Fr   (4)
In particular, the upper bound from (3) holds if we sample with replacement c ≥ 4k/ε² columns. This means that the error ε can be made arbitrarily small by sampling a sufficient number of columns c, and we can compute in advance the c required to achieve the desired ε.
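These two quantities can be sketched as follows (pure Python for illustration; the function names are ours, not from the original Java implementation):

```python
import math

def norm_probabilities(A):
    """Column-norm sampling probabilities, Eq. (4):
    p_j = |A_:j|^2 / |A|_Fr^2, for A given as a list of m rows."""
    m, n = len(A), len(A[0])
    col_sq = [sum(A[i][j] ** 2 for i in range(m)) for j in range(n)]
    total = sum(col_sq)  # squared Frobenius norm of A
    return [s / total for s in col_sq]

def required_samples(k, eps):
    """Columns needed so that the expected additional error is at most
    eps (with replacement): c = 4k / eps^2."""
    return math.ceil(4 * k / eps ** 2)
```

E.g. `required_samples(20, 0.05)` reproduces the 32,000 pixels discussed below.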
Following the Monte Carlo framework, we can thus sample c pixel time-series into C and achieve an upper bound (3) on the error of the approximate SVD via the approximation of the time × time covariance AA^T.
The upper bound is, however, not very tight. If we wish to achieve ε = 0.05 for k = 20, we would need to sample with replacement 32,000 pixels, which still leads to considerable speedups on large datasets (≈150,000 pixels), but is impractical for the medium-size datasets (≈20,000 pixels).
The main contribution of the norm-based Monte Carlo approach is thus to show that the correctness of SVD/PCA does not collapse under pixel sampling, but that the error is rather asymptotic and can be decreased further and further by sampling more pixels.
Covariation sampling
Although this pixel sampling may work well in practice, the theoretical bound is not very tight. Can we then more explicitly select biologically relevant pixels so as to ensure our confidence in the fast approximation?
The intuition is that, if our pixel sample covers all glomeruli, the "biological error" will be small. We thus motivate a biological criterion, covariation between neighbouring pixel time-series, as an importance measure. The assumption we rely on is about the spatial aspect of the data, namely that a glomerulus in an imaging movie covers several adjacent pixels that all report the same signal (plus noise). This a priori knowledge is also exploited in the "manual" analysis of imaging movies by visualising the amount of neighbourhood correlation for each pixel (see for example Figure 2 in [25]).
Figure 2. Probability distributions. a) Image from the Drosophila2D movie, distribution of norm probabilities and distribution of covariation probabilities. A 5% pixel sample (Algorithm 1 for norms, Algorithm 2 for covariation) is superimposed in black. b) Drosophila3D. For visualisation, we discretised the continuous z-axis into 9 layers.
Our approach is to compute a small part of the pixels × pixels covariance matrix exactly, and then to sample those pixels that contribute much to the norm of this matrix. We are interested in the local part of the sample covariance matrix, which we denote as L = f(A^T A), with f(X_{i,j}) defined as follows:

f(X_{i,j}) = X_{i,j} if pixels i and j are neighbours in the image, and 0 otherwise   (5)
The column norms of L ∈ ℝ^{n×n} correspond to the amount of covariation with neighbouring pixels, i.e. if the column is from within one of the spatially local sources (glomeruli), the norm is high. Consequently, if we apply the column norm sampling according to (4) not to the movie matrix A but to the derived matrix L, we will more explicitly select columns with biological signal content.
Departing from the error bound scheme regarding the norm, we can now estimate in advance the biological signal content by computing for how much of ‖L‖_Fr the pixel sample accounts. In the results section we will see that small pixel samples can explain a large part of ‖L‖_Fr.
In practice, it is more convenient not to construct the entire matrix L, but to directly compute the column norms of L on the movie A:

‖L_{·j}‖² = Σ_r (A_{·r}^T A_{·j})²   (6)

Here, the index r enumerates the 8 immediate neighbour pixels of the pixel in column j, i.e. the pixels (x, y − 1), (x, y + 1), etc. in x/y coordinates of the (unflattened) images.
Sampling from L with norm probabilities (4) amounts to sampling from A with covariation probabilities p^cov,

p^cov_j = ‖L_{·j}‖² / ‖L‖²_Fr   (7)

which can be computed on the fly while computing the column norms.
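The on-the-fly computation of the covariation probabilities can be sketched as follows (pure Python for illustration; `width` and `height` describe the unflattened image, pixels are assumed flattened row-major, and no boundary pixels are padded):

```python
def covariation_probabilities(A, width, height):
    """Covariation sampling probabilities: for each pixel j, sum the
    squared covariations with its (up to) 8 image neighbours and
    normalise, a sketch of the quantities behind Eqs. (6) and (7)."""
    m, n = len(A), len(A[0])

    def col(j):
        return [A[i][j] for i in range(m)]

    def neighbours(j):
        x, y = j % width, j // width
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx == dy == 0:
                    continue
                nx, ny = x + dx, y + dy
                if 0 <= nx < width and 0 <= ny < height:
                    yield ny * width + nx

    energy = []
    for j in range(n):
        cj = col(j)
        # squared column norm of L: squared dot products with neighbours
        e = sum(sum(a * b for a, b in zip(cj, col(r))) ** 2
                for r in neighbours(j))
        energy.append(e)
    total = sum(energy)
    return [e / total for e in energy]
```

Columns inside a correlated neighbourhood accumulate large energy and hence high sampling probability.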
Fast PCA for calciumimaging movies
We first propose two alternative methods for pixel sampling (Algorithms 1 and 2), which we then utilise to perform PCA on a small matrix (Algorithm 3). Sampling allows for an adaptive resolution without a sharp cutoff by a threshold.
Pixel sampling
In Algorithm 1, we sample exactly c pixel time-series with replacement from the movie matrix A and scale them as in the Monte Carlo framework [24]. We employ norm-based probabilities (4), such that we can make use of the theoretical upper bounds.
Algorithm 1 Pixel sampling with replacement, input: movie matrix A ∈ ℝ^{m×n}, number of pixels c, norm probabilities p^norm = (p_0,..., p_{n−1}), output: sample matrix C ∈ ℝ^{m×c}
for all t ∈ [1, c] do
pick column j from A with probability p_j
C[, t] := A[, j] / √(c·p_j)
end for
The above sampling strategy is necessary for the Monte Carlo scheme to work; for the covariation probabilities (7), however, the most parsimonious approach is simply sampling without replacement: Algorithm 2.
Algorithm 2 Pixel sampling without replacement, input: movie matrix A ∈ ℝ^{m×n}, number of pixels c, covariation probabilities p^cov = (p_0,..., p_{n−1}), output: sample matrix C ∈ ℝ^{m×c}
R: = {}
for all t ∈ [1, c] do
sample j ∉ R from A with probability p_{j}
C[, t] := A[, j]; R := R ∪ {j}
end for
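Both sampling strategies can be sketched as follows (a Python illustration of Algorithms 1 and 2, not the original implementation; in Algorithm 2 the probabilities are renormalised implicitly by removing already-chosen columns):

```python
import math
import random

def sample_with_replacement(A, c, p):
    """Algorithm 1: draw c columns i.i.d. with probabilities p and rescale
    each drawn column by 1/sqrt(c * p_j), as in the Monte Carlo framework."""
    m = len(A)
    cols = random.choices(range(len(p)), weights=p, k=c)
    return [[A[i][j] / math.sqrt(c * p[j]) for j in cols] for i in range(m)]

def sample_without_replacement(A, c, p):
    """Algorithm 2: draw c distinct columns, no rescaling.
    Returns (C, chosen column indices)."""
    m = len(A)
    remaining = list(range(len(p)))
    weights = [p[j] for j in remaining]
    chosen = []
    for _ in range(c):
        idx = random.choices(range(len(remaining)), weights=weights, k=1)[0]
        chosen.append(remaining.pop(idx))
        weights.pop(idx)
    return [[A[i][j] for j in chosen] for i in range(m)], chosen
```

Either function yields the m × c sample matrix C used as input for Algorithm 3.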
Note that we can generally assume absence of movement, i.e. pixel identity remains the same throughout the measurement. The AL is a fixed anatomical structure, and small-scale movement that leads to shaky recordings can be eliminated by standard image stabilisation (as e.g. in [1]).
Computing PCA
We employ NIPALS-style PCA [16,17] for computing the top-k components. The complexity of NIPALS-style PCA is O(i·k·m·n) for k principal components and i iterations until convergence of each component. Typically, k and i are small numbers (i ≈ 5-10). In contrast, SVD with a space and time complexity of O(m·n·min(m, n)) is generally not efficient. In particular, the number of time-points m can still be the smaller dimension after sampling.
Note that Drineas et al. [24] assume that SVD is used for H_k := SVD(C); however, the proofs for the error bounds do not depend on the algorithm structure but rather on the eigenvalue spectrum.
We have summarised the approach in Algorithm 3. The first step consists of running Algorithm 1 or 2 in order to obtain the m × c sample matrix C. To achieve the PCA decomposition (1), we then sequentially compute the top-k components in T and obtain full-size images in S by S := T^+ A, where T^+ is the generalised Moore-Penrose pseudoinverse of T.
The approximate PCA requires only O(i·k·m·c) for the time-series in T, and O(i·k·m·c + k·m·n) for both time-series and images. On top of that, we need O(m·n) for precomputing the probabilities. In practice, we also profit from the redistribution of the computational load, which allows for greater speedups: unlike the sequential PCA computation, the final matrix multiplication is highly parallelisable.
Algorithm 3 Approximate PCA, input: A ∈ ℝ^{m×n}, number of samples c, number of components k, output: T ∈ ℝ^{m×k}, S ∈ ℝ^{k×n}
select c columns from A into C with Algorithm 1 or Algorithm 2
//compute NIPALS-style PCA on matrix C
for all l ∈ [1, k] do
initialise t with a column of C
while not converged do
p := C^T t / (t^T t); p := p / ‖p‖; t := C p
end while
C := C − t p^T; T[, l] := t
end for
//compute fullsize images
S := T^+ A
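A minimal sketch of Algorithm 3's core (pure Python for illustration; a fixed iteration count replaces the convergence test, and since NIPALS score vectors are orthogonal, the pseudoinverse T^+ reduces to row-wise projections):

```python
import math

def nipals_pca(C, k, iters=20):
    """NIPALS-style PCA on the (centered) sample matrix C (list of m rows):
    returns the top-k score vectors, i.e. the columns of T."""
    m, c = len(C), len(C[0])
    X = [row[:] for row in C]
    T = []
    for _ in range(k):
        # start from the column of X with the largest norm
        j0 = max(range(c), key=lambda j: sum(X[i][j] ** 2 for i in range(m)))
        t = [X[i][j0] for i in range(m)]
        p = [0.0] * c
        for _ in range(iters):
            tt = sum(v * v for v in t)
            p = [sum(X[i][j] * t[i] for i in range(m)) / tt for j in range(c)]
            pn = math.sqrt(sum(v * v for v in p))
            p = [v / pn for v in p]
            t = [sum(X[i][j] * p[j] for j in range(c)) for i in range(m)]
        for i in range(m):  # deflate: remove the found component from X
            for j in range(c):
                X[i][j] -= t[i] * p[j]
        T.append(t)
    return T

def spatial_images(T, A):
    """S := T^+ A; with orthogonal score vectors this reduces to
    S[l] = (t_l^T A) / (t_l^T t_l), computed on the full movie A."""
    m, n = len(A), len(A[0])
    return [[sum(t[i] * A[i][j] for i in range(m)) / sum(v * v for v in t)
             for j in range(n)] for t in T]
```

`spatial_images` is applied to the full movie A, while `nipals_pca` only ever touches the small sample matrix C.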
Results
Datasets and pixel selection strategies
Our test datasets are "Drosophila2D" (Figure 2a: left and right Drosophila AL; light microscopy, staining with GCaMP dye, 19,200 pixels × 1,440 time-points), and "Drosophila3D" (Figure 2b: single Drosophila AL; three-dimensional 2-photon microscopy, GCaMP, 147,456 pixels × 608 time-points).
Both datasets are concatenations of multiple measurements. In the middle of each measurement (except for controls), an odor was presented to the fly. A series of different odors was employed which enables us to tell apart glomeruli based on their differential response properties.
In Figure 2, we also give visual examples for the probability distributions. In contrast to the norm probabilities, the covariation probabilities are concentrated on a few regions, which can be sampled very densely even with small c.
Empirical evaluation
As evaluation criteria, we rely on the Frobenius norm error ‖A − TS‖_Fr = ‖A − A_k‖_Fr as a standard measure for low-rank approximation, and on the biologically motivated covariation energy, the amount of local covariation accounted for by the pixel sample (unique column indices in R):

E_cov = Σ_{j∈R} ‖L_{·j}‖² / ‖L‖²_Fr   (8)
Results are presented in Figure 3. As baselines, we give results from exact NIPALSstyle PCA and approximate PCA with uniform pixel sampling. All algorithms were implemented in Java, using the Parallel Colt library [26].
Figure 3. Performance. Means and standard deviations for time and error measures (10 repetitions) for exact and approximate PCA. The number of pixels c is given in % of the total number n. Running times (Intel Core Duo T6400, 2 GHz) are for the entire Algorithm 3, including the computation of the probabilities. All measurements are for rank-k = 30 approximations, as we found that 20-30 components are typically sufficient to detect all glomeruli. Lower principal components only explain more of the noise (see also Figure 4).
Already small samples lead to a low additional error with respect to the Frobenius norm. E.g., on the Drosophila2D dataset, exact PCA achieves a Frobenius norm error of 73,754.64 for a rank-k = 30 approximation, where ‖A‖_Fr = 117,668.99. In comparison, covariation sampling with Algorithm 2 achieves a Frobenius norm error of 75,187.93 based on only 1% of the pixels.
Both norm error and covariation energy reach about the level of accuracy of exact PCA already with sample sizes between 10% and 15% of the pixels, whereas time consumption grows only slowly (Figure 3). Generally, sampling based on norms or covariation is superior to uniform pixel sampling, and covariation sampling with Algorithm 2 accumulates more covariation energy in smaller samples than the other strategies. Error bars for Algorithms 1 and 2 are small, indicating that results are reproducible despite the randomised techniques.
How many pixels do we need to sample? While our empirical measurements suggest that between 10% and 15% of the pixels are sufficient, even smaller samples of about 1% of the pixels give good results in practice, the error being already much lower than the expected upper bounds. As a "safe" strategy, we suggest sampling pixels with Algorithm 2 until the cumulated covariation energy exceeds a threshold, e.g. 0.95 (straight line in Figure 3).
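This "safe" stopping strategy can be sketched as follows (a Python illustration; since the probabilities p^cov sum to one, the cumulated covariation energy of a sample is simply the sum of the sampled probabilities):

```python
import random

def sample_until_energy(p_cov, threshold=0.95):
    """Algorithm-2 style sampling without replacement until the sample
    accounts for the requested share of the covariation energy."""
    remaining = list(range(len(p_cov)))
    weights = [p_cov[j] for j in remaining]
    chosen, energy = [], 0.0
    while energy < threshold and remaining:
        idx = random.choices(range(len(remaining)), weights=weights, k=1)[0]
        j = remaining.pop(idx)
        weights.pop(idx)
        chosen.append(j)
        energy += p_cov[j]
    return chosen
```

The returned index set then replaces the fixed sample size c as input to the PCA step.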
To give a visual impression of how the technical quality measures translate into image quality, we compare principal component images in S that were computed with exact and approximate PCA (Figure 4). Both span approximately the same space; however, due to the different input matrices, there is not necessarily a one-to-one correspondence.
Figure 4. Example for PCA. Top principal components computed by exact PCA and approximate PCA with covariation probabilities (1% pixel sample).
Application example: ICA
Recall that both PCA and ICA result in a decomposition of the form A_k = T^PCA S^PCA, or A_k = T^ICA S^ICA, respectively. As input for ICA, we can either take the principal component images in S^PCA or the principal component time-series in matrix T^PCA.
In Figure 5a, we give an example for temporal ICA on the principal component time-series (Drosophila2D data, covariation probabilities, c = 0.15n). Here, the highest (black) coefficients in the image indicate the positions of a glomerulus pair, the same type of glomerulus in the left and right AL. Both AL halves are mirror-symmetric and each contains a full set of glomeruli. Judging from their positions, the two glomeruli are very likely a pair, i.e. both receive input from the same types of receptor neurons and therefore have equal (plus noise) response properties.
Figure 5. Example for temporal ICA. Performing ICA on the principal component time-series matrix T^PCA. a) above: spatial component that contains a glomerulus pair (black pixels); below: image from the raw movie, indicating the shapes of the left and right ALs. b) The corresponding time-series component on a 200-time-point interval including a double odor presentation (marked by the bars). c) For comparison, we show the mean time-series for the glomerulus pair on the raw movie A.
Taking into account the corresponding time-series component (Figure 5b), we can assume that we have indeed found glomeruli and not some other pair of objects: we see a double response to the double odor stimulation, where a response is a sharp increase in fluorescence, followed by a decline below baseline.
For comparison, we extracted (by thresholding) the positions of all black pixels in the spatial component and computed their mean time-series on the raw movie A, i.e. the raw signal of the glomerulus pair: Figure 5c. Here, we can see that the movie consists of a concatenation of measurements that each exhibit a strong trend: the dye bleaches due to the measurement light, an artifact which is absent in the ICA component.
As another example, we have applied spatial ICA, working on S^PCA as input. This can be helpful to find glomerulus positions in order to construct a glomerulus map [7]. In Figure 6, we show all independent component images from S^ICA that "contain" glomeruli. Note that the sign is arbitrary in an ICA decomposition [5], i.e. glomeruli can appear black on white or vice versa. Based on approximate PCA, we can detect all but one component (marked with a star) already with a 1% pixel sample, whereas with a 15% sample we can also recover the missing component.
Figure 6. Example for spatial ICA. Performing ICA on the principal component images matrix S^PCA. We show all spatial independent components that capture glomeruli. Top: ICA was run after exact PCA; bottom: ICA was run after approximate PCA with a 1% or 15% pixel sample, respectively (covariation probabilities). Closest matches are placed in the same column.
Here, we have regarded the spatial and temporal aspects of the data separately, leading e.g. to spatial components that are not entirely local (Figure 5a). For future applications, it might be helpful to consider a spatiotemporal criterion [11,12] that balances between spatial and temporal independence of the sources.
Conclusions
We have shown that source separation can, in principle, detect glomerulus positions and remove artifacts in Drosophila imaging movies. Many source separation algorithms exist that optimise different criteria and it remains subject to further research which method is most robust for a particular data type.
Here, we have concentrated on finding a fast approximate solution to PCA that reduces data size prior to source separation. Delegating the main computational load to the preprocessing with fast PCA allows any source separation algorithm to scale up easily with the growing data sizes in imaging. A further promising area of application is, with due modifications, online analysis such that denoised movies are available already during the course of the experiment.
Our strategy for fast approximate PCA relies on simple precomputations that can be performed in a single pass over the data. Based on a priori knowledge and the information gathered in this step, we can sample pixels from the movie in order to perform exact PCA much more efficiently on a smaller matrix. Sampling with norm probabilities gives rise to an upper bound for the expected error. Sampling with covariation probabilities, we can ensure a highquality approximation by requiring a high amount of covariation energy in the sample.
Our empirical results show that small pixel samples reliably lead to approximations with low error. It remains an interesting question for further research whether it is possible to translate these results into theory, e.g. by proving tight error bounds that incorporate the a priori knowledge.
Competing interests
The authors declare that they have no competing interests.
Authors' contributions
MS performed research and wrote the manuscript. CGG supervised research and edited the manuscript. All authors read and approved the final manuscript.
Acknowledgements
We are grateful to Daniel Münch, Ana F. Silbering and Werner Göbel for recording imaging data, and to Henning Proske for technical assistance with data format and preprocessing. We thank Fritjof Helmchen and Werner Göbel for sharing their expertise on the 2-photon imaging technique and for providing equipment. Financial support by BMBF, DFG and the University of Konstanz is acknowledged. MS was supported by the DFG Research Training Group GK-1042 and a LGFG scholarship issued by the state of Baden-Württemberg.
This article has been published as part of BMC Medical Informatics and Decision Making Volume 12 Supplement 1, 2012: Proceedings of the ACM Fifth International Workshop on Data and Text Mining in Biomedical Informatics (DTMBio 2011). The full contents of the supplement are available online at http://www.biomedcentral.com/bmcmedinformdecismak/supplements/12/S1.
References

1. Silbering AF, Okada R, Ito K, Galizia CG: Olfactory information processing in the Drosophila antennal lobe: anything goes? J Neurosci 2008, 28(49):13075-13087.
2. Vosshall LB: Olfaction in Drosophila. Curr Opin Neurobiol 2000, 10(4):498-503.
3. Galizia CG, Münch D, Strauch M, Nissler A, Ma S: Integrating heterogeneous odor response data into a common response model: a DoOR to the complete olfactome. Chem Senses 2010, 35(7):551-563.
4. Grewe BF, Langer D, Kasper H, Kampa BM, Helmchen F: High-speed in vivo calcium imaging reveals neuronal network activity with near-millisecond precision. Nat Methods 2010, 7(5):399-405.
5. Hyvärinen A, Oja E: Independent component analysis: algorithms and applications. Neural Netw 2000, 13(4-5):411-430.
6. Reidl J, Starke J, Omer D, Grinvald A, Spors H: Independent component analysis of high-resolution imaging data identifies distinct functional domains. Neuroimage 2007, 34:94-108.
7. Strauch M, Galizia CG: Registration to a neuroanatomical reference atlas - identifying glomeruli in optical recordings of the honeybee brain. In Proceedings of the German Conference on Bioinformatics (GCB), September 9-12, 2008, Dresden, Germany, Volume 136 of Lecture Notes in Informatics. Edited by Beyer A, Schroeder M. Bonn: GI; 2008:85-95.
8. Mukamel EA, Nimmerjahn A, Schnitzer MJ: Automated analysis of cellular signals from large-scale calcium imaging data. Neuron 2009, 63(6):747-760.
9. Pearson K: On lines and planes of closest fit to systems of points in space. Philosophical Magazine Series 6 1901, 2(11):559-572.
10. Jolliffe IT: Principal Component Analysis. Berlin, Heidelberg: Springer; 2002.
11. Stone JV, Porrill J, Porter NR, Wilkinson ID: Spatiotemporal independent component analysis of event-related fMRI data using skewed probability density functions. Neuroimage 2002, 15(2):407-421.
12. Theis FJ, Gruber P, Keck IR, Lang EW: Functional MRI analysis by a novel spatiotemporal ICA algorithm. In Proceedings of the 15th International Conference on Artificial Neural Networks: Biological Inspirations (ICANN), September 11-15, 2005, Warsaw, Poland, Volume 3696 of Lecture Notes in Computer Science. Edited by Duch W, Kacprzyk J, Oja E, Zadrozny S. Berlin, Heidelberg: Springer; 2005:677-682.
13. Golub GH, Van Loan CF: Matrix Computations. 3rd edition. Baltimore: Johns Hopkins University Press; 1996.
14. Boutsidis C, Mahoney MW, Drineas P: Unsupervised feature selection for principal components analysis. In Proceedings of the 14th International Conference on Knowledge Discovery and Data Mining (ACM SIGKDD), August 24-27, 2008, Las Vegas, USA. Edited by Li Y, Liu B, Sarawagi S. New York: ACM; 2008:61-69.
15. Wall ME, Rechtsteiner A, Rocha LM: Singular value decomposition and principal component analysis. In A Practical Approach to Microarray Data Analysis. Edited by Berrar D, Dubitzky W, Granzow M. Norwell: Kluwer; 2003:91-109.
16. Wold H: Estimation of principal components and related models by iterative least squares. In Multivariate Analysis. Edited by Krishnaiah P. New York: Academic Press; 1966:391-420.
17. Miyashita Y, Itozawa T, Katsumi H, Sasaki SI: Comments on the NIPALS algorithm. J Chemom 1990, 4:97-100.
18. Jolliffe IT: Discarding variables in a principal component analysis. I: Artificial data. J R Stat Soc Ser C Appl 1972, 21(2):160-173.
19. Jolliffe IT: Discarding variables in a principal component analysis. II: Real data. J R Stat Soc Ser C Appl 1973, 22:21-31.
20. Mao KZ: Identifying critical variables of principal components for unsupervised feature selection. IEEE Trans Syst Man Cybern B Cybern 2005, 35(2):339-344.
21. Li Y, Lu BL: Feature selection for identifying critical variables of principal components based on K-nearest neighbor rule. In Proceedings of the 9th International Conference on Advances in Visual Information Systems (VISUAL), June 28-29, 2007, Shanghai, China, Volume 4781 of Lecture Notes in Computer Science. Edited by Qiu G, Leung C, Xue X, Laurini R. Berlin, Heidelberg: Springer; 2007:193-204.
22. Hyvärinen A: Fast and robust fixed-point algorithms for independent component analysis. IEEE Trans Neural Netw 1999, 10(3):626-634.
23. Drineas P, Kannan R, Mahoney MW: Fast Monte Carlo algorithms for matrices I: Approximating matrix multiplication. SIAM J Comput 2006, 36:132-157.
24. Drineas P, Kannan R, Mahoney MW: Fast Monte Carlo algorithms for matrices II: Computing a low-rank approximation to a matrix. SIAM J Comput 2006, 36:158-183.
25. Fernandez PC, Locatelli FF, Person-Rennell N, Deleo G, Smith BH: Associative conditioning tunes transient dynamics of early olfactory processing. J Neurosci 2009, 29(33):10191-10202.
26. Wendykier P, Nagy JG: Parallel Colt: a high-performance Java library for scientific computing and image processing.