Abstract
Background
Reliable segmentation of cell nuclei from three dimensional (3D) microscopic images is an important task in many biological studies. We present a novel, fully automated method for the segmentation of cell nuclei from 3D microscopic images. It was designed specifically to segment nuclei in images where the nuclei are closely juxtaposed or touching each other. The segmentation approach has three stages: 1) a gradient diffusion procedure, 2) gradient flow tracking and grouping, and 3) local adaptive thresholding.
Results
Both qualitative and quantitative results on synthesized and original 3D images are provided to demonstrate the performance and generality of the proposed method. Both the oversegmentation and undersegmentation percentages of the proposed method are around 5%. The volume overlap, compared to expert manual segmentation, is consistently over 90%.
Conclusion
The proposed algorithm is able to segment closely juxtaposed or touching cell nuclei obtained from 3D microscopy imaging with reasonable accuracy.
Background
Reliable segmentation of cell nuclei from three dimensional (3D) microscopic images is an important task in many biological studies as it is required for any subsequent comparison or classification of the nuclei. For example, zebrafish somitogenesis is governed by a clock that generates oscillations in gene expression within the presomitic mesoderm [1,2]. The subcellular localization of oscillating mRNA in each nucleus, imaged through multichannel microscopy, can be used to identify different phases within the oscillation. To automate the classification of the phase of an individual nucleus, each nucleus within the presomitic mesoderm first needs to be accurately segmented.
In recent years, there has been significant effort towards the development of automated methods for 3D cell or cell nuclei image segmentation [3-9,15,16]. Thresholding, watershed and active surface based methods are among the most commonly used techniques. Unfortunately, thresholding-based methods often have difficulty with images that lack a well-defined, constant contrast between the objects and the background, and consequently with images containing clustered or juxtaposed nuclei. Watershed-based methods are also very popular for segmentation of clustered cell nuclei [3-5,10]. However, these methods often oversegment clustered cell nuclei; to deal with this issue, heuristic rules have been developed for region merging [3-5] as a post-processing step. Segmentation problems have also been targeted with active surface-based methods [8,9,15,16]. However, such algorithms suffer from an inherent dependency on the initial guess: if the initial guess is poor, these methods have difficulty with clustered cell nuclei.
Despite active research and progress in the literature, development of a fully automated and robust computational algorithm for 3D cell nuclei segmentation remains a challenge when dealing with significant inherent variations in nuclei shape and size in image data. Examples include cases where the contrast between nuclei and background is low, where there are differences in the shapes and sizes of nuclei, and where the 3D images are of low quality [3,4,6-8]. Complications also arise when nuclei are juxtaposed or connected to one another, increasing the rate of oversegmentation or undersegmentation.
In this paper, we present a novel automated method that aims to tackle the aforementioned challenges of segmentation of clustered or connected 3D cell nuclei. We approach the segmentation problem by first generating the gradient vector field corresponding to the 3D volume image, and then diffusing the gradient vector field with an elastic deformable transform. After the elastic deformable transform is completed, the noisy gradient vector field is smoothed and the gradient vectors with large magnitude are propagated to the areas with weak gradient vectors. This gradient diffusion procedure results in a gradient flow field, in which the gradient vectors flow smoothly towards or outwards from the centers of the nuclei. Subsequently, a gradient flow tracking procedure is performed from each vector point to find the corresponding center to which the points flow. We group all points that flow to the same center into a region, and refer to this region as the attraction basin of the center. Once the gradient flow tracking is completed, the boundaries of juxtaposed nuclei are formed naturally and hence these juxtaposed nuclei are divided. The final step performs local thresholding in each attraction basin in order to extract the nuclei from their corresponding background. We have evaluated and validated this algorithm and present results attesting to its validity.
Results
In this section, a series of experiments are designed to evaluate and validate the gradient flow tracking method for segmentation of 3D images with juxtaposed nuclei. Both qualitative and quantitative results on synthesized and original 3D images are provided to demonstrate the performance and general applicability of the proposed method.
Validation using synthesized 3D image
We present an example of the results obtained from applying our proposed nuclei segmentation method to a synthesized image. Despite the objects touching each other and the presence of additive noise, the proposed segmentation method has segmented the touching objects perfectly, as shown in Figure 5.
Figure 5. Result on synthesized 3D image segmentation. (a) The cross-sectional views of the binary mask of the synthesized image. (b) The cross-sectional views of the synthesized image with added noise. (c) The cross-sectional views of the overlaid boundaries of segmentation on the synthesized noisy 3D image. (d) The cross-sectional views of the randomly color-coded segmentation result.
We employ volume overlap measurements to quantitatively validate our segmentation results. Since no ground truth is available for real data and it is very time-consuming to manually segment juxtaposed nuclei, we take advantage of synthesized 3D images instead of real data. The volume overlap between the automated results and the ground truth is defined as:

overlap(R_a, R_g) = 2·S(R_a ⋂ R_g) / (S(R_a) + S(R_g))     (1)

where R_a is the automatically extracted region and R_g is the ground truth region. The ⋂ operator takes the intersection of two regions, and S(·) is the volume of a region.
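To make the metric concrete, the following sketch computes the overlap from two voxel masks, assuming the Dice-style normalization 2·S(R_a ⋂ R_g) / (S(R_a) + S(R_g)) and representing each region as a set of voxel coordinates (an illustrative Python fragment with hypothetical names, not the C/C++ code used in our experiments):

```python
def volume_overlap(r_auto, r_truth):
    """Volume overlap between an automatically extracted region and a
    ground-truth region, each given as a set of (x, y, z) voxel coordinates.
    S(.) reduces to the set size on a voxel grid."""
    inter = len(r_auto & r_truth)              # S(Ra intersect Rg)
    return 2.0 * inter / (len(r_auto) + len(r_truth))

# Two toy regions sharing 9 of their voxels.
ra = {(x, 0, 0) for x in range(10)}            # 10 voxels
rg = {(x, 0, 0) for x in range(1, 10)}         # 9 voxels, all shared
print(volume_overlap(ra, rg))                  # 2*9/(10+9), about 0.947
```

A value of 1.0 indicates perfect agreement; the 0.971 reported below corresponds to near-complete overlap.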
The synthesized 3D touching cell nuclei image is generated as follows: 1) Randomly select a voxel as a seed point, and construct a mask of a sphere with a radius of 10 mm centered at that point. 2) Generate 6 masks of spheres, tangent to the central sphere, with their radii ranging from 7 mm to 15 mm; this simulates the variation among the radii of real nuclei. 3) Blur the mask images by convolving with a 3D Gaussian kernel, and corrupt them with additive Gaussian noise. For the convenience of visual inspection of the segmentation results, we provide the volume rendering of the original 3D image and surface rendering of the segmentation results. Figure 6a shows the synthesized 3D cell nuclei image, in which the six nuclei are closely touching the central nucleus. The segmentation results with the proposed method are illustrated in Figure 6b. Figure 6c shows the segmentation results obtained by the global Otsu thresholding method. As we can see, the touching objects are not divided correctly in Figure 6c.
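The synthesis procedure above can be sketched on a small voxel grid as follows. This is an illustrative Python fragment under simplifying assumptions (radii given in voxels rather than millimetres, a smaller grid, and the Gaussian blur step omitted for brevity); the helper names are hypothetical:

```python
import random

def sphere_mask(shape, center, radius):
    """Binary mask (set of voxels) of a sphere on a 3D voxel grid."""
    xs, ys, zs = shape
    cx, cy, cz = center
    return {(x, y, z)
            for x in range(xs) for y in range(ys) for z in range(zs)
            if (x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2 <= radius ** 2}

random.seed(0)
shape = (40, 40, 40)
center = (20, 20, 20)
r0 = 6                                         # central sphere radius (voxels)
mask = sphere_mask(shape, center, r0)

# Six tangent spheres along the +/- axis directions, with varied radii to
# mimic the size variation of real nuclei.
for dx, dy, dz in [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]:
    r = random.randint(4, 8)                   # neighbour radius
    c = (center[0] + dx * (r0 + r),            # tangency: centers are r0 + r apart
         center[1] + dy * (r0 + r),
         center[2] + dz * (r0 + r))
    mask |= sphere_mask(shape, c, r)

# Turn the mask into a noisy gray-level image (voxel -> intensity), the
# additive-Gaussian-noise step of the procedure.
image = {(x, y, z): (200 if (x, y, z) in mask else 20) + random.gauss(0, 10)
         for x in range(40) for y in range(40) for z in range(40)}
```

In the real experiments the masks are additionally blurred with a 3D Gaussian kernel before noise is added, which softens the object boundaries.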
Figure 6. Volume and surface rendering of synthesized 3D cell nuclei image and segmentation results. (a) Volume rendering of the synthesized noisy image. (b) Surface rendering of the segmentation results with the proposed method, in which each color indicates a segmented object. (c) Surface rendering of the segmentation results with the global Otsu thresholding, in which the touching objects are not divided correctly.
We have synthesized seven 3D nuclei images. The average volume overlap for the seven cases is 0.971, with a standard deviation of 0.014. As these results show, our proposed segmentation method achieves substantial volume overlap with the ground truth, indicating the accurate performance of the gradient flow tracking method.
Validation using 3D C. elegans embryo images
In order to further quantitatively evaluate the segmentation method, we applied it to C. elegans embryo images, provided by Dr. Sean Megason of the California Institute of Technology. The nuclei are labelled with histone-EGFP. The microscope used for the C. elegans image set is a Zeiss LSM510 with a 40X C-Apochromat objective and a 488 nm laser. The original voxel size is 0.14 × 0.14 × 0.74 micron in the x, y and z directions. In our experiment, the voxel size is resampled to be isotropic in all directions. The details of the experimental and imaging settings are provided at http://www.digitalfish.org/beta/samples/. Two metrics, oversegmentation and undersegmentation, were utilized for evaluation of the segmentation method. Oversegmentation indicates that a nucleus has been separated into more than one object, or that an extracted object has not been labeled as a nucleus; this is judged against visual inspection by an expert. Undersegmentation indicates that clusters of nuclei have not been appropriately divided, or that a nucleus marked by visual inspection was not extracted at all. Four 3D images of C. elegans embryos were used to evaluate the proposed segmentation method based on the above two metrics. Table 1 shows the performance of the proposed segmentation method. On average, the oversegmentation and undersegmentation rates are 1.59% and 0.39% respectively, indicating a desirable performance by our segmentation method. The errors are probably caused by interpolation, resampling, and inherent noise in the images. For the convenience of visual inspection of the segmentation results, we provide the volume rendering of the original 3D image and surface rendering of the segmentation results. As an example, Figure 7a provides the volume rendering of the original juxtaposed 3D nuclei. The segmentation results, represented by surface boundaries, are shown in Figure 7b.
To validate the segmentation results, two experts manually segmented the nuclei respectively, and then we computed the volume overlap between the automated result and that of each of the two experts. We also calculated the volume overlap between the two experts. The mean value of volume overlap is over 95% for both expert 1 and expert 2, and the standard deviation is around 0.02 for both experts, indicating that the automated 3D cell nuclei segmentation results are comparable to manual segmentation results.
Table 1. The segmentation results of 3D C. elegans embryo images
Figure 7. An example of segmentation result using a C. elegans embryo image. (a) Volume rendering of the original 3D image. (b) Surface rendering of segmentation results. For the convenience of inspection of touching cell nuclei, the results are randomly color-coded.
Validation using 3D zebrafish nuclei images
To further evaluate the proposed segmentation method, we have applied it to ten 3D zebrafish images in which nuclei are labeled. The 3D zebrafish image datasets are from the Holley Lab at Yale University. These are fluorescent confocal images of fixed 12-somite-stage zebrafish embryos stained with propidium iodide. All images were collected on a Bio-Rad 1024 confocal microscope using a 25X Zeiss Neofluor objective. For the convenience of visual inspection of the segmentation results, we provide the volume rendering of the original 3D image and surface rendering of the segmentation results. An example of the original 3D image and nuclei segmentation results is shown in Figure 8, in which it is evident that most of the nuclei are segmented correctly, in spite of the fact that many of the nuclei are touching and have irregular shapes. Figure 9 shows a 2D slice of the original image and the segmentation result shown in Figure 8. The two metrics of oversegmentation and undersegmentation are used to evaluate the segmentation result. Table 2 provides the details of the performance of the segmentation method. On average, each image contains 310 cell nuclei, and the oversegmentation and undersegmentation rates are 4.93% and 5.03% respectively. Furthermore, the average volume overlap is over 90% for both expert 1 and expert 2, and the standard deviation is less than 0.02 for both experts, indicating a desirable performance by our segmentation method.
Figure 8. An example of segmentation results of a 3D cell nuclei image of zebrafish. (a) Volume rendering of the original 3D image. (b) Surface rendering of segmentation results. For the convenience of inspection of touching cell nuclei, the results are randomly color-coded.
Figure 9. 2D slice view of the original image and segmentation result in Figure 8. (a). Original image. (b). The segmented image in which the green curves represent the boundaries of the cell nuclei.
Table 2. The segmentation results of 3D cell nuclei image of zebrafish
Discussion
Our method has several advantages over previous approaches. The major advantage is the ability to robustly segment densely packed, touching, or connected nuclei. Additionally, no sophisticated rules are used; the only assumption is that the centers of nuclei are brighter or darker than a nearby region. The fundamental difference between our method and existing methods lies in the diffused gradient vector information. In existing methods such as thresholding or watershed, intensity is the only information adopted; hence those methods are sensitive to noise in the image, which usually results in oversegmentation. In contrast, in our method the gradient vector diffusion procedure propagates gradient vectors with large magnitudes to the areas with weak gradient vectors and smoothes the noisy gradient vector field. Meanwhile, it preserves the potential structural information of the gradient vector field. For example, when two nuclei are touching each other, the diffused gradient vectors point toward the corresponding centers of the nuclei. This step greatly contributes to the success of touching nuclei segmentation. The disadvantage of this method is that it may have difficulty in processing images of textured blob objects, since in that situation the gradient vectors at the centers of nuclei are cluttered and the underlying assumption is violated. Currently the method is implemented in C/C++, without using any other common library. Without any optimization, it takes less than 50 seconds on an Intel Pentium 4 2.4 GHz machine with 1 GB memory to segment a volume image with a size of 230 × 162 × 80. The running time can be reduced further with a multiresolution implementation and code optimization. After evaluating this method on larger and more diverse image datasets, we intend to release the algorithm to the cell biology community.
Conclusion
We presented a novel, automated algorithm for 3D cell nuclei segmentation based on gradient flow tracking. To validate the efficacy and performance of the proposed segmentation algorithm, we evaluated it by using synthesized and real biological images. The results show that the algorithm is able to segment juxtaposed nuclei correctly, a persistent problem in the field of cellular image analysis.
Methods
Gradient vector diffusion by elastic deformation transformation
Gradient information is an important factor in three-dimensional (3D) nuclei segmentation because, in any given nucleus image, the gradient vectors either point towards the central area of a bright nucleus, or outwards from the central area of a dark nucleus. In practice, however, as one approaches the central area of a nucleus the gradient magnitude becomes very small, and the direction of the gradient vector is usually not trustworthy due to the noise present in the image. Additionally, when nuclei have irregular shapes, the gradient vectors tend to be cluttered. Motivated by these facts, we introduce a physical model that incorporates the diffused gradient vectors from the boundaries of the image to generate a smooth gradient field. Our gradient vector diffusion procedure propagates gradient vectors with large magnitudes to the areas with weak gradient vectors and smoothes the noisy gradient vector field [11]. For a detailed introduction to gradient vector diffusion, we refer to [11]. We adopt an elastic deformation transformation, under which the image is modeled as an elastic sheet warped by an external force field, to achieve gradient vector diffusion. This model has been previously employed for image registration [12,13], where the deformations of boundary points are fixed and the deformation field is then propagated to inner regions of the image by solving the elastic model equation. Here, we extend this model to analyze 3D microscopic nuclei images.
The diffused gradient vector field v(x, y, z) = (u(x, y, z), v(x, y, z), w(x, y, z)), where u, v and w are the three components of the diffused gradient vector along the x, y and z axes respectively, is defined in a 3D image as the solution to the following partial differential equation (PDE), also known as a Navier-Stokes equation, describing the deformation of an elastic sheet [13]:
μ∇²v + (λ + μ)∇(div(v)) + q(∇f − v) = 0     (2)

where ∇² is the Laplacian operator, div is the divergence operator, ∇ is the gradient operator, ∇f is the original gradient vector field, q is an indicator function defined below, and Lamé's coefficients μ and λ refer to the elastic properties of the material. In this paper, we aim to diffuse the gradient vectors toward the central areas of nuclei objects to obtain a gradient flow field. Therefore, f is set to be
f(x, y, z) = G_σ(x, y, z) ∗ I(x, y, z)

where I(x, y, z) is a 3D intensity image, G_σ(x, y, z) is a 3D Gaussian function with standard deviation σ, and ∗ denotes convolution. Note that before computing the convolution and gradient vectors, the images should be interpolated using a spline-based method and resampled to an isotropic voxel size. q is a function indicating whether or not the displacement is prefixed at a position. In our method, the indicator function is set as

q(x, y, z) = 1, if ‖∇f(x, y, z)‖ > Threshold; q(x, y, z) = 0, otherwise.
In our current implementation, the Threshold is set to 0. If the Threshold is large, gradient vectors with small magnitudes are omitted, including both noisy gradient vectors and some useful ones. The Threshold therefore represents a compromise between keeping useful gradient vectors and removing noisy gradient vectors. The model is solved by treating u, v and w as functions of time:
v_t(x, y, z, t) = μ∇²v + (λ + μ)∇(div(v)) + q(∇f − v)

where v_t(x, y, z, t) denotes the partial derivative of v(x, y, z, t) with respect to time t. Writing f_x, f_y and f_z for the three components of ∇f, the equation is decoupled as:

u_t = μ∇²u + (λ + μ) ∂(div(v))/∂x + q(f_x − u)
v_t = μ∇²v + (λ + μ) ∂(div(v))/∂y + q(f_y − v)
w_t = μ∇²w + (λ + μ) ∂(div(v))/∂z + q(f_z − w)
With the finite difference method, by setting the spacing intervals Δx, Δy, Δz and the time interval Δt all to 1, and letting the indices i, j, k and n correspond to x, y, z and t respectively, the equations are approximated as (shown for the u component; the v and w components are analogous):

u^{n+1}_{i,j,k} = u^n_{i,j,k} + μ(u^n_{i+1,j,k} + u^n_{i−1,j,k} + u^n_{i,j+1,k} + u^n_{i,j−1,k} + u^n_{i,j,k+1} + u^n_{i,j,k−1} − 6u^n_{i,j,k}) + (λ + μ)(div(v^n)_{i+1,j,k} − div(v^n)_{i,j,k}) + q_{i,j,k}((f_x)_{i,j,k} − u^n_{i,j,k})
The solution to Equation 2 defines the displacement of each position in a 3D elastic object, where the displacements at some locations are prefixed. In Equation 2, the variable v represents velocity; hence, by the rules of hydromechanics, the second term in Equation 2 denotes the compression level of a compressible fluid, and setting div(v) = 0 represents an incompressible fluid. The terms μ and λ in Equation 2 determine the trade-off between conformability to the prefixed deformation vectors and smoothness of the deformation field [13]: when μ and λ are small, the prefixed deformation vectors are preserved, while large values of μ and λ result in a smoother deformation field. As an example, Figure 1 shows a comparison between the zoomed diffused gradient vector field obtained with the elastic deformation transformation and the original gradient vector field of a slice from a 3D nucleus image. As is clear from Figure 1, the diffused vector field flows more smoothly towards the central areas of nuclei than the original gradient vector field does. Moreover, even though the nuclei are closely juxtaposed, the diffused flow field splits along a clear boundary and flows towards the corresponding central area of each nucleus. This property greatly contributes to the success of 3D nucleus segmentation.
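The iterative scheme can be illustrated with a 1D analogue. The sketch below is a simplification, not the authors' 3D implementation: in 1D the ∇(div(v)) term coincides with the Laplacian term and is folded into μ, and the function name and parameter values are hypothetical. It shows the essential behaviour: vectors relax toward the smoothed image gradient where q = 1, and diffuse smoothly elsewhere.

```python
def diffuse_gradient_1d(f, mu=0.5, threshold=0.0, n_iter=200, dt=0.2):
    """1D analogue of elastic gradient-vector diffusion.

    f: list of image intensities. The image gradient f_x is diffused by
    iterating  u <- u + dt * (mu * laplacian(u) + q * (f_x - u)),
    where q = 1 wherever |f_x| > threshold (the prefixed vectors).
    """
    n = len(f)
    fx = [0.0] * n
    for i in range(1, n - 1):                   # central-difference gradient
        fx[i] = (f[i + 1] - f[i - 1]) / 2.0
    q = [1.0 if abs(g) > threshold else 0.0 for g in fx]
    u = fx[:]                                   # initialize with the raw gradient
    for _ in range(n_iter):
        lap = [0.0] * n
        for i in range(1, n - 1):
            lap[i] = u[i + 1] - 2 * u[i] + u[i - 1]
        u = [u[i] + dt * (mu * lap[i] + q[i] * (fx[i] - u[i])) for i in range(n)]
    return u

# A bright blob on a dark background, peak at index 5.
f = [0, 0, 1, 4, 9, 12, 9, 4, 1, 0, 0]
u = diffuse_gradient_1d(f)
# u[3] > 0 (rightward flow) and u[7] < 0 (leftward flow):
# the diffused vectors point toward the center of the blob.
```

The same fixed-point-free update, applied per component in 3D with the divergence term included, yields the gradient flow field used by the tracking step.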
Figure 1. The 3D view of gradient vector field and diffused gradient vector field with elastic deformation transformation of a slice cropped from a 3D cell nuclei image. (a). A slice of the 3D image. (b). The original gradient vector field. (c). The diffused gradient flow field with elastic deformation transformation. Obviously, the diffused vector field with elastic deformation transformation smoothly flows toward the central areas of cell nuclei. (d). Zoomed view of (b). (e). Zoomed view of (c).
Gradient flow tracking
In the diffused gradient vector field, the vectors flow toward sinks, which correspond to the centers of nuclei. To follow the vectors until they stop at the sinks, the gradient flow tracking procedure is performed as follows. From any starting point x = (x, y, z), the next point x' = (x', y', z') that x flows to in the diffused gradient field is computed as:

x' = x + round( v(x) / ‖v(x)‖ )     (3)
Here, v(x) is the diffused gradient vector at point x, and round(·) returns the nearest integer for each component. The angle between the diffused gradient vectors at these two adjacent points is determined as:

θ = arccos( ⟨v(x), v(x')⟩ / (‖v(x)‖ ‖v(x')‖) )     (4)
When the angle between two consecutive diffused gradient vectors is less than 90 degrees, the gradient flow tracking procedure continues. Otherwise, the procedure is stopped and a sink is reached. In this way, the vectors at each point along the tracking curve define a smooth path leading to a sink. In practice, segmentation of an image into nuclei can be obtained by starting a gradient flow tracking procedure from every point in the image. The set of points that flow to the same sink naturally forms the attraction basin of that sink, and all points in the same attraction basin are segmented as one object (nucleus). Since all points are tracked independently, an attraction basin can be of arbitrary shape. After the gradient flow tracking step, if two sinks are very close to each other, their attraction basins are combined into a larger attraction basin; in all our experiments, basins are combined when the distance between their sinks is less than three pixels. Figure 2 shows an illustration of the result of the gradient flow tracking based method.
Figure 2. Illustration of the gradient flow tracking based segmentation method. (a). Clustered gradient vectors in 3D space. (b). Zoomed view of selected clusters of gradient vectors. Each color represents a separated cell.
The algorithm of the gradient flow tracking is summarized as follows.
1. Randomly select a point x as the initial point x^{0}.
2. Obtain x^{n + 1 }(n = 0,1,2...) using Equation 3 based on x^{n}.
3. Compute the angle θ_n between the diffused gradient vectors at x^{n + 1} and x^{n} with Equation 4. If θ_n is larger than 90°, stop.
4. Replace x^{n }with x^{n + 1}. Return to step 2.
Gradient flow tracking is applied to each point in the image, and all points in the same attraction basin are grouped into the same cluster. Since it is time-consuming to run the tracking algorithm for every point, gradient flow tracking is not applied to points that already lie on the gradient flow trajectory of a previously processed point; instead, these visited points are directly associated with the sink to which that path flows. This improvement not only speeds up the segmentation, but also yields reproducible segmentation results.
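The tracking loop, the 90-degree stopping rule, and the trajectory-caching speed-up can be sketched in 2D as follows. This is an illustrative Python fragment on a hand-built toy vector field, not the authors' implementation; the sink-merging step (basins whose sinks are within three pixels) is omitted for brevity:

```python
import math

def track_all(vec):
    """Gradient flow tracking over a discrete 2D vector field.

    vec maps each grid point (x, y) to its diffused gradient vector.
    Each point is stepped one rounded unit along the local vector until the
    angle between consecutive vectors reaches 90 degrees or more; that point
    is the sink. Points already on a labelled trajectory reuse its sink
    directly (the caching speed-up described above).
    """
    def step(p):
        vx, vy = vec[p]
        n = math.hypot(vx, vy)
        if n == 0:
            return p
        return (p[0] + round(vx / n), p[1] + round(vy / n))

    sink_of = {}
    for start in vec:
        p, path = start, []
        while p not in sink_of:
            q = step(p)
            ax, ay = vec[p]
            bx, by = vec.get(q, (0, 0))
            if q == p or ax * bx + ay * by <= 0:   # angle >= 90 deg: sink reached
                sink_of[p] = q
                break
            path.append(p)
            p = q
        sink = sink_of[p]
        for r in path:                             # label the whole trajectory
            sink_of[r] = sink
    return sink_of

# Toy field on a 7 x 3 grid: vectors point toward (1, 1) on the left and
# (5, 1) on the right, imitating two touching nuclei.
sgn = lambda a: (a > 0) - (a < 0)
vec = {(x, y): (sgn((1 if x <= 3 else 5) - x), sgn(1 - y))
       for x in range(7) for y in range(3)}
basins = track_all(vec)
print(len(set(basins.values())))   # 2: one attraction basin per center
```

Every grid point is visited once, so the cost is linear in the number of voxels plus the total trajectory length, which is what makes whole-volume tracking practical.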
Local adaptive thresholding
After the gradient flow tracking step, the image is segmented into smaller regions, each of which is expected to contain only a single nucleus. The nuclei segmentation problem is thereby turned into a binary classification problem: distinguishing the nucleus from its background within a small region. Therefore, an intensity thresholding method is appropriate for extracting the nuclei from the background. One option is the method of Otsu [14], which can extract the nucleus from each attraction basin; another is a more involved method employing techniques such as graph cut or level set. Here, we employ the locally adaptive method of Otsu [14] because of its ability to deal with situations where the intensities of nuclei and background are not constant across an image. In each segmented region, pixels whose intensities are larger than the automatically determined local Otsu threshold are grouped as nucleus; otherwise they are grouped as background. Finally, an optional procedure is performed after extracting the nuclei to eliminate small regions that contain fewer pixels than a threshold.
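Otsu's criterion, applied per attraction basin, can be sketched as follows. This is an illustrative Python fragment assuming 8-bit integer intensities (the function name and the toy basin values are hypothetical); it selects the threshold T that maximizes the between-class variance of the basin's histogram:

```python
def otsu_threshold(values):
    """Otsu's method on 8-bit intensities: return the threshold T that
    maximizes between-class variance; voxels with intensity > T are
    classified as nucleus within the basin."""
    hist = [0] * 256
    for v in values:
        hist[v] += 1
    total = len(values)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var, w0, sum0 = 0, -1.0, 0, 0
    for t in range(256):
        w0 += hist[t]                      # class 0: intensities <= t
        if w0 == 0:
            continue
        w1 = total - w0                    # class 1: intensities > t
        if w1 == 0:
            break
        sum0 += t * hist[t]
        m0, m1 = sum0 / w0, (sum_all - sum0) / w1
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# One basin: dim background voxels around 20, a bright nucleus around 200.
basin = [18, 20, 22, 25, 19, 21, 198, 202, 205, 200, 199]
t = otsu_threshold(basin)
nucleus = [v for v in basin if v > t]      # the five bright voxels survive
```

Because the threshold is recomputed inside each basin, a dim nucleus in one region and a bright nucleus in another are both extracted correctly, which a single global threshold cannot achieve.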
Summary of 3D cell nuclei segmentation method
The algorithm of the 3D cell nuclei segmentation method based on gradient flow tracking is summarized as follows.
1. Obtain the diffused gradient vector field using the elastic deformation transformation.
2. From each point, run the gradient flow tracking procedure, and label each passed pixel with a converged sink position.
3. Combine the attraction basins of the sinks whose distance is less than three pixels.
4. Assign the same label to the points in the same attraction basin.
5. Perform local adaptive thresholding in each attraction basin to extract the nucleus.
6. Optional procedure: eliminate regions with fewer pixels than a threshold T.
Running example
Figure 3 provides an illustration of 3D nuclei segmentation on a 2D slice. In Figure 3b, we show the initial subdivision of the image into nucleus areas after the gradient flow tracking procedure. As is clear from these images, each nucleus is enclosed by a boundary. The nuclei segmentation results after the adaptive thresholding are provided in Figure 3c, and their randomly color-coded representation is shown in Figure 3d.
Figure 3. Illustration of 3D cell nuclei segmentation on a 2D slice. (a) A slice from a 3D cell nuclei image. (b) Boundaries of small regions overlaid on the slice. The boundaries are obtained by the method in Section 2.2. (c) Edges of cell nuclei overlaid on the slice after the adaptive thresholding step in Section 2.3. (d) Randomly color-coded extracted cells.
Figure 4 provides an illustration of the 3D nuclei segmentation procedure in 3D space. In Figure 4a, the cross-sectional views of a 3D image are shown. Figure 4b renders the boundary surfaces of extracted 3D nuclei. For the convenience of inspection of touching nuclei, the extracted nuclei are randomly color-coded.
Figure 4. Illustration of cell nuclei segmentation in 3D space. (a) Cross-sectional views of a 3D cell nuclei image. (b) Volume rendering of the 3D cell nuclei image. (c) Surface rendering of the extracted 3D cell nuclei. For the convenience of inspection of touching cell nuclei, the results are randomly color-coded. (d) Zoomed view of the results.
Authors' contributions
GL and TL proposed the algorithm. GL implemented the algorithm. JN helped implement the algorithm. AT and STCW verified the algorithm and the result. AM and SH acquired the image data and provided biological input to this work.
Additional file 1. The computer program, called CellSegmentation3D, loads a 3D Analyze format image (the suffix ".img" or ".hdr" is not needed in the input image name), and the segmentation result is also saved in Analyze format. For each 3D input image, the program outputs two results: 1) the segmentation result, in which all voxels belonging to the same cell are labelled with the same unique intensity; 2) the boundary map that separates the segmented cells. Usage: CellSegmentation3D image_Input -f fusion_threshold -m min_Region -d diffusion_iteration -s sigma. Default parameters: fusion_threshold 3, min_Region 50, diffusion_iteration 15, sigma 1.0. For example: CellSegmentation3D elegans0101 -f 3 -m 35. This will produce two result images: elegans0101_edge.img and elegans0101_segmentation.img. Note that the segmentation results can be inspected with many visualization tools, provided that they can load 3D Analyze format images. The NIH ImageJ program (freely downloadable at http://rsb.info.nih.gov/ij/) is recommended.
Format: EXE Size: 232KB Download file
Additional file 2. The elegans0101.img is an example image file.
Format: IMG Size: 2.8MB Download file
Additional file 3. The elegans0101.hdr is the header file associated with the image.
Format: HDR Size: 1KB Download file
Acknowledgements
This work is funded by a Bioinformatics Research Center Program Grant from Harvard Center for Neurodegeneration and Repair, Harvard Medical School (STCW). We would like to thank Dr. Sean Megason of California Institute of Technology for providing the C. elegans embryo images.
References

1. Holley SA, Geisler R, Nüsslein-Volhard C: Control of her1 expression during zebrafish somitogenesis by a Delta-dependent oscillator and an independent wavefront activity. Genes & Dev 2000, 14:1678-1690.
2. Jülich D, Hwee LC, Round J, Nicoliaje C, Scroeder J, Davies A, Geisler R, Lewis J, Jiang YJ, Holley SA: beamter/deltaC and the role of Notch ligands in the zebrafish somite segmentation, hindbrain neurogenesis and hypochord differentiation. Dev Biol 2005, 286:391-404.
3. Lin G, Adiga U, Olson K, Guzowski J, Barnes C, Roysam B: A hybrid 3D watershed algorithm incorporating gradient cues and object models for automatic segmentation of nuclei in confocal image stacks. Cytometry 2003, 56A:23-36.
4. Lin G, Chawla MK, Olson K, Guzowski JF, Barnes CA, Roysam B: Hierarchical, model-based merging of multiple fragments for improved three-dimensional segmentation of nuclei. Cytometry 2005, 63A:20-33.
5. Umesh Adiga PS, Chaudhuri BB: An efficient method based on watershed and rule-based merging for segmentation of 3-D histopathological images. Pattern Recognition 2001, 34:1449-1458.
6. Belien JAM, Ginkel HAHM, Tekola P, Ploeger LS, Poulin NM, Baak JPA, Diest PJ: Confocal DNA Cytometry: A Contour-Based Segmentation Algorithm for Automated Three-Dimensional Image Segmentation. Cytometry 2002, 49:12-21.
7. Wahlby C, Sintorn IM, Erlandsson F, Borgefors G, Bengtsson E: Combining intensity, edge and shape information for 2D and 3D segmentation of cell nuclei in tissue sections. J Microsc 2004, 215:67-76.
8. Dufour A, Shinin V, Tajbakhsh S, Guillen-Aghion N, Olivo-Marin JC, Zimmer C: Segmenting and Tracking Fluorescent Cells in Dynamic 3-D Microscopy with Coupled Active Surfaces. IEEE Trans Image Processing 2005, 14:1396-1410.
9. Sarti A, de Solorzano CO, Locket S, Malladi R: A Geometric Model for 3-D Confocal Image Analysis. IEEE Trans Biomedical Engineering 2000, 47:1600-1609.
10. Vincent L, Soille P: Watersheds in digital spaces: an efficient algorithm based on immersion simulations. IEEE Trans Pattern Anal Mach Intell 1991, 13:583-598.
11. Xu C, Prince J: Snakes, shapes, and gradient vector flow. IEEE Trans Image Processing 1998, 7:359-369.
12. Bajcsy R, Kovacic S: Multiresolution elastic matching. Computer Vision, Graphics and Image Processing 1989, 46:1-21.
13. Davatzikos C, Prince J, Bryan R: Image Registration Based on Boundary Mapping. IEEE Trans Med Imaging 1996, 15:112-115.
14. Otsu N: A threshold selection method from gray-level histograms. IEEE Trans Systems, Man, and Cybernetics 1979, 9:62-66.
15. Ortiz de Solorzano C, Garcia Rodriguez E, Jones A, Pinkel D, Gray JW, Sudar D, Lockett SJ: Segmentation of confocal microscope images of cell nuclei in thick tissue sections. J Microsc 1999, 193(Pt 3):212-226.
16. De Solorzano CO, Malladi R, Lelievre SA, Lockett SJ: Segmentation of nuclei and cells using membrane related protein markers. J Microsc 2001, 201(Pt 3):404-415.