Figure 1 illustrates the workflow of the proposed approach for synapse detection. After introducing the image stacks used in this paper, we describe each step of the method in detail.
Materials
The image data used in this study were obtained from the Institute of Neuroscience, State Key Laboratory of Neuroscience, Chinese Academy of Sciences, Center for Excellence in Brain Science and Intelligence Technology. Transgenic mice (YFP-H line), both male and female, were imaged using a two-photon microscope (Sutter) controlled by ScanImage (Janelia). The auditory cortex of the mice was surgically exposed and covered with a glass cranial window for repeated two-photon imaging in vivo; surgical details are described in Y. Yang [19]. Image stacks were acquired from the cortical surface down to a depth of 100–150 μm at 0.7 μm intervals, using a 25× objective with 1.05 numerical aperture (Olympus). A Ti:sapphire laser (Spectra-Physics) tuned to 920 nm was used as the light source. YFP (Yellow Fluorescent Protein) and GFP (Green Fluorescent Protein) signals were collected using 495/40 and 535/50 filters (Chroma). The 535/50 filter (Channel 1) collected both GFP and YFP signals, and the 495/40 filter (Channel 2) collected GFP-only signals; the YFP-only images were obtained by subtracting the GFP signals from the Channel 1 signals [19].
The dual-color two-photon images, shown in Fig. 2, contain a red channel and a green channel representing the YFP images (containing both dendrites and axons) and the GFP images (containing long-range projecting axons only), respectively; spine-bouton pairs are taken to be synapses. The x-y resolution and the z resolution of the image data are 137 nm/pixel and 700 nm/pixel respectively, and the image size (x-y) is 512 × 512 pixels.
Detection of axonal boutons
In this section, we provide the algorithmic details of axonal bouton detection. The proposed algorithm is divided into three parts. First, a 3D deconvolution operation is applied to counter the noise in the original image stacks. Next, we enhance the bright swellings in the deconvolved images and segment them. Finally, we identify true axonal boutons based on a series of criteria. The whole workflow for detecting axonal boutons is shown in Fig. 3.
3D deconvolution
Although confocal microscopy images are known to be sharper than standard epifluorescence ones, they are still inevitably degraded by Poisson noise and residual out-of-focus light due to photon-limited detection [20]. Thus, several deconvolution methods have been proposed. In this study, we adopt the Deconvolution Approach for the Mapping of Acoustic Sources (DAMAS), which decouples the array design and processing influence from the noise being measured using a simple and robust algorithm [21]. The details of 3D deconvolution operation implemented in ImageJ [22] are shown in Appendix.
One deconvolved axon image is depicted in Fig. 3b. To demonstrate the performance of the 3D deconvolution operation, we show an axonal bouton indicated by the red rectangle in Fig. 3b, and then show two states of this bouton in Fig. 3c, with the deconvolved version on the top and the original one on the bottom. A significant difference can be seen from the detailed comparison, showing that the 3D deconvolution operation helps to identify the axonal boutons.
Enhancing bright swellings
Thresholding a deconvolved image does not necessarily yield perfect, or even good, segmentations, because the intensity range of different axonal boutons can vary dramatically. A low threshold retains the bright axon shaft, while a high threshold eliminates weak axonal boutons. To avoid losing data, we first enhance the bright swellings.
Statistics from the corresponding electron microscopy data indicate that the average diameter of terminal boutons is 1.0 μm. With a pixel size of 137 nm, the average radius of an axonal bouton is therefore about 4 pixels. We randomly select an axonal bouton, shown in Fig. 4a, and plot its corresponding intensity image in Fig. 4b. Note that the axonal bouton has a "rounded" profile. The image in Fig. 4b looks very similar to the three-dimensional Gaussian surface plotted in Fig. 4c, suggesting that it is reasonable to model the intensity of an axonal bouton with a three-dimensional Gaussian surface,
$$ R(x,y) = C\exp\left(-\frac{{{{\left(x - {x_{0}}\right)}^{2}} + {{\left(y - {y_{0}}\right)}^{2}}}}{{2{\delta^{2}}}}\right), $$
(1)
where C is a constant, \((x_{0},y_{0})\) is the coordinate of the maximum-magnitude point, and δ is the variance of the Gaussian surface. A very small fraction of the axonal boutons can be approximated by a ridge, so we construct a Hessian-based ridge detector. Let \(m=(x-x_{0})^{2}+(y-y_{0})^{2}\). The intensity of the enhanced image is set to the additive inverse of the eigenvalue with the minimum absolute value, i.e. [Appendix A],
$$ \lambda (m) = \left\{{\begin{array}{c} { - \exp\left(- \frac{m}{{2{\delta^{2}}}}\right)\left(m - 2{\delta^{2}}+\sqrt {{{\left(m + 2{\delta^{2}}\right)}^{2}} - 4{\delta^{4}}} \right)/2{\delta^{5}},m \le 2{\delta^{2}}}\\ { - \exp\left(- \frac{m}{{2{\delta^{2}}}}\right)\left(m - 2{\delta^{2}} - \sqrt {{{\left(m + 2{\delta^{2}}\right)}^{2}} - 4{\delta^{4}}} \right)/2{\delta^{5}},m > 2{\delta^{2}}.} \end{array}} \right. $$
(2)
Here we analyze three cases:
For the parabolic line profile, the magnitude of the second derivative at the extracted position is always maximal at the line position [23]. From this we conclude that the variance δ and the radius r of the axonal bouton are related by \(\delta =r/\sqrt 3\) [Appendix B]; combined with the measured bouton radius, we therefore set the variance to \(\delta =r/\sqrt 3\) in this study.
To allow visual interpretation, we plot the chosen eigenvalue of model (1) in Fig. 5, from which we can see that the central region is enhanced while the surrounding region weakens gradually. This provides the theoretical basis for image enhancement and segmentation; inspired by [23–25], we select the variance given above. The enhanced image of one bright swelling shows a variation tendency that conforms to the theoretical profile of Fig. 5 almost everywhere, supporting the correctness of our analysis. Compared to the image in Fig. 3b, the enhanced image shown in Fig. 3d has an advantage for weaker axonal boutons because their profiles become more obvious. The following work is based on the enhanced image in Fig. 6, and the detail is stated in Algorithm 2.
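As a concrete illustration of this enhancement step, the following Python sketch computes a Gaussian-derivative Hessian on a single slice at the scale δ = r/√3 and keeps the additive inverse of the eigenvalue with the smallest absolute value, as in Eq. (2). It is a minimal sketch of the idea rather than the authors' implementation; the function name, the default radius, and the use of SciPy for the Hessian are our own assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance_swellings(image, radius_px=4):
    """Hessian-based enhancement of round, bright swellings (boutons).

    The scale is set from the bouton radius via delta = r / sqrt(3), and the
    output is the additive inverse of the Hessian eigenvalue with the
    smallest absolute value, following Eq. (2)."""
    delta = radius_px / np.sqrt(3.0)
    img = image.astype(np.float64)
    # Second-order Gaussian derivatives (rows = y, columns = x)
    hxx = gaussian_filter(img, delta, order=(0, 2))
    hyy = gaussian_filter(img, delta, order=(2, 0))
    hxy = gaussian_filter(img, delta, order=(1, 1))
    # Eigenvalues of the 2x2 Hessian at every pixel
    half_diff = np.sqrt(((hxx - hyy) / 2.0) ** 2 + hxy ** 2)
    mean = (hxx + hyy) / 2.0
    lam1, lam2 = mean + half_diff, mean - half_diff
    # Keep the eigenvalue with the smaller absolute value and invert its sign
    lam_min_abs = np.where(np.abs(lam1) < np.abs(lam2), lam1, lam2)
    return -lam_min_abs
```

In practice the enhanced result can then be thresholded to obtain the candidate bouton regions used in the next subsection.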
Obtaining axonal boutons
As discussed in the last section, applying a relatively low threshold inevitably generates false positives. Fortunately, the shapes of axonal boutons are homogeneous and each bouton has a single maximum point. We therefore first find local maximum points as candidate points for axonal boutons, a simple but effective strategy. The detail is stated in Algorithm 1.
We then check whether each region in the resulting segmentation contains a local maximum point and delete the regions that lack one. On this basis, we compute statistical characteristics of each remaining region, including its eccentricity, major axis, and minor axis, and retain the regions whose characteristics resemble those of a disk. Finally, we record the location of the retained regions and require that the peak intensity of each region is more than three times brighter than its axon shaft in the original image [19]. The final result of the axonal bouton analysis is shown in Fig. 3e.
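A hypothetical sketch of this selection step is given below: it segments the enhanced image, keeps only regions containing a local maximum, filters for disk-likeness via eccentricity, and applies the brightness criterion relative to a rough axon-shaft estimate. The thresholds, the shaft-intensity proxy, and the function name are illustrative assumptions, not the authors' exact criteria.

```python
import numpy as np
from scipy.ndimage import label, maximum_filter
from skimage.measure import regionprops

def select_bouton_regions(enhanced, original, threshold,
                          max_eccentricity=0.9, brightness_ratio=3.0):
    """Keep segmented regions that (i) contain a local intensity maximum,
    (ii) are roughly disk-shaped, and (iii) are much brighter than the
    local axon shaft.  Returns the centroids of the accepted regions."""
    binary = enhanced > threshold
    labels, _ = label(binary)
    # Local maxima of the enhanced image (3x3 neighborhood) within the mask
    local_max = (enhanced == maximum_filter(enhanced, size=3)) & binary
    shaft_intensity = np.median(original[binary])   # crude proxy for the axon shaft
    boutons = []
    for region in regionprops(labels):
        coords = tuple(region.coords.T)
        if not local_max[coords].any():
            continue                                 # no local maximum -> discard
        if region.eccentricity > max_eccentricity:
            continue                                 # not disk-like -> discard
        if original[coords].max() < brightness_ratio * shaft_intensity:
            continue                                 # not bright enough -> discard
        boutons.append(region.centroid)
    return boutons
```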
Detection of dendritic spines
In this section, we explicate the details of our method for detecting dendritic spines. Dendritic spines are small, with spine head volumes ranging from 0.01 μm³ to 0.8 μm³. According to their shape, dendritic spines can be classified into the following types: thin, mushroom, and stubby, as shown in Fig. 7 [26]. The variable shape of these spines is related to the strength and maturity of the synapses [27]. Based on these forms, it is therefore reasonable to locate the dendritic spines by looking for spur pixels that are connected to bifurcation points.
The proposed algorithm consists of three parts: enhancement of the line structure in the images after pretreatment; segmentation of dendrites and extraction of their skeletons; and identification of the dendritic spines based on the dendritic skeletons. The workflow is shown in Fig. 8 and Algorithm 4.
Enhancing line structure
Before performing other operations, we first normalize the images to reduce the impact of noise by using the following formula:
$$ I(x,y) = \frac{I(x,y) - I_{min}}{I_{max} - I_{min}}, $$
(3)
where I(x,y) is the intensity value of I at (x,y), and \(I_{max}\) and \(I_{min}\) represent the maximum and minimum intensity values of the image, respectively.
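As a trivial illustration, Eq. (3) corresponds to the following helper (a sketch; the small epsilon guarding against a constant image is our own addition):

```python
import numpy as np

def normalize(image):
    """Min-max normalization of Eq. (3): rescale intensities to [0, 1]."""
    img = image.astype(np.float64)
    return (img - img.min()) / (img.max() - img.min() + 1e-12)
```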
Next, we enhance the linear structure. As shown in Fig. 9, the intensity value of each section of the dendritic linear structure can be modeled as a Gaussian curve [23], which can be written as
$$ I(x') = {C_{den}}\exp\left(- \frac{{x'^{2}}}{{2{\sigma^{2}}}}\right) = {C_{den}}\exp\left(- \frac{{{{\left(x\cos \theta - y\sin \theta \right)}^{2}}}}{{2{\sigma^{2}}}}\right), $$
(4)
where \(x'\) is the abscissa in the Cartesian coordinate system \(X'\)-\(Y'\); x and y are the abscissa and ordinate in the Cartesian coordinate system X-Y, respectively; \(C_{den}\) is the maximum pixel value of the cross section; σ is the variance of the Gaussian curve; and θ is the angle between the cross section and the main direction of the linear structure, as shown in Fig. 9a. According to [23], the relationship between the variance σ and the radius w of the line structure is \(\sigma = w/\sqrt 3\) [Appendix C].
The average diameter of the dendritic structure considered here is less than 0.9 μm and the x-y resolution is 137 nm/pixel, so the average radius w is about 3 pixels.
As in the previous part, we construct a Hessian-based ridge detector and take the additive inverse of the eigenvalue with the maximum absolute value as the intensity of the enhanced image [Appendix D]:
$$ I_{enh}(x,y) = \left\{ {\begin{array}{c} { - {\sigma^{2}}\lambda (x,y),~\text{if}~\lambda (x,y) < 0}\\ {0,~\text{otherwise}} \end{array}} \right. $$
(5)
The approach for enhancing line structure can be summarized as follows:
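As an illustration of this line-enhancement step, the sketch below evaluates a Gaussian-derivative Hessian at the scale σ = w/√3 and applies Eq. (5), outputting −σ²λ where the eigenvalue with the largest absolute value is negative. The function name, the default radius, and the SciPy-based Hessian are assumptions on our part, not the authors' code.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance_lines(image, radius_px=3):
    """Hessian-based line enhancement following Eq. (5): where the
    largest-magnitude eigenvalue is negative (a bright ridge), output
    -sigma^2 * lambda; elsewhere output zero."""
    sigma = radius_px / np.sqrt(3.0)
    img = image.astype(np.float64)
    hxx = gaussian_filter(img, sigma, order=(0, 2))
    hyy = gaussian_filter(img, sigma, order=(2, 0))
    hxy = gaussian_filter(img, sigma, order=(1, 1))
    half_diff = np.sqrt(((hxx - hyy) / 2.0) ** 2 + hxy ** 2)
    mean = (hxx + hyy) / 2.0
    lam1, lam2 = mean + half_diff, mean - half_diff
    # Eigenvalue with the largest absolute value at each pixel
    lam_max_abs = np.where(np.abs(lam1) >= np.abs(lam2), lam1, lam2)
    return np.where(lam_max_abs < 0, -sigma ** 2 * lam_max_abs, 0.0)
```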
Extracting skeleton and finding branch points
We use the following Algorithm 6 to obtain the dendritic skeleton C (Fig. 8(5)) on the basis of the binary image B (Fig. 8(4)), which is itself obtained by segmenting the enhanced image I (Fig. 8(3)) with a suitable threshold [28]:
1. In the first sub-iteration, delete pixel p if and only if conditions (a), (b), and (c) are all satisfied.

2. In the second sub-iteration, delete pixel p if and only if conditions (a), (b), and (d) are all satisfied.

- Condition (a): \(X_{H}(p)=1\), where \({X_{H}}(p) = \sum \limits _{i = 1}^{4}{b_{i}}\), \({b_{i}} = \left \{ {\begin {array}{*{20}{c}} { 1,~\text{if}~x_{2i-1} = 0~\text {and}~(x_{2i} = 1~\text{or}~x_{2i+1} = 1)}\\ { 0,~\text {otherwise}} \end {array}} \right.\), and \(x_{1},x_{2},\ldots,x_{8}\) are the values of the eight neighbors of p, starting from the east neighbor and numbered in counter-clockwise order.

- Condition (b): \(2 \le \min\{n_{1}(p),n_{2}(p)\} \le 3\), where \({n_{1}}(p) = \sum \limits _{k = 1}^{4} {{x_{2k - 1}} \cup {x_{2k}}}\), \({n_{2}}(p) = \sum \limits _{k = 1}^{4} {{x_{2k}} \cup {x_{2k + 1}}}\).

- Condition (c): \(({x_{2}} \cup {x_{3}} \cup {\overline x_{8}}) \cap {x_{1}} = 0\)

- Condition (d): \(({x_{6}} \cup {x_{7}} \cup {\overline x_{4}}) \cap {x_{5}} = 0\)
The two sub-iterations together make up one iteration of the algorithm and the iterations are repeated until the resulting image stops changing. The approach for extracting skeletons can be summarized as follows:
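The sub-iteration scheme can be prototyped directly from conditions (a)–(d); the sketch below is a straightforward, unoptimized Python rendering of those conditions (the function name and the zero-padding strategy are our own), not the authors' implementation.

```python
import numpy as np

def _neighbors(img, r, c):
    """Eight neighbors x1..x8 of pixel (r, c), starting from the east
    neighbor and proceeding counter-clockwise."""
    return [img[r, c + 1], img[r - 1, c + 1], img[r - 1, c], img[r - 1, c - 1],
            img[r, c - 1], img[r + 1, c - 1], img[r + 1, c], img[r + 1, c + 1]]

def thin_binary_image(binary):
    """Iteratively thin a binary image with the two sub-iterations described
    above, deleting pixels that satisfy conditions (a), (b) and (c)/(d)."""
    img = np.pad(np.asarray(binary, dtype=np.uint8), 1)
    changed = True
    while changed:
        changed = False
        for sub in (0, 1):
            to_delete = []
            for r, c in zip(*np.nonzero(img)):
                x = [int(v) for v in _neighbors(img, r, c)]  # x[0]..x[7] == x1..x8
                xc = x + [x[0]]                              # wrap so xc[8] == x1
                # Condition (a): crossing number X_H(p) == 1
                xh = sum(1 for i in (0, 2, 4, 6)
                         if xc[i] == 0 and (xc[i + 1] == 1 or xc[i + 2] == 1))
                # Condition (b): 2 <= min(n1, n2) <= 3
                n1 = sum(xc[2 * k] | xc[2 * k + 1] for k in range(4))
                n2 = sum(xc[2 * k + 1] | xc[2 * k + 2] for k in range(4))
                # Condition (c) in the first sub-iteration, (d) in the second
                if sub == 0:
                    cd = ((x[1] | x[2] | (1 - x[7])) & x[0]) == 0
                else:
                    cd = ((x[5] | x[6] | (1 - x[3])) & x[4]) == 0
                if xh == 1 and 2 <= min(n1, n2) <= 3 and cd:
                    to_delete.append((r, c))
            for r, c in to_delete:        # delete in parallel after the scan
                img[r, c] = 0
            changed = changed or bool(to_delete)
    return img[1:-1, 1:-1].astype(bool)
```

Where speed matters, scikit-image's `skimage.morphology.thin` provides a comparable two-sub-iteration thinning and could be used instead.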
In this study, branch points are found by a two-dimensional convolution of the binary skeleton image with a 3-by-3 filter whose value is 0 at the four corners and 1 at the remaining positions. Points with a convolution response equal to or greater than 4 are considered branch points. Figure 8(6) illustrates the branch point detection results.
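A short sketch of this convolution-based branch-point detection is shown below; the kernel layout follows the description above, while the function name and the use of scipy.ndimage are our own assumptions.

```python
import numpy as np
from scipy.ndimage import convolve

def find_branch_points(skeleton):
    """Convolve the binary skeleton with a 3x3 cross-shaped kernel
    (0 at the four corners, 1 elsewhere); skeleton pixels whose response
    is >= 4 are reported as branch points."""
    kernel = np.array([[0, 1, 0],
                       [1, 1, 1],
                       [0, 1, 0]], dtype=np.uint8)
    response = convolve(skeleton.astype(np.uint8), kernel, mode='constant')
    return (response >= 4) & skeleton.astype(bool)
```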
Locating dendritic spines
We locate the suspected dendritic spines as follows: (1) remove the spur pixels of the dendritic skeletons; the removed pixels are the putative locations of dendritic spines; (2) within a properly sized region of the skeleton centered on a bifurcation, if there is an alternative point connected to the branch point, we consider that alternative point to indicate the position of a dendritic spine. This process is illustrated in Fig. 10 and the detail is stated in Algorithm 4.
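A minimal sketch of the first step (spur removal) is shown below: skeleton end points, i.e. pixels with exactly one 8-neighbor, are stripped repeatedly and recorded as putative spine locations. The iteration count and the function name are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import convolve

def remove_spurs(skeleton, iterations=4):
    """Strip skeleton end points (pixels with exactly one 8-neighbor) for a
    fixed number of iterations; the removed pixels are putative spine sites."""
    skel = np.asarray(skeleton, dtype=np.uint8).copy()
    kernel = np.ones((3, 3), dtype=np.uint8)
    removed = np.zeros(skel.shape, dtype=bool)
    for _ in range(iterations):
        neighbor_count = convolve(skel, kernel, mode='constant') - skel
        endpoints = (skel == 1) & (neighbor_count == 1)
        if not endpoints.any():
            break
        removed |= endpoints
        skel[endpoints] = 0
    return removed, skel
```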
Filtering points on axons
The transgenic mouse line used in this study is YFP-H, in which a subset of layer 5 cortical neurons express YFP. The YFP signals in these images therefore contain both dendrites and axons, so when searching for dendritic spines it is essential to determine whether a candidate point lies on an axon. For each structure of a proper size centered on a suspected spine, illustrated in Fig. 11, we use the ratio of its area to its perimeter, together with its average intensity, as the screening criteria.
As shown in Fig. 12, the positions marked with red circles are the results before screening and the positions marked with green plus signs are the results after screening. Positions marked only with a red circle are likely locations on axons rather than spines. The detail is presented in Algorithm 7.
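The screening step can be prototyped as follows; the window size, the area-to-perimeter and intensity thresholds, and the choice of the largest connected component inside the window are all illustrative assumptions rather than the authors' exact parameters.

```python
import numpy as np
from skimage.measure import label, regionprops

def screen_spine_candidates(candidates, binary, original, window=15,
                            min_area_perimeter=1.5, min_mean_intensity=0.3):
    """Keep only suspected spines whose local structure is compact (high
    area/perimeter ratio) and bright enough; thin, dim structures are
    treated as axon-like and rejected."""
    kept = []
    half = window // 2
    for r, c in candidates:
        r0, c0 = max(r - half, 0), max(c - half, 0)
        patch_bin = binary[r0:r + half + 1, c0:c + half + 1]
        patch_img = original[r0:r + half + 1, c0:c + half + 1]
        if not patch_bin.any():
            continue
        # Largest connected component in the window as the structure of interest
        props = max(regionprops(label(patch_bin)), key=lambda p: p.area)
        ratio = props.area / max(props.perimeter, 1.0)
        mean_intensity = patch_img[patch_bin > 0].mean()
        if ratio >= min_area_perimeter and mean_intensity >= min_mean_intensity:
            kept.append((r, c))
    return kept
```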
Detection of synapses
Through the discussion in the previous two sections, we obtained the positions of the axonal boutons and dendritic spines in the two-photon image stacks. As mentioned above, in most synapses the presynaptic part is located on an axon and the postsynaptic part is located on a dendrite. It is therefore reasonable to obtain the 2D locations of synapses by integrating the locations of the axonal boutons and dendritic spines; specifically, we calculate the distance between a bouton and a dendritic spine to determine whether the two overlap. Furthermore, we can count the synapses in 3D based on the detections in consecutive 2D images. As shown in Algorithm 8, for each synapse in a 2D image we find its nearest synapse in the next layer; if the two detections are mutually nearest neighbors and the distance between them is small enough, they are treated as the same synapse in 3D.
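The cross-slice matching can be sketched as a mutual-nearest-neighbour linkage between consecutive slices; the representation of detections as (x, y) coordinates per slice, the distance threshold, and the function name below are our own assumptions, not the authors' Algorithm 8 verbatim.

```python
import numpy as np

def link_synapses_3d(synapses_by_slice, max_dist=5.0):
    """Link 2D synapse detections across consecutive slices: two detections
    are joined when they are mutual nearest neighbours and closer than
    max_dist (pixels).  Each returned track is one synapse in 3D."""
    tracks, open_tracks, prev = [], {}, np.empty((0, 2))
    for z, pts in enumerate(synapses_by_slice):
        pts = np.asarray(pts, dtype=float).reshape(-1, 2)
        new_open = {}
        if len(prev) and len(pts):
            d = np.linalg.norm(prev[:, None, :] - pts[None, :, :], axis=-1)
            nn_fwd = d.argmin(axis=1)   # nearest current detection for each previous one
            nn_bwd = d.argmin(axis=0)   # nearest previous detection for each current one
            for i, j in enumerate(nn_fwd):
                if nn_bwd[j] == i and d[i, j] <= max_dist and i in open_tracks:
                    open_tracks[i].append((z, tuple(pts[j])))
                    new_open[j] = open_tracks[i]
        for j, p in enumerate(pts):
            if j not in new_open:       # unmatched detection starts a new 3D synapse
                track = [(z, tuple(p))]
                tracks.append(track)
                new_open[j] = track
        open_tracks, prev = new_open, pts
    return tracks
```

Counting `len(tracks)` then gives the number of putative synapses in the stack.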