brain. Since the fish brains were all scanned to slightly different depths and at slightly different
orientations relative to the capture region of the microscope, the region imaged in common across all X
fish is smaller than the region imaged in any one fish individually.
Currently, using ANTs on my 3D .tif stacks, I get borders at the edges of acquisition and of mutual
coverage. In the four images below, the template result from the script is shown, followed by a (non-
exhaustive) red outlining of the artefactual borders. In other words, the template image
produced by antsMultivariateTemplateConstruction2.sh is, in this case, only smooth and usable in
brain regions that were contained in all input scans.
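For reference, that mutually covered region can be computed directly from the warped inputs. Here is a minimal numpy sketch, assuming each warped scan stores exact zeros outside its acquired volume (the function and array names are my own, for illustration only):

```python
import numpy as np

def mutual_coverage_mask(warped_scans, background=0.0):
    """Return the voxels imaged by *every* input scan.

    warped_scans: list of 3D arrays, all resampled into template space.
    Assumes voxels outside each scan's acquisition equal `background`.
    """
    mask = np.ones(warped_scans[0].shape, dtype=bool)
    for scan in warped_scans:
        mask &= scan != background
    return mask

# Toy example: two "scans" that overlap only in a middle slab.
a = np.zeros((4, 4, 4)); a[0:3] = 1.0
b = np.zeros((4, 4, 4)); b[1:4] = 1.0
print(mutual_coverage_mask([a, b]).sum())  # 32 voxels in the shared slab
```

Everything outside this mask is where the artefactual borders can appear, so it is also a natural weighting map for any blending step.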
At the upper and lower bounds of the acquisitions, the diffeomorphic transform produces artefactual
borders that follow the anatomical contours of the constituent scans. These borders are sharp
because the anatomy of the brain has caused some regions to dip below or above the acquisition
volume: those regions have been warped to match the anatomical contours of the images in which the
data is not missing. Good! But too sharp.
My goal is to stitch my multiple scans into a brain that keeps the mutually imaged regions, which
are already smooth, while being artefact-free and uniformly bright in the more distal regions where
imaging coverage is less complete.
Perhaps tighter histogram matching, combined with a Gaussian blur applied along the input scan
edges after transformation, would resolve this? It seems that
antsMultivariateTemplateConstruction2.sh (and perhaps all of ANTs?) is designed to map scans
that fully encompass a target. Alternatively, I could take the warped form of each input and
successively apply something like Pairwise Stitching in ImageJ.
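One way to try the blur-at-the-edges idea outside the template script: average the warped scans with weights that feather smoothly to zero near each scan's acquisition edge, so no scan contributes a sharp boundary. A sketch using scipy's Euclidean distance transform for the feathering; the function name, `feather_vox` parameter, and zero-background assumption are mine, not anything from ANTs:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def feathered_average(warped_scans, background=0.0, feather_vox=5.0):
    """Blend warped scans, down-weighting voxels near acquisition edges.

    Each scan's weight ramps linearly from 0 at its acquisition boundary
    to 1 at `feather_vox` voxels inside it, then the weighted mean is taken.
    """
    num = np.zeros(warped_scans[0].shape, dtype=float)
    den = np.zeros_like(num)
    for scan in warped_scans:
        mask = scan != background
        # Distance (in voxels) from the scan's edge, capped to a 0..1 ramp.
        w = np.clip(distance_transform_edt(mask) / feather_vox, 0.0, 1.0)
        num += w * scan
        den += w
    out = np.zeros_like(num)
    np.divide(num, den, out=out, where=den > 0)
    return out
```

The histogram-matching half could be handled beforehand, e.g. by matching each warped scan's intensities to one reference scan before blending, so the distal regions end up "equally bright" rather than dimmer where fewer scans contribute.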
Currently, the example database I linked at the beginning includes a nice set of anatomical border
data. I have a list of ROIs in my acquired dataset, and the goal is to merge my whole dataset
into a reference brain, then register the anatomical border data to that template. That way, I will be
able to run some scripts to tell me which ROIs are in which brain regions. My concern is that the red-
outlined regions shown earlier will bias this registration, since they would be interpreted as regions
of high contrast, right? (Plus they don't make for a pretty picture.) You can see the overlays below;
I'd warp the imaging data to my template, then apply that warp to its coloured overlay.
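For that overlay step, the one detail that matters is resampling the label/colour overlay with nearest-neighbour interpolation, so label values are never blended into spurious intermediates (in ANTs terms, a nearest-neighbour interpolator passed to antsApplyTransforms). A toy sketch of the same idea with scipy's `map_coordinates` at `order=0`; the displacement field here is invented purely for illustration:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_labels(labels, displacement):
    """Apply a dense displacement field to an integer label volume.

    labels: 3D integer array; displacement: (3, *labels.shape) voxel offsets.
    order=0 => nearest-neighbour, so no spurious in-between label values.
    """
    grid = np.indices(labels.shape).astype(float)
    coords = grid + displacement  # where each output voxel samples from
    return map_coordinates(labels, coords, order=0, mode='nearest')

# Toy check: a uniform +1-voxel sampling offset along the first axis
# shifts the labelled slice from z=2 to z=1 in the output.
labels = np.zeros((4, 4, 4), dtype=np.int32)
labels[2] = 7
disp = np.zeros((3, 4, 4, 4))
disp[0] = 1.0
out = warp_labels(labels, disp)
```

The same transform list computed from the grayscale registration can then be reused on the overlay, just with the interpolator switched.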
As seen below, there also appears to be some imperfect diffeomorphic transformation happening.
Outlined in green is the boundary of a bright brain region, and in blue is the ghost of the
corresponding region from one input scan, which appears not to have been correctly aligned. I'm not
concerned by this, since I don't expect perfection, but I assume it's something that could be resolved
by modifying my script call, correct?
Furthermore, there appears to be some variability in the output of the script. In the examples above,
I used the call
Here is another example: sharp, diagonal lines that do not correspond to acquisition boundaries,
with a lighter and a darker region equidistant from a central line. At the bottom of the image,
the curved red lines highlight an anatomical feature that was not correctly registered and warped,
only affine-transformed.
Any suggestions as to the source of the differences between the two outputs from almost identical
script calls? Any ideas about the sharp borders in the first example?
Many, many thanks for reading through such a big post and your thorough assistance.
-Harry