The goal is to take multiple time-averaged z-stacks of fish brains and merge them into one model brain. Since the fish brains were all scanned to slightly different depths and at slightly different orientations relative to the capture region of the microscope, the area mutually imaged by all X fish is smaller than the area imaged by any one fish individually.

An example of the goal can be seen at http://engertlab.fas.harvard.edu:4001/#/home. This is a typical zebrafish brain, compiled from averages of multiple fish.

Currently, using ANTs on my 3D .tif stacks, I get borders at the edges of acquisition and mutual coverage. In the four images below, the template result from the script is shown, followed by a (non-exhaustive) red outlining of the artefactual borders. In other words, the template image produced by antsMultivariateTemplateConstruction2.sh is, in this case, only smooth and applicable in brain regions that were mutually contained in all input scans.
At the upper and lower bounds of the acquisitions, the diffeomorphic transform produces artefactual borders that follow the anatomical contours of the constituent scans. These borders are sharp because the anatomy of the brain has caused some regions to dip below or above the acquisition volume; they have been warped to match the anatomical contours of the images in which the data is not missing. Good! But too sharp.
My goal is to take my multiple scans and stitch together a brain that is most confidently imaged in the mutual regions, which are currently smooth, but that remains artefact-free and equally bright in the more distal regions where imaging is less confident.

Perhaps a tighter level of histogram matching, plus some application of Gaussian blur along the input-scan edges after transformation, would be needed to resolve this? It seems that antsMultivariateTemplateConstruction2.sh (and perhaps all of ANTs?) is designed to map scans which fully encompass a target. Alternatively, I could take the warped form of each input and successively apply something like pairwise stitching in ImageJ.
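
For the edge-blurring idea, here is the sort of feathered blending of the warped inputs I had in mind, using only ANTs command-line tools (a rough sketch: the warped_*.nii.gz filenames are placeholders, I'm assuming background voxels are exactly zero after warping, and the Gaussian sigma of 2 is a guess to be tuned):

# Build a feathered coverage weight for each warped scan, then blend.
for i in 1 2 3; do
  # Binary coverage mask: any voxel with signal in this warped scan
  # (assumes background is exactly zero after warping).
  ThresholdImage 3 warped_${i}.nii.gz mask_${i}.nii.gz 0.0001 65535
  # Feather the mask edge with a Gaussian blur.
  ImageMath 3 weight_${i}.nii.gz G mask_${i}.nii.gz 2
  # Weight each scan's intensities by its feathered coverage.
  ImageMath 3 contrib_${i}.nii.gz m warped_${i}.nii.gz weight_${i}.nii.gz
done
# Accumulate weighted intensities and weights, then normalise.
ImageMath 3 num.nii.gz + contrib_1.nii.gz contrib_2.nii.gz
ImageMath 3 num.nii.gz + num.nii.gz contrib_3.nii.gz
ImageMath 3 den.nii.gz + weight_1.nii.gz weight_2.nii.gz
ImageMath 3 den.nii.gz + den.nii.gz weight_3.nii.gz
# Voxels covered by no scan have zero weight and would divide by zero,
# so the result should be masked to the union of coverage afterwards.
ImageMath 3 blended.nii.gz / num.nii.gz den.nii.gz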

Currently, in the example database I linked at the beginning, there is a nice set of anatomical border data. I have a list of ROIs in my acquired dataset, and the goal is to merge my whole dataset into a reference brain, then register the anatomical border data to said template. That way, I will be able to run some scripts to tell me which ROIs are in which brain regions. My concern is that the red-outlined regions shown earlier will bias this registration, since they would be interpreted as regions of high contrast, right? (Plus they don't make for a pretty picture.) You can see the overlays below; I'd warp the imaging data to my template, then apply said warp to its coloured overlay.
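
Applying said warp to the overlay would presumably be a standard antsApplyTransforms call along these lines (a sketch only; the exact transform filenames depend on the output prefix and input names, so the ones below are placeholders):

antsApplyTransforms -d 3 \
  -i fish1_overlay.nii.gz \
  -r 6DPFMaptemplate0.nii.gz \
  -t 6DPFMapfish1_1Warp.nii.gz \
  -t 6DPFMapfish1_0GenericAffine.mat \
  -n NearestNeighbor \
  -o fish1_overlay_in_template.nii.gz

NearestNeighbor interpolation keeps discrete label values intact, and the transforms are listed warp-first because antsApplyTransforms applies its transform stack in reverse order.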

As seen below, there also appears to be some imperfect diffeomorphic transformation happening. Outlined in green is the boundary of a bright brain region, and in blue is the ghost of the corresponding region from one input scan, which appears not to have been correctly aligned. I'm not concerned by this, since I don't expect perfection, but I assume it's something that could be resolved by modifying my script call, correct?
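
For instance, if I understand the usage right, -q, -f, and -s set the per-level max iterations, shrink factors, and smoothing sigmas, so I could try making the multiresolution schedule explicit, something like (illustrative values only, close to the script's defaults):

antsMultivariateTemplateConstruction2.sh -d 3 -o 6DPFMap -c 2 -g 0.2 -j 12 \
  -r 1 -n 1 -m CC -t SyN \
  -q 100x100x70x20 -f 6x4x2x1 -s 3x2x1x0 8DPF*.tif

I also notice that, read this way, the 20x10x5x2 I passed to -s would have been interpreted as very heavy smoothing sigmas rather than as an iteration schedule, which may or may not be related to my problems.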

Furthermore, there appears to be some variability in the output of the script. In the examples above,
I used the call

antsMultivariateTemplateConstruction2.sh -d 3 -o 6DPFMap -c 2 -g 0.2 -j 12 \
  -r 1 -n 1 -m CC -s 20x10x5x2 -t SyN 8DPF*.tif
which involved 7 contributing images. There were numerous TIFF-image-reader-type errors, whose specifics I forget, but which involved irrelevant or unreadable header metadata (I assume relics of the acquisition and preprocessing in ImageJ). I don't think this would have affected the processing, but I mention it just in case. There was also an error about a kernel reaching its maximum size.

Using a nearly identical call, however,

antsMultivariateTemplateConstruction2.sh -d 3 -o 6DPFMap -c 2 -g 0.2 -j 12 \
  -r 1 -n 1 -m CC -s 20x10x5x2 -t SyN 6DPF*.tif
in which only the input files differed, the result was an image with *only* the rigid and affine transforms completed and no diffeomorphic warps. It also took 35 hours to complete, as opposed to 21 hours for the good results you see above. It's worth noting that in this error case the computing cluster was also being used simultaneously for other work, so it may have been a module crashing, RAM filling up, or some such.

An example of the type of result is shown below.


These sharp borders cannot represent the edges of the acquisition; they're at angles and in places in the brain that would not have been cut off by the imaging volume. In other words, these artefacts are being introduced purely by the script.

Here is another example: sharp, diagonal lines which do not correspond to acquisition boundaries, and which have a lighter and a darker region equidistant from a central line. At the bottom of the image, the curved red lines highlight an anatomical feature which was not correctly registered and warped, only affine-transformed.

Any suggestions for the source of differences between the two outputs from almost identical script
calls? Any ideas about the sharp borders in the first example?

Many, many thanks for reading through such a big post, and for your thorough assistance.

-Harry
