
Basic Introduction to Image Processing

This presentation was put together as a joint effort by Urs Ziegler, Anne Greet Bittermann and Mathias Hoechli. Many pages are copied from internet web pages or from presentations given by Leica, Zeiss and other companies. Please browse the internet to learn more about optics interactively. For questions & registration please contact www.zmb.unizh.ch.

Presentation of multidimensional data


3D data has to be presented in a 2D fashion for publication on paper. The data set might be represented e.g. as an image gallery, top + side view, or projection. A virtual light source and/or shadows on a virtual projection plane help to convey spatial relations.

Interactive models, movies and animations can be published on web pages or embedded in PowerPoint presentations.

Image processing and analysis


After recording on the microscope the digital images are loaded into image processing software for further processing. The data include information about pseudo color, pixel dimensions, time scale etc. First the image data are adjusted by background subtraction, contrast enhancement, etc. Colors might be assigned, subvolumes selected, and z-mismatches corrected by pixel shifts. The software packages offer different options to look at the multidimensional data sets, e.g. slice viewer, gallery view, section view, projections, full 3D volume representations, surface models, time bar, color-coded overlays of several channels, transparencies, ... They also offer analytical tools for measurement and quantification: automated counting of features, measurements of areas and volumes, tracing of filaments, measuring of distances, evaluation of colocalization, ...

Automated Multidimensional Data Processing


Dimensions:
xy = 2D, xyz = 3D, xyzt = 4D, xyztλ = 5D

Micrograph processing software:

Imaris (Bitplane)*
Volocity (Improvision)
NIH Image**
BioImageXD**
* campus licence at the University of Zurich
** scientific freeware on the internet

Digital images:
2-dimensional distribution of image points (pixels) in x and y

Digital resolution
Detectors record a limited number of image points (pixel number) within an xy grid. Each image point has its own grey level (dynamic range). Increasing the number of image points as well as the number of grey levels leads to bigger image files and longer calculation times. 256 grey levels are coded by 8 bit. 256 grey levels are presented by a computer monitor. Today, detectors are pushed to discriminate 1024, 4096 or more grey levels. The human eye can discriminate about 60 grey levels (6 bit).
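As a rough illustration (not part of the original slides), the sketch below shows how bit depth relates to the number of grey levels and to file size, and how a hypothetical 12-bit detector image can be rescaled to the 256 grey levels a monitor displays; array sizes and values are made up.

import numpy as np

bits = 12
grey_levels = 2 ** bits                       # 4096 grey levels for a 12-bit detector

# hypothetical 512 x 512 detector image with 12-bit values (stored in 16 bit)
img12 = np.random.randint(0, grey_levels, size=(512, 512), dtype=np.uint16)
print(img12.nbytes)                           # 524288 bytes - 16-bit storage doubles the file size

# rescale to 8 bit (256 grey levels) for display on a standard monitor
img8 = (img12.astype(np.float64) / (grey_levels - 1) * 255).astype(np.uint8)
print(img8.nbytes)                            # 262144 bytes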

3D Data set
x, y, z stack of optical sections, each spaced by Δz

The information within the optical sections along the z-axis can be used to reconstruct a 3-dimensional image.

4D Data set
3-D stacks recorded along the time course

x, y, z stacks (sections spaced by Δz) recorded at time points t1 and t2

5D Data set
Wavelengths add another dimension to the fluorescence data. Time lapse of multi-channel 3D stacks generates a 5D data set. Wavelength information is displayed as pseudo-colors.

Multi-channel x, y, z stacks (sections spaced by Δz) recorded at time points t1, t2, t3, ...

Voxels
A voxel (= volume element) is the 3D-equivalent of the 2D-pixel. It is the smallest unit of a sampled volume.

With a maximal lateral (x, y) resolution of 0.2 µm and an axial (z) resolution of 0.4 µm, a voxel has an elongated shape (point spread function).

Neighbours
The neighboring voxels are of great importance for calculation and visualization (see the sketch below).

2D -> each Pixel has 4 neighbor pixels

3D -> each Voxel has 6 neighbor voxels
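A minimal sketch (illustrative only) of how these neighborhoods could be expressed in Python; the function names are made up for this example.

def pixel_neighbours(x, y):
    # 2D: the 4 edge-connected neighbour pixels of (x, y)
    return [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]

def voxel_neighbours(x, y, z):
    # 3D: the 6 face-connected neighbour voxels of (x, y, z)
    return [(x - 1, y, z), (x + 1, y, z),
            (x, y - 1, z), (x, y + 1, z),
            (x, y, z - 1), (x, y, z + 1)]

print(len(pixel_neighbours(5, 5)))     # 4
print(len(voxel_neighbours(5, 5, 5)))  # 6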

Presentation and effort:


* Simple presentations (fast, allow 2D publishing): gallery view, section view, projections
* Intense calculations (time consuming, for analysis): full 3D volume representation, surface rendering, shadowing, stereo view
* Animations (time consuming, analysis & presentation): rotating 3D models, time sequences of 3D volumes

Image Gallery
Galleries of images are the simplest data presentation, for xyz, xyt, xyλ, ...

Projecting optical sections to one plane


Optical section through a cube containing fibers

Projecting the structures of all sections to the ground level (Extended Focus)

Projection types
Projections range from simple to very complex mathematical procedures.

Average Projection:
The grey values of all voxels with identical xy coordinates along the z-stack are summed up and divided by the number of optical sections.

Maximal Intensity Projection (MIP):


Only the voxel with the highest grey value along the z-stack is projected.

Background signal gets projected too and might cause noise/blur. Suppress the background first!!

Schematic: grey values at positions (x1, y1), (x2, y2), (x3, y3) in the sections Z1 ... Z5 are projected onto one plane.
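As a sketch of the two projection types, assuming numpy and a z-stack stored as a (z, y, x) array; the data and the threshold are made up:

import numpy as np

stack = np.random.rand(20, 256, 256)          # z-stack of optical sections, (z, y, x)

# Average projection: grey values with identical xy coordinates are summed
# along z and divided by the number of optical sections
avg_projection = stack.mean(axis=0)

# Maximum intensity projection (MIP): only the brightest voxel of each
# xy column is projected
mip = stack.max(axis=0)

# Suppressing the background first (here with a simple, illustrative threshold)
# keeps background voxels out of the MIP
background = 0.2
mip_clean = np.where(stack > background, stack, 0).max(axis=0)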

Maximum intensity projection -> sharp image

Average projection: averaging may lead to enlarged structures and background

Gallery presentation of a neuron


Maximal intensity projection of the optical sections of the neuron

Maximum intensity projection with one-sided illumination and shadow (easy3D)

Stack of images -> gallery of images


Section through the stack (x, y, z) -> image of the section (x, z)

Sectioning through a stack of images


- perpendicular

Section through the stack along the y-axis

Computer representation of section levels in X-Y, X-Z and Y-Z

Intense calculation for 3-D representations


1. Volume rendering: ray tracing

2. Surface rendering: segmentation of z-stacks, depth encoding of voxels, shadowing

3. Animations: time course, rotations, zooms etc.

Volume rendering

Even if fog (background) limits the visibility, we get an idea of the structure of the trees.

Volume rendering
Schematic: a virtual ray is cast from the screen through the volume.

Ray Tracing
A virtual ray passing through the volume accumulates the grey levels of the voxels it crosses; the summed value is normalized and presented on the screen.

Volume rendering with adjustments of the grey values

Adjustment of the grey level according to the distance between voxel and screen.

Adjustment of the grey value according to the grey value of the voxel just passed.

(Schematic: voxels hit by the virtual ray on their way to the screen.)
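The sketch below illustrates these two adjustments for a viewing direction along the z-axis, so that every xy column of the volume is one virtual ray. The weighting factors are illustrative assumptions, not the exact values used by any particular rendering package.

import numpy as np

volume = np.random.rand(64, 256, 256)                        # (z, y, x), z = 0 closest to the screen

depth_weight = np.exp(-0.02 * np.arange(volume.shape[0]))    # grey level fades with distance
screen = np.zeros(volume.shape[1:])
transmission = np.ones(volume.shape[1:])                     # how much of the ray is still unattenuated

for z in range(volume.shape[0]):
    contribution = volume[z] * depth_weight[z]               # adjustment by distance to the screen
    screen += transmission * contribution                    # accumulate along the ray
    transmission *= 1.0 - 0.05 * volume[z]                   # the voxel just passed attenuates the ray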

Volume rendering - example

3D representation of a multi-fluorescent cell monolayer (4 channels)

Surface rendering

Creating objects with solid surfaces.

Surface rendering: Iso-Surface modelling


1st step: Segmentation of the z-stacks. Identification of the voxels belonging to an object. The criterion for the identification is the grey value of the voxel. All voxels whose grey value is higher (brighter) than the chosen threshold belong to the object; the others belong to the background and are discarded. The threshold value is chosen by the scientist. (Neighborhood rule: if a voxel belongs to the object but one of its 6 neighbor voxels does not, it is defined as a surface voxel.)
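A minimal sketch of this first step, assuming numpy and a (z, y, x) grey-value stack; the threshold is an arbitrary example value:

import numpy as np

stack = np.random.rand(32, 128, 128)        # z-stack of grey values, (z, y, x)
threshold = 0.7                             # chosen by the scientist

obj = stack > threshold                     # True = object voxel, False = background

# Neighborhood rule: a voxel is a surface voxel if it belongs to the object
# but at least one of its 6 face neighbours does not. Padding with background
# lets voxels at the stack border be surface voxels too.
padded = np.pad(obj, 1, mode="constant", constant_values=False)
all_neighbours_object = (
    padded[:-2, 1:-1, 1:-1] & padded[2:, 1:-1, 1:-1] &
    padded[1:-1, :-2, 1:-1] & padded[1:-1, 2:, 1:-1] &
    padded[1:-1, 1:-1, :-2] & padded[1:-1, 1:-1, 2:]
)
surface = obj & ~all_neighbours_object      # surface voxels of the object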

Surface rendering: Iso-Surface modelling


2nd step: Depth encoding of the voxels. The previously identified surface voxels all have the same grey value and would appear as an unstructured, evenly grey image on the monitor. Therefore, in a second step, the grey values of the voxels are adjusted according to the distance of the surface voxels from the screen.
Schematic: all voxels have the same grey value -> depth-dependent adjustment of the grey values according to distance (depth) along z.
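As a toy sketch of depth encoding (the surface mask and the brightness fall-off are made up for illustration):

import numpy as np

surface = np.zeros((32, 128, 128), dtype=bool)                 # surface voxels from the segmentation step
surface[10, 40:80, 40:80] = True                               # toy object

depth = np.arange(surface.shape[0])[:, None, None]             # z index used as distance to the screen
grey = np.where(surface, 255 - 6 * depth, 0).astype(np.uint8)  # closer surface voxels appear brighter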

Surface rendering: Iso-Surface modelling


3rd step: Shadowing. The topology can be accentuated using a one-sided shadowing effect. To do so, neighboring surface voxels are connected to form a polygon. The grey values of the surface voxels are adjusted depending on the angle between the viewing direction and the normal of the polygon surface.

Viewing direction and incident light

Surface voxels define polygons. The normal to the polygon and the viewing direction enclose the shading angle.
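A sketch of this shading step for a single polygon, assuming a fixed viewing direction along z; grey values and vectors are illustrative:

import numpy as np

view_dir = np.array([0.0, 0.0, 1.0])               # viewing direction towards the screen

def shade(normal, base_grey=200.0):
    # scale the grey value with the cosine of the angle between the polygon
    # normal and the viewing direction (one-sided, Lambert-like shading)
    normal = normal / np.linalg.norm(normal)
    cos_angle = max(float(np.dot(normal, view_dir)), 0.0)
    return base_grey * cos_angle

print(shade(np.array([0.0, 0.0, 1.0])))   # polygon facing the viewer -> full grey value
print(shade(np.array([1.0, 0.0, 1.0])))   # tilted polygon -> darker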

Representation of several surfaces

Thresholds 110 (red) and 60 (white) + transparency

Surface modeling: setting the threshold

Threshold 68

Threshold 138

Surface models of the same dendrites using different threshold values

Which model shows the real surface?

Adequate Filament Imaging

Stereo-Representation
The impression of depth can be simulated by calculating two separate, slightly tilted 3D models of the same scene, as if they were viewed by the left eye and the right eye. The final stereo pair can be observed using different techniques.

The 3D impression can be achieved by squinting the eyes, using special stereo viewers, or crossing the eyes.

Stereo-Representation II

The two pictures of the stereo pair are colored in red & green and superimposed. The 3D impression can be achieved using bicolor goggles.
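A minimal sketch of how such a red/green anaglyph could be assembled from two grey-value renderings (the images here are random placeholders):

import numpy as np

left = np.random.rand(256, 256)      # rendering of the 3D model for the left eye
right = np.random.rand(256, 256)     # slightly tilted rendering for the right eye

anaglyph = np.zeros(left.shape + (3,))
anaglyph[..., 0] = left              # left view into the red channel
anaglyph[..., 1] = right             # right view into the green channel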

Looking inside
Surface view

Surface view combined with the visualization of internal structures

Gallery view of 20 optical sections

Section view of 20 sections: x-y, y-z, x-z

Sample stained with acridine orange - 20 optical sections; 3D representation in x-y, x-z, y-z

Looking inside

Transparency & slicer tool

Looking inside

... by using transparency

Animations - fly-through
Volume and surface rendering allow you to turn and zoom the data set. Extreme Zoom allows you to virtually enter the sample.

Measurements
e.g.:
- Automated data segmentation
- Particle counting
- Size recognition
- Distance measurements
- Filament tracking
- Movement tracing
-> Results are visualized in the 3D model
-> Results are listed as numbers in Excel sheets
(see the sketch below)
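A sketch of automated segmentation and particle counting, assuming numpy and scipy are available; the data, threshold and voxel size are illustrative:

import numpy as np
from scipy import ndimage

stack = np.random.rand(32, 128, 128)             # grey-value z-stack, (z, y, x)
particles = stack > 0.95                         # segmentation by threshold

labels, n_particles = ndimage.label(particles)   # connected regions = particles
print(n_particles)                               # particle count

# number of voxels per particle; multiplied by the voxel volume
# (e.g. 0.2 x 0.2 x 0.4 um^3) this gives physical volumes for the results table
sizes = np.bincount(labels.ravel())[1:]
voxel_volume = 0.2 * 0.2 * 0.4
volumes_um3 = sizes * voxel_volume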

Colocalisation
The relation of the intensity values from two channels is presented in a two-dimensional histogram. In case of colocalization, the intensity clouds of both channels overlap. Colocalization is not an absolute fact but always relates to voxel size and resolution.
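A sketch of such a two-channel intensity histogram (plus a Pearson coefficient, which is often reported alongside), using made-up, partly colocalized data:

import numpy as np

red = np.random.rand(32, 128, 128)
green = 0.7 * red + 0.3 * np.random.rand(32, 128, 128)       # partly colocalized channel

hist2d, red_edges, green_edges = np.histogram2d(
    red.ravel(), green.ravel(), bins=64)                     # 2D intensity histogram

pearson = np.corrcoef(red.ravel(), green.ravel())[0, 1]      # overlap of the intensity clouds
print(pearson)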

Animations
Animations are series of single images put together into a movie. The images might be a volume view, a projection, a slice, or a time point. The animation is done by simply playing the sequential data set, by rotating 3D models or volume representations, by zoom-in & fly-through motions, by changing surfaces and transparencies, etc.

Today's computers allow animated sequences to be calculated and displayed reasonably fast. Movie files can be published e.g. in PowerPoint or on the web. Interactive file formats are also possible.

Animation in time
t1 t2 t3 t4
Changes of a 3D volume with time might be presented as a gallery of projection views or as a movie. Animation and stereo view facilitate the recognition of spatial relations in this context.

Analysis & Animation


Particle recognition and tracking over time

Deconvolution
What is to be gained?
- Increase in resolution in x, y, z
- Noise is reduced
- The image formation process is optimized (astigmatism, point spread function, ...)
Widefield fluorescence data can be improved a lot by deconvolution. Confocal data show less z-distortion and less out-of-focus blur, so deconvolution has only a very small effect.

Convolution - Theory

Fluorescent bead with a diameter of 0.1 µm

Deconvolution procedure
measured

Measure an object of known size that is smaller than the resolution of the microscope (e.g. 100 nm fluorescent beads). Compare the microscope image with the ideal/theoretical representation of the object. Determine the difference between the measured and the real object. Correct unknown objects with the determined difference.

real
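As an illustration of the idea, the sketch below runs a simple Richardson-Lucy-type iteration in Python; the Gaussian PSF, the test image and the iteration count are assumptions for this example, not the algorithm of any particular deconvolution package:

import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(size=9, sigma=2.0):
    # stand-in for a PSF measured on sub-resolution fluorescent beads
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return psf / psf.sum()

def richardson_lucy(measured, psf, iterations=20):
    # iteratively redistribute the measured intensities so that, blurred with
    # the PSF, they reproduce the measured image (the "determined difference")
    estimate = np.full(measured.shape, measured.mean())
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iterations):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = measured / (blurred + 1e-12)
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

measured = np.random.rand(128, 128)            # stand-in for a blurred widefield image
restored = richardson_lucy(measured, gaussian_psf())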

Deconvolution effect

3D, 4D, 5D- data reconstruction is time consuming!!!


=> Only correctly recorded images are worth the time spent on a 3D presentation!!!
=> Keep your data small:
- Reduce image resolution (512 x 512 pixels = 262 kB).
- Crop images so that they contain only the most important structural details.
- Work with as few channels as possible.
- Stay with 8 bit.
=> Keep the coffee pot hot in order to wait patiently until the calculations are finished.
=> Use classical image processing tools to improve the quality of the images.

And: Don't expect too much of a 3D presentation.
