Level:
• Beginner
Time:
• This unit should not take you more than 1 hour
Resources:
• A licence of Definiens Developer (Version 7.0 was used to develop
these units).
• An image you can load into Definiens for this exercise. For this unit the Landsat 7 image orthol7_20423xs100999.img (Path: 204 Row: 23, Date: 10/09/1999) over North Wales, available from the Landmap service, is recommended.
1.1. Background
This section will provide you with a brief outline of the purpose of the Definiens software and an overview of the collection of software tools Definiens have to offer. For more detail on Definiens products please visit the Definiens website http://www.definiens.com or contact them directly.
1.1.1 Purpose
The software can be divided into three categories: End-user, Developer and Server-side, where the use of each depends on your role within the image processing chain.
Definiens provide customer (and user to user) support through their online
forums (http://forum.definiens.com/index.php) where you can post your
problems or read previous answers. Additionally, there is a section where you
can download sample rulesets. Definiens also make a number of documents
available online including presentations, case studies, white papers and
scientific papers (http://www.definiens.com/resource-center_61_24_0.html).
Additionally, your installation of Definiens Developer contains sample data and a user guide alongside a technical reference guide which contains a wealth of information.
Once started, you will be presented with an interface similar to the one shown
in Figure 1.2.
Data Viewer: The image and classification data viewer. The viewer allows
you to view the imagery you are classifying, including manipulating the band
order and image stretching.
Process Tree: The window within which you develop your ruleset script.
Image Object Information: This window displays selected feature values for
a selected object.
Feature View: This window displays a list of all the available features within Definiens Developer and allows the current image objects to be coloured (green for high values, blue for low values) according to their value for a selected feature.
Table 1.1 provides a glossary of the icons available on the various toolbars within Definiens Developer.
Icon Description
File Toolbar
Create New Project
Open Existing Project
Save Project
New Workspace
Open Workspace
Save Workspace
Predefined Import
View Settings Toolbar
Workspace view
Analysis View
Results View
Developer View
View Image data
View Classification
View Samples (for Nearest Neighbour Classification)
Feature View
Toggle Object Means and Pixel Data
Toggle Object Outlines
Toggle Polygons
Toggle Skeletons
Toggle Image View and Project Pixel View.
Single Layer (Grey Scale)
Mix Three Layers RGB
Show Previous Layer
Show Next Layer
Select Layers to be Displayed and Image Stretch
View Navigation Toolbar
Delete Level
Select Level For Display
Down a Level
Up a Level
Tools Toolbar
Object Information
Object Table
Undo process
Redo process
Class hierarchy
Process tree
Feature View
Managed customised features
Toggle manual editing toolbar
Zoom Toolbar
To create your new project either select File > New Project or select the new
project icon ( ). You will be presented with the following dialog box (Figure
1.5a) through which you will enter your datasets and parameters to create
your project. In this example you will load the image orthol7_20423xs100999.img (Figure 1.5b) by selecting 'Insert' next to the Image Layer List. Once you have loaded your data you need to define the layer aliases, as shown in Figure 1.5b, where bands 1-6 correspond with the aliases BLUE, GREEN, RED, NIR, SWIR1 and SWIR2, respectively. To bring up the layer properties dialog, double click on each image band in turn, or select the band and click the Edit button.
Now click OK to create the project and you will move back to the Definiens
Developer interface, Figure 1.7.
Figure 1.7. The Definiens Developer interface once the project has been loaded.
1.5. Using the Display Options within Definiens Developer
Once the project has been loaded you can pan and zoom around the data, in
the display region, using the zoom toolbar, shown below in Figure 1.8.
If the zoom functions toolbar is not displayed you can turn it on using the
View>Toolbars menu.
To select the layer(s) to be displayed you need to use the ‘Edit Image Layer
Mixing’ dialog, Figure 1.9, available via the following icon .
Using the 'Layer Mixing' drop down menu you can select the number of layers to be mixed in the display and then, by selecting the individual layers, you may turn them on and off (or increase their weight), Figure 1.20.
a) One layer grey scale b) Three layer RGB model c) Six layer mixing
Figure 1.20. Selecting the layers for display.
Also, you can adjust the equalisation (or stretch) of the data layers being displayed using the 'Equalizing' drop down menu. The available options are 'Linear (1.00%)', 'Standard Deviation (3.00)', 'Gamma Correction (0.50)', 'Histogram' and 'Manual'.
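To make the stretch options concrete, here is a minimal Python sketch of a 'Linear (1.00%)'-style stretch, assuming the convention of clipping at the 1st and 99th percentiles and rescaling to a 0-255 display range (Definiens' exact implementation may differ):

import numpy as np

def linear_stretch(band, clip_percent=1.0):
    # Clip at the lower/upper percentiles, then rescale linearly to 0-255.
    low, high = np.percentile(band, [clip_percent, 100.0 - clip_percent])
    return np.clip((band - low) / (high - low) * 255.0, 0.0, 255.0)

# Example: stretch a synthetic single-band image.
band = np.random.default_rng(0).normal(100.0, 30.0, size=(64, 64))
stretched = linear_stretch(band)
print(stretched.min(), stretched.max())  # 0.0 255.0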
Definiens Developer also allows you to split your display, allowing you to have multiple views of the same data. This functionality is available from the Window menu (Figure 1.21).
Here the current display can be split horizontally and/or vertically and, once split, the views can be 'linked' so that they automatically move together. Once you have split your screen, select the window you wish to change; the same tools as outlined above can then be used to manipulate the display properties of each of the different views.
1.6. Conclusion
In summary, you should now be able to open Definiens Developer, create a
project and manipulate the display to view the data as you wish. The following units
will take you through the segmentation and classification of the
imagery you have loaded into your project and some more advanced features
of the Definiens software.
1.7. Exercises
1) Experiment with the layer properties, such that you can view each image
band individually and then a number of 3 and 6 band mixings. Observe how
the different land cover types visually change as you change the band
mixings.
2) Using the layer combination of your choice (R: NIR, G: SWIR1, B: RED is recommended), experiment with the image equalisations available. Again, observe how the appearance of the various land cover types responds to these changes.
3) Produce a four way split of the display (i.e., a vertical and horizontal split)
and set each region to different viewing properties. Finally, link all four
together (side by side).
Unit 2: Image Segmentation
Level:
• Beginner
Time:
• This unit should not take you more than 1.5 hours
Resources:
• A licence of Definiens Developer (Version 7.0 was used to develop
these units).
• An image you can load into Definiens for this exercise. For this unit the multispectral Landsat 7 image orthol7_20423xs100999.img and its corresponding panchromatic scene o20423_pan.tif (Path: 204 Row: 23, Date: 10/09/1999) over North Wales, available from the Landmap Service, is recommended.
Processes:
• RulesetTemplate.dcp
• Chessboard_Segmentation.dcp
• Quadtree_Segmentation.dcp
• Multiresolution_Segmentation.dcp
• SpectralDifference_Segmentation.dcp
• ContrastSplit_Segmentation.dcp
• ContrastFilter_Segmentation.dcp
2.1. Introduction
Segmentation is always the first step of any process within Definiens
Developer as it generates the image objects on which the classification
process will be performed. The important point is for the segmentation process to identify objects which are representative of the features you wish to classify and which are distinct in terms of the features available within Definiens (e.g., spectral values, shape, texture).
Please note the order in which the image files have been loaded, i.e., the panchromatic band first, as this will determine the image resolution for the project. In this case the 25 m multispectral Landsat 7 data will be resampled to the 15 m of the panchromatic data.
Once you have matched your project window to that shown in Figure 2.1, select OK to create your project.
Figure 2.3. The Definiens Developer interface with the project and display parameters defined
The process tree ( ) will contain the script that you produce to control the
processes (algorithms) which run and the order in which they are executed. It
is important to keep the script that you produce during your segmentation and classification procedures as organised as possible, as this will allow you to understand what you have done when you come back to it. With this in mind, Figure 2.4 contains the template to which you should aim to adhere.
Figure 2.4. Template Process Tree.
To insert a process right-click within the process tree window and the
following menu will appear, Figure 2.5. Select ‘Append New’ and the Edit
Process dialog will appear, Figure 2.6.
Name:
The name of the process. This can either be manually entered or automatically provided by the software. A good convention is to only manually edit the name where nothing else within the process has been changed; otherwise use the automatically generated name. Finally, the note icon ( ) allows a comment to be written about the process.
Algorithm:
The algorithm to execute. This drop down menu allows you to select the algorithm you wish to execute. There is an extensive list of algorithms, a number of which will be used during these units.
Algorithm Description:
A simple description of the algorithm you are using.
Algorithm Parameters:
These are the parameters which are associated with the algorithm that has been selected.
To recreate the template shown in Figure 2.4, edit the name of the process to be 'Process Template', select the comments button and enter 'This is a template ruleset which most process trees will adhere to.'; the rest of the process should be left unchanged. Select OK; you have now created your first process, which will simply execute any process created beneath it.
To create the next process, 'Segmentation', right-click on the process you have just created and select 'Insert Child'; this will create a new process under your previous process. Edit the name of this new process to be 'Segmentation' and select OK. Select the 'Segmentation' process you have just created, right-click and select 'Append New', and edit the name of this process to be 'Classification'. Now repeat this to add the processes 'Merge' and 'Export'.
To move processes you can drag and drop them while holding down the left
mouse button. To place a process under another process drag and drop
holding down the right mouse button.
Finally, you can save and load your process independently of your project (although your process is also saved within the project); this is done by right-clicking within the process tree window and selecting 'Save Rule Set…'. Alongside the contents of your process tree, this will also save any classes or customised features you have created which are associated with your process.
Shape - Colour: A weighting between the object's shape and its spectral colour. If 0, only the colour is considered; if > 0, the object's shape is considered along with the colour and less fractal boundaries are produced. The higher the value, the more the shape is considered.
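As a sketch of how this weighting might enter the segmentation criterion (the exact fusion criterion is documented in the Definiens technical reference guide; the function below only illustrates the trade-off described above):

def fusion_heterogeneity(h_colour, h_shape, shape_weight):
    # shape_weight = 0 -> only colour is considered;
    # larger values give the object's shape more influence.
    return (1.0 - shape_weight) * h_colour + shape_weight * h_shape

# With a shape weight of 0 the criterion reduces to colour heterogeneity alone.
print(fusion_heterogeneity(h_colour=0.4, h_shape=0.9, shape_weight=0.0))  # 0.4
print(fusion_heterogeneity(h_colour=0.4, h_shape=0.9, shape_weight=0.3))  # 0.55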
To run the segmentation process, leave the parameters at their default values and click Execute, though it is recommended that you first give your level a suitable name. A common level naming convention is to number them, starting with Level 1. Once you are happy with the parameters and have executed the process you should have successfully completed your first segmentation.
Once the segmentation has been executed, select the ‘Show or Hide Outlines’
icon ( ) and the outlines of the objects (segments) created will be displayed
over the image. Making sure you have the cursor in 'cursor mode' rather than 'zoom mode', select the objects (with the outlines turned either on or off) in turn. Using the 'Image Object Information' window ( ), you will see the values of the features associated with the selected object (e.g., band values) displayed.
To remove your segmentation and try new parameters, you need to delete the level before re-executing your segmentation process; this is done using the 'Delete Level' icon ( ).
The aim of this algorithm is to split bright and dark objects using a threshold that maximises the contrast between the resulting bright objects (consisting of pixel values above the threshold) and dark objects (consisting of pixel values below the threshold). The algorithm aims to optimise this separation by considering different pixel values, within the range provided by the user parameters, with candidate values selected based on the step size and stepping parameters provided. Table 2.2 provides a list of the parameters for the algorithm.
Parameter Description
Chessboard Tile Size: If no level is already present, a chessboard segmentation is undertaken to generate a set of large objects which are iterated through during the segmentation process.
Step Size: The size of the steps the algorithm will use to move from the minimum threshold to the maximum threshold. Large values make the algorithm quicker but smaller values tend to produce better results.
Image Layer: The image layer on which the algorithm will be applied.
Class for Bright Objects: The class the brighter objects (above the threshold) will be given.
Class for Dark Objects: The class the darker objects (below the threshold) will be given.
Minimum Rel. Area Dark: The minimum (relative) area identified as dark objects for the segmentation to be performed. Range 0-1.
Minimum Object Size: The minimum object size for the segmentation to take place.
Table 2.2. The parameters associated with the contrast split segmentation.
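The threshold search can be sketched in Python as follows. This is an illustration only, assuming contrast is measured as the difference between the bright and dark mean values; Definiens offers several contrast modes and its implementation may differ:

import numpy as np

def best_contrast_threshold(layer, t_min, t_max, step):
    # Scan candidate thresholds and keep the one maximising the contrast
    # between the bright pixels (>= threshold) and dark pixels (< threshold).
    best_t, best_contrast = None, float("-inf")
    for t in np.arange(t_min, t_max + step, step):
        bright, dark = layer[layer >= t], layer[layer < t]
        if bright.size == 0 or dark.size == 0:
            continue  # threshold lies outside the data range
        contrast = bright.mean() - dark.mean()
        if contrast > best_contrast:
            best_t, best_contrast = t, contrast
    return best_t

# Example: a layer with dark (~20) and bright (~200) regions.
rng = np.random.default_rng(0)
layer = np.concatenate([rng.normal(20, 5, 500), rng.normal(200, 10, 500)])
print(best_contrast_threshold(layer, t_min=0, t_max=255, step=5))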
To execute this algorithm you will need to create two classes, one for the bright objects and one for the dark objects. To do this within Definiens Developer, right-click within the class hierarchy window and select New Class; you do not need to enter any parameters at this point, so just select OK.
Using one of the segmentations you have previously generated, extend your process to include a spectral difference segmentation process. Remember, to add a process right-click and select 'Append New', in this case on the previous segmentation process. Again, try to achieve a segmentation that you think is the best for the landscape within the scene.
Parameter Description
Chessboard Settings: The chessboard segmentation parameters for producing the final segmentation from the filter results.
Parameter Description
Shape Criteria Value: Larger values reduce the inclusion of irregularly shaped objects.
Parameter Description
Enable Class Assignment: If set to no, the remaining parameters are not used.
No Objects: The class that pixels with the value 'No Objects' will be given.
Object in First Layer: The class that pixels with the value 'Object in First Layer' will be given.
Object in Second Layer: The class that pixels with the value 'Object in Second Layer' will be given.
Object in Both Layers: The class that pixels with the value 'Object in Both Layers' will be given.
Table 2.5. The classification parameters for the contrast filter segmentation.
2.11. Conclusion
Following the completion of this unit you should have knowledge of all the segmentation processes available within Definiens Developer and have implemented each of the algorithms on the image provided.
2.12. Exercises
1) Decide on the most appropriate segmentation algorithm for segmenting this scene. As you do this, consider which elements provide a good segmentation and how the different characteristics of the various algorithms could be used to achieve the segmentation you require.
Unit 3: Nearest Neighbour Classification
Level:
• Beginner
Time:
• This unit should not take you more than 1.5 hours
Resources:
• A licence of Definiens Developer (Version 7.0 was used to develop
these units).
• The multispectral Landsat 7 image orthol7_20423xs100999.img and its corresponding panchromatic scene o20423_pan.tif (Path: 204 Row: 23, Date: 10/09/1999) over North Wales, available from the Landmap service.
Processes:
• NN_Classification_Process.dcp
3.1. Introduction
Within this worksheet you will create a nearest neighbour classification of a
segmented Landsat 7 image of the area around Llyn Brenig, Denbigh Moors,
North Wales (Figure 3.1). This area contains extensive tracts of upland heath
and bog as well as coniferous forest plantations and grasslands at various
levels of improvement.
Figure 3.1. Ordnance Survey Map of the study area
(http://www.ordnancesurvey.co.uk/getamap).
Please note the order in which the image files have been loaded, i.e., the panchromatic band first, as this will determine the image resolution for the project. In this case the 25 m multispectral Landsat 7 data will be resampled to the 15 m of the panchromatic data.
Once you have matched your project window to that shown in Figure 3.2, select OK to create your project.
To insert a class, right click in the Class Hierarchy window and select ‘Insert
Class’ (Figure 3.4).
Figure 3.4. Inserting a class into the class hierarchy.
The next step is to edit your class description by first giving your class a name. For example, give the class the name "Water" and assign a blue colour. When you have done this, insert and name new classes of "Forest", "Other Vegetation" and "Not Vegetation". You should then have four classes inserted and named:
• Water
• Forest
• Other Vegetation
• Not Vegetation
After giving each class a name, select an appropriate colour for each. This can be anything you wish, although the final classification will be easier to understand and interpret if you choose a logical colour (e.g., green for Forest).
Next, the features (e.g., mean object spectral response) to be used for
classification (in this case, the standard nearest neighbour algorithm) need to
be inserted into the class. To do this, right-click on the ‘and (min)’ and select
‘Insert new Expression’ (Figure 3.6).
This will present the window (Figure 3.7), where you need to select ‘Standard
Nearest Neighbour’ and click Insert.
Figure 3.7. Selecting the expression to be used for the classification.
Your resulting class description should be similar to that shown in Figure 3.8
for the forest class.
Figure 3.8. The resulting class description to be used for the classification.
The same procedure now needs to be repeated for the remaining three
classes so that you end up with a classification hierarchy similar to that shown
in Figure 3.9.
Figure 3.9 The final class hierarchy.
To select the features used for the nearest neighbour classification use the
‘Edit Standard NN feature Space…’ function, Figure 3.10a, where initially you
should just use the mean spectral values of the objects, Figure 3.10b.
a) The menu for editing the NN feature space. b) The dialog for selecting the features within the NN feature space.
Figure 3.10. Editing the features used within the nearest neighbour classification.
Note that the layer weighting for the panchromatic band (PAN) has been increased to 2. This is to take advantage of the extra spatial resolution of the panchromatic band (15 m rather than the 25 m of the multispectral bands).
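To make the role of the layer weights concrete, here is a minimal sketch of a weighted nearest neighbour classifier. It is an illustration only: the standard nearest neighbour in Definiens also normalises the features and returns fuzzy membership values, which are omitted here:

import numpy as np

def weighted_distance(obj, sample, weights):
    # Feature-space distance with per-layer weights applied to the differences.
    return float(np.sqrt(np.sum((weights * (obj - sample)) ** 2)))

def classify_nn(obj, samples, weights):
    # Assign the class of the closest sample in the weighted feature space.
    return min(samples, key=lambda cls: min(weighted_distance(obj, s, weights)
                                            for s in samples[cls]))

# Feature order: BLUE, GREEN, RED, NIR, SWIR1, SWIR2, PAN (PAN weighted by 2).
weights = np.array([1, 1, 1, 1, 1, 1, 2], dtype=float)
samples = {
    "Water":  [np.array([60, 50, 40, 20, 10, 8, 30], dtype=float)],
    "Forest": [np.array([50, 60, 45, 120, 60, 30, 70], dtype=float)],
}
obj = np.array([55, 52, 42, 25, 12, 9, 32], dtype=float)
print(classify_nn(obj, samples, weights))  # Water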
3.4.2. Classification
After inputting the parameters into the process, click the OK button at the bottom. Note that you will need to select samples before performing your classification. Your process tree should now be similar to that shown in Figure 3.14.
Figure 3.14: The process tree after the inclusion of the classification process.
The next step is to set up the processes which will merge your classification so that all neighbouring objects of the same class form single objects. It is important to merge your classification to identify complete objects; for example, once merged you can query the lake to find its complete area. To merge the result you will need to enter a merge process for each class (Figure 3.16; 'Insert Child'); the merge parameters for the Forest class are shown in Figure 3.15. The class to be merged is defined using the Image Object Domain, where the class of interest is specified. If you were to select multiple classes, all the selected classes would be merged, removing the boundaries and classification of these objects.
Figure 3.15. The process parameters to merge the Forest class.
To save time, once you have created your first merge process you can copy-and-paste it (Ctrl-C, Ctrl-V or right-click on the process) to duplicate it and then edit the class you wish to merge.
Finally, we usually wish to export the classification result from Definiens into a
GIS for further processing or the production of a map. Therefore, our final
process will be to export the classification to an ESRI shapefile, Figure 3.17.
Figure 3.17. Process parameters to export the classification as a shapefile.
To select the classes to export you again edit the Image Object Domain; remember, these parameters define the image objects the process will be applied to. The name of the output shapefile has been defined as 'Classification', while the features to be exported are the area (of the image object) and the class name. Area is found under Object Features > Shape > Generic, while class name is found under Class-Related Features > Relations to Classification > Class name. The class name feature needs to be created: right-click on 'Create new Class name' and select Create; leave the parameters at their default values and just select OK. The shapefile will be output to the directory in which your project is saved; if you have not yet saved your project, the shapefile will be output to the directory containing the input imagery.
Your final process tree should then be the same as the one shown below in Figure 3.18.
Once you have activated sample selection, highlight the class you wish to create a sample for in the class hierarchy window. Either double click on the objects you wish to select as samples or hold down the Shift key and use a single click. To deselect a sample, repeat the selection process for each chosen object.
To aid the selection of your samples, Definiens Developer offers two windows
(both available from the menu in Figure 3.19) of information based on the
selected samples. Firstly the ‘Sample Editor’ window (Figure 3.20) and
secondly the ‘Sample Selection Information’ window (Figure 3.21).
Figure 3.20. Sample Editor Window
The Sample Editor provides a visual comparison of two classes using a range
of selected features. In Figure 3.20 the Forest and Water classes are
compared using the object means from each spectral band of the Landsat
data. When an object is selected, a red arrow is displayed to illustrate where
the object mean fits in relation to the mean of the other samples. To change
the displayed features, right-click within the main window and select ‘Features
to Display…’ or, if you only want the features being used within the NN
calculation, select ‘Display Standard Nearest Neighbour Features’.
Once you have selected your samples, you should have an image which is similar in appearance to that shown in Figure 3.22. Bear in mind that the selection of samples has no single correct answer. Just select the samples you consider to be most representative of the classes you wish to separate and which also give the best separation in the Sample Editor and Sample Selection windows.
Once you are happy with your classification, execute the merge image objects processes you have previously created; your results should appear similar to those shown in Figure 3.19b.
Finally, run the process to export the results. This will result in an ESRI
shapefile and allow the creation of a map such as the one shown in Figure
3.20.
Figure 3.20. A map produced using ESRI ArcMap from the result of the Definiens
classification
To use this tool, select the features you wish to compare: initially try the mean, standard deviation and the pixel ratio, but later try other combinations. Then select Calculate; once the calculation has finished, select Advanced to see which features offered the best separation, and 'Apply to Std. NN' to use them within the classification.
3.9. Exercises
1) Experiment with different segmentation parameters, both within the multi-
resolution segmentation and the other segmentation algorithms. Be aware
that you will have to select new samples each time you delete the level.
Unit 4: Rule Based Classification
Level:
• Beginner
Time:
• This unit should not take you more than 1 hour
Resources:
• A licence of Definiens Developer (Version 7.0 was used to develop
these units).
• The multispectral Landsat 7 image orthol7_20423xs100999.img and its corresponding panchromatic scene o20423_pan.tif (Path: 204 Row: 23, Date: 10/09/1999) over North Wales, available from the Landmap Service.
Processes:
• Rulebased_Classification_Process.dcp
4.1. Introduction
Following on from the previous unit, you will now implement a more
detailed rule-based classification by using thresholds manually defined within
the class hierarchy rather than a nearest neighbour classification.
This unit uses the same Landsat 7 subset of Llyn Brenig in the Denbigh Moors, North Wales, although it now aims to identify more classes to increase the detail of the habitat classification. The aim of the unit is to provide you with experience in entering thresholds for a rule-based classification and creating the corresponding processes. You are not expected to identify any thresholds as these will be given; the next unit will cover techniques commonly used to identify them.
1) Absolute Thresholds
2) Fuzzy Thresholds
When selected, you will be presented with the window shown in Figure 4.2.
Here, you set the threshold and the operator (e.g., <, ≤, =, > or ≥).
Within the class description you can add as many of these thresholds as you
require. You can also include ‘and’ and ‘or’ statements, as shown in Figure
4.3. By default, all the features you introduce are considered within an ‘and’
statement and therefore all thresholds have to be met for the object to be
classified. On the other hand, if the statement is an ‘or’ statement, only one of
the thresholds needs to be met for the object to be classified. By combining
these statements (as shown in Figure 4.3), more complex class descriptions
can be developed.
Figure 4.3: A class description using both ‘and’ and ‘or’ statements.
To edit the ‘and(min)’ to ‘or(max)’, right-click on the ‘and(min)’ (Figure 4.4) and
select ‘Edit Expression’.
Within the resulting window (Figure 4.5), select 'or(max)' and click OK. To add 'and(min)' operators beneath the 'or(max)' (as in Figure 4.3), right-click on the 'or(max)' as before and select 'Insert new Expression'. From the list of features (see Figure 4.1), you will find the same operators (at the bottom) shown in Figure 4.5. By selecting 'and(min)' and then adding other features/thresholds under this operator, you can create structures similar to those in Figure 4.3.
Forest = 0.8
Water = 0.7
Urban = 0.2
In this example, the object is assigned to the class Forest, but the fuzzy membership values of the other classes (Water and Urban) are also retained within Definiens Developer to give a fuller picture of the contents of the object. Since the introduction of processes into the functionality of Definiens Developer, careful consideration needs to be given to the use of fuzzy logic thresholds. Therefore, for most of these units, only absolute thresholds are included.
Once you have matched your project window to that shown in Figure 4.7, select OK and create your project.
NDVI = (NIR − RED) / (NIR + RED)
Equation 4.1. The normalised difference vegetation index.
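As a quick sketch, Equation 4.1 in Python, where nir and red are taken to be the mean NIR and RED values of an image object:

def ndvi(nir, red):
    # Normalised difference vegetation index; values lie between -1 and 1.
    return (nir - red) / (nir + red)

print(ndvi(nir=120.0, red=40.0))  # 0.5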
To set up the customised feature, open the feature view ( ; Object features > Customised) and select 'Create new Arithmetic Feature'; this will produce a dialog in which you enter your customised feature (Figure 4.8a). Enter the NDVI into this customised feature (Figure 4.8b) and select OK.
a) Empty Edit Customised Feature b) Customised Feature for NDVI
Figure 4.8. Edit Customised Feature.
Tables 4.1 to 4.6 give the thresholds for each class. Note that when both an upper and a lower boundary are required, a membership function (see the explanation of fuzzy logic) can be used (see Figure 4.10).
Bog/Heath
Mean GREEN > 30
Mean GREEN < 42
Table 4.2. Rules of the class Bog/Heath.
Forest
Mean NIR < 100
Mean SWIR1 < 40
Mean NDVI > 0.3
Mean NDVI < 0.6
Table 4.3. Rules for the class Forest.
Improved Grassland
Mean NIR > 100
Mean NDVI >= 0.5
Table 4.4. Rules for the class Improved Grassland.
Not Vegetation
Mean NDVI <= 0.275
Table 4.5. Rules for the class Not vegetation.
Water
Mean NDVI <= 0.05
Table 4.6. Rules for the class Water.
Figure 4.10. Setting a membership function with an upper and lower bound.
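The simplest such membership function gives full membership between the two bounds and none outside; Definiens also offers soft-edged fuzzy variants. A minimal sketch:

def range_membership(value, lower, upper):
    # Crisp membership: 1 inside the [lower, upper] interval, 0 outside.
    return 1.0 if lower <= value <= upper else 0.0

# Example using the Bog/Heath bounds on mean GREEN from Table 4.2.
print(range_membership(35, lower=30, upper=42))  # 1.0
print(range_membership(50, lower=30, upper=42))  # 0.0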
4.6.1. Segmentation
Figure 4.12. Parameters used for the segmentation of the Landsat image.
4.6.2. Classification
The classification process is similar to the previous unit but here each
class will be classified with a separate classification process and the
classification will only be performed on the objects which are remaining to be
classified. Therefore, you need to update your process tree to appear like the
one in Figure 4.13, please make sure you have the same order as shown as
the order is important for the classification to work correctly.
Figure 4.13. The process tree including the classification processes.
Figure 4.14 shows the parameters for the classes Water and Not Vegetation. Make sure that you match these parameters, paying attention to the Image Object Domain for the Not Vegetation classification process, which restricts the classification to only those objects which are currently unclassified.
By classifying the scene in this way, the aim is to first remove those elements which can be easily identified and classified, in this case water, from the scene before classifying the next class.
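The logic of this ordering can be sketched as a sequence of rules applied to whatever remains unclassified, using the thresholds from Tables 4.2 to 4.6 (the exact process order should follow Figure 4.13):

def classify(obj):
    # obj is a dict of mean feature values for one image object.
    if obj["NDVI"] <= 0.05:
        return "Water"
    if obj["NDVI"] <= 0.275:
        return "Not Vegetation"
    if obj["NIR"] > 100 and obj["NDVI"] >= 0.5:
        return "Improved Grassland"
    if obj["NIR"] < 100 and obj["SWIR1"] < 40 and 0.3 < obj["NDVI"] < 0.6:
        return "Forest"
    if 30 < obj["GREEN"] < 42:
        return "Bog/Heath"
    return "Unclassified"

print(classify({"NDVI": 0.02, "NIR": 15, "SWIR1": 10, "GREEN": 25}))  # Water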
The merging and export operations are, again, the same as those used in Unit 3 but with the inclusion of the extra classes. Therefore, your final process tree should be like the one shown in Figure 4.15.
Figure 4.15. The final process tree.
4.8. Conclusion
Following the completion of this unit you should now be aware of how to define an object oriented rule based classification within Definiens Developer. Using a rule based classification can allow you to encode expert knowledge; for example, Lucas et al. (2007) developed an object oriented rule based classification for upland habitats within Wales using Definiens Developer to encode the expert knowledge of ecologists. One of the problems with rule based classification is defining the rules used within the classification; the next unit will go through the techniques available within Definiens to aid the development of these rules.
4.9. Exercises
1) Experiment with different segmentation algorithms and parameters; you should not have to edit the thresholds you have already entered to reclassify the resulting segments, but you may notice varying levels of accuracy between different segmentations.
3) In addition to the rules used within the classification, there may be other features available within Definiens Developer which could aid the classification. Review the features available and try including extra features in (or removing currently used features from) the classification to try and improve the result. Please refer to the reference guide for details of other features.
References
Lucas, R.M., Rowlands, A., Brown, A., Keyworth, S. and Bunting, P. (2007). Rule-based classification of multi-temporal satellite imagery for habitat and agricultural land cover mapping. ISPRS Journal of Photogrammetry and Remote Sensing, 62(3), 165-185.
Unit 5: Threshold Identification
Level:
• Beginner
Time:
• This unit should not take you more than 2 hours
Resources:
• A licence of Definiens Developer (Version 7.0 was used to develop
these units).
• A copy of the image ‘Identification_of_Thresholds.tif’ generated for
this exercise.
Processes:
• ClassifyImage_example.dcp
• ClassifyImage.dcp
By the end of this unit you should:
• Be aware of the different tools and methods available within Definiens
Developer to help you identify suitable thresholds to undertake a
classification.
5.1 Introduction
Through this unit, you will go over a number of techniques to aid the
identification of thresholds. To illustrate the techniques more easily and
simply, an artificial image (Figure 5.1) has been created and will be used
throughout this unit. Afterwards, you can try these techniques on actual
data acquired by remote sensing instruments.
Figure 5.1. Artificial image created to illustrate the different techniques of threshold
identification.
5.2. Getting started within Definiens Developer
As before, the first step is to create your project using the image
“Identification_of_Thresholds.tif” as input, where Bands 1, 2 and 3 should be
named Red, Green and Blue respectively. When viewing the image use no
stretch to see the same image as shown in Figure 5.1.
As with previous projects, the first step is to create your process outline (Figure 5.2) and perform a multi-resolution segmentation. The parameters for the segmentation are shown in Figure 5.2, via the automatically generated process name.
Initially, when creating the class hierarchy, create empty classes (without
associated features) as shown in Figure 5.3. Through the processes outlined
below, you will identify and create the required thresholds within the classes.
You also need to create new processes to perform the classification once you
have created the rules within your class hierarchy. You can do this in two
ways:
1) Create an individual classification process for each class, as in the previous unit, or
2) Create a single process, editing it while developing the rules, and finally select all classes and classify them in one process once the rules have been developed (Figure 5.4).
5.2.4. Create the merging processes.
These are the same as the processes created in the previous unit and
need to be created for each of the classes within the hierarchy, as shown in
Figure 5.4.
But, also, the extent to which you know your imagery in terms of:
• The range of values.
• What you are seeing (e.g., what is vegetation type X likely to be doing at the time of image capture?).
• The nature of the objects you are trying to extract (e.g., in the form of
a model such as a ‘hill and valley’ model for tree crown delineation).
• Interpreting the colour you can see within the image. For example, if
the object is yellow in the image, which bands need to be used for
classification?
But above all it comes down to experience! So, take your time going through the following exercises and consider how the features and options outlined above help. Experiment with each of these, decide which ones you are most comfortable with, and use those. Note that you will quite often produce a different result using these different methods, but there is no 'right' answer; the most important consideration is that your classification works and is appropriate to your application.
The feature view window (Figure 5.5) can be used to colour the objects within the scene (using a colour bar) based on a single feature. The upper (green) and lower (blue) bounds can be edited manually. Moving these bounds until only the area of interest is coloured allows suitable upper and lower threshold values to be identified. These values can then be inserted as a rule into the appropriate class.
Write down the brightness thresholds in the table below. Note that not all classes can be identified using the brightness feature alone.
Object Thresholds
Black object
Blue object
Green object
Orange object
Red object
Yellow object
White background
b = (1 / n_L) · Σ_{i=1..n_L} c̄_i
where c̄_i is the object's mean value in image layer i and n_L is the number of image layers; that is, the brightness b of an object is the mean of its mean layer values.
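In code this is simply the mean of the object's mean layer values, for example:

def brightness(mean_layer_values):
    # Mean of the object's mean layer values over all n_L image layers.
    return sum(mean_layer_values) / len(mean_layer_values)

print(brightness([200.0, 180.0, 220.0]))  # 200.0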
Now, try identifying further thresholds for the objects which could not be
separated using the brightness feature by using other features. Once you
have identified thresholds for each of the classes, add these thresholds into
the class descriptions and run the classification process.
5.3.2 Sample Editor
Before using the Sample Editor, delete your existing classification (using
‘Classification > Class Hierarchy > Delete Classification’). If you have fused
the classes previously, delete the level and resegment. Now, turn on the
‘Sample Selection’ and select samples for each of the classes (in the same
way as with the NN classification preparation). Once you have created your
samples, open the Sample Editor window and select the features you wish to
compare by right-clicking within the window and selecting 'Select Features to Display…'. Then, move the features you wish to display in the feature editor
to the right-hand side of the window and click OK. Now, using the drop down
boxes at the top of the window (Figure 5.7), select the two classes you wish to
compare. Note that the features you wish to display need to match those specified for use by the NN classifier.
From Figure 5.7, you can see that the black and yellow classes have a good
separation using the features ‘brightness’, ‘mean red’ and ‘mean green’ but a
reduced separation in the ‘mean blue’ feature. Therefore, you can start to get
a feel for where suitable thresholds may exist. By continuing the process
through comparing the black class to all others, you should be able to identify
feature rules or combinations of these that separate the classes of interest.
Again, identify features with their thresholds to separate the given classes and list them below. These may differ from those you might have listed using the Feature View. After you have defined these, add the new thresholds to the hierarchy and classify your image.
Object Thresholds
Black object
Blue object
Green object
Orange object
Red object
Yellow object
White background
Within Definiens Developer, another useful function for identifying the features that provide the best separation of classes is the Feature Space Optimization tool. The function is available through the icon or by navigating through Classification > Nearest Neighbour > Feature Space Optimization. To use the function, you again need to create samples representing each of your classes and then run the Feature Space Optimization. The Feature Space Optimization window will be similar to that in Figure 5.8.
The first step is to select the classes you wish to consider. In Figure 5.8, all of the available classes have been selected, but you can select a subset of classes if you want to focus on these. Second, select the features you wish to consider for the separation, and select the level (if appropriate) you wish to work on (levels are discussed in a later unit, so for the moment you do not need to worry about this). Finally, you need to select the number of dimensions you wish to consider, which equates to the maximum number of features you want to use together to identify the best separation of your classes.
Once you have entered those parameters, click on ‘Calculate’ and you will
notice that numbers appear in the lower left box. These indicate a) the best
separation distance and b) the number of dimensions (features) used to arrive
at that separation. To see which features were used to identify a separation,
click on ‘Advanced’ and a dialog similar to that shown in Figure 5.9 will
appear.
Figure 5.9. Advanced results window for the Feature Space Optimization.
In Figure 5.9, the most significant information is within the textbox where, as you saw in the previous window (Figure 5.8) and in the displayed graph, 5 dimensions produce the best separation of the classes. By scrolling down to the 'Dimension 5' information you can discover the features which produced the separation. You could now, by using the 'Apply to Std. NN' button, add these features to the standard nearest neighbour and use the nearest neighbour classifier, but here we are identifying thresholds so we will not do this.
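The idea behind the tool can be sketched as a search over feature subsets, here scoring each subset by the smallest distance between samples of different classes. This scoring is an assumption for illustration; Definiens' exact distance measure and normalisation are documented in the reference guide:

from itertools import combinations
import numpy as np

def separation(samples, feats):
    # Smallest between-class sample distance using the given feature indices.
    feats = list(feats)
    return min(
        float(np.linalg.norm(s[feats] - t[feats]))
        for ca, sa in samples.items()
        for cb, sb in samples.items() if ca < cb
        for s in sa for t in sb
    )

def optimise(samples, n_features, max_dim):
    # Try every feature subset up to max_dim dimensions; keep the best one.
    subsets = (f for d in range(1, max_dim + 1)
               for f in combinations(range(n_features), d))
    best = max(subsets, key=lambda f: separation(samples, f))
    return best, separation(samples, best)

samples = {
    "black":  [np.array([10.0, 10.0, 12.0])],
    "yellow": [np.array([220.0, 210.0, 30.0])],
}
print(optimise(samples, n_features=3, max_dim=2))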
Now that you have identified the features which give the best separation for all classes, experiment to identify those features which are most suited to the separation of individual classes or groups of classes. Afterwards, use that knowledge and the two techniques above to refine the thresholds required for the classification.
List these in the table below.
Object Thresholds
Black object
Blue object
Green object
Orange object
Red object
Yellow object
White background
Another interface Definiens Developer offers for exploring the data is the Object Information window, which is accessed using the icon. When an object is selected, this window displays the values of the selected features. To select the features you wish to have displayed, right-click within the window and select 'Features to Display…'. To help the identification of thresholds (once you have an initial classification), you can go through the objects you judge to be in error and find the reason for the errors. Through this approach, you can adjust your thresholds accordingly and subsequently refine your classification.
Object Thresholds
Black object
Blue object
Green object
Orange object
Red object
Yellow object
White background
The order of classification does not directly aid the identification of thresholds, but it does provide another layer of logic that you can include within your classification hierarchy and associated processes. By using the classification process and limiting the objects being considered for classification, the rules within your hierarchy can be made simpler. For example, in the previous unit the class description for 'Acid Semi Improved Grassland' was left empty, but by classifying objects to the other classes first and restricting the classification of 'Acid Semi Improved Grassland' to the currently unclassified objects, this class can be identified. This class might otherwise be very difficult and complex to identify because of the variation in the data values associated with the broad range of vegetation types likely to exist within this class.
One of the most important aspects of classification is to know what you are
viewing and equally what you are not viewing within the imagery. For
example, in the Landsat 7 imagery for North Wales you have used for the
previous units, the date of the imagery is important as the vegetation
behaves differently at different times of the year and will therefore need a
different set of rules at different times. Equally, with temporal data from
different seasons these variations can be exploited for identifying and
classifying the land cover.
Also, in knowing your imagery and the objects you wish to classify you may be
able to think of them in the form of a model. For example, when trying to
identify tree crowns, it is useful to visualise the image as conforming to a ‘hill
and valley’ model, where the crowns form the hills. This can be used to
identify seeds at the crown tops (brightest parts of the image on the hill tops)
which can be expanded to identify the crown edges (in the valleys).
The image you are seeing on the screen is displayed (for the most part) as a Red, Green and Blue (RGB) composite; therefore, if an object looks red on the screen you know it must have a large contribution from the channel you are displaying as red. From this observation, you can use the channel displayed in red in the classification. Figure 5.10 shows the RGB colour space; by considering the colours you observe in the image and in this figure, you can start to establish which channels are contributing to the appearance of the image as displayed in a particular colour combination. Note that when using this approach you should also consider the stretch you are applying to the image to enhance the display, as this can change the colours you see and the contrast between features.
Figure 5.10. RGB Colour model.
5.3.8. Experience
Finally, and perhaps the most important thing to recognise, is that it takes
experience to become good at identifying thresholds and developing the
processes and methods which fit around those thresholds and which form
your classification. The more imagery you gain experience with, the better
you will become at classifying and you’ll be able to apply your knowledge from
one set of imagery to the next.
5.4. Conclusions
Following the completion of this unit you should now be aware of the tools and concepts through which you can identify the thresholds you will require to classify a scene using a rule base.
5.5 Exercises
1) Experiment with different segmentation algorithms and parameters. You
should not have to edit the thresholds you have already entered to reclassify
the resulting segments.
Unit 6: Working with Levels
Level:
• Intermediate
Time:
• This unit should not take you more than 1.5 hours
Resources:
• A licence of Definiens Developer (Version 7.0 was used to develop
these units).
• A copy of the image ‘crowns_forest_image.tif’ generated for this
exercise.
Processes:
• LevelsExampleProcess.dcp
6.1. Introduction
Within this unit, you will learn how to use Levels within Definiens Developer
and some of the features which allow interaction between levels. These
features increase the knowledge available within the system as different
scales of information are used.
To illustrate the use of levels within Definiens Developer you will use an artificial image that has been created for this unit (Figure 6.1). Within this image, the green objects represent trees, which will be segmented and classified on the first level (herein referred to as Level 1), and a second level (herein referred to as Level 2) will be created to represent the forest extent. To identify the forest extent, more complex processes will be required to fill in the gaps between the crowns to create the forest mask.
Figure 6.1. Image to be used for classification.
As before, the first step is to create your project using the image "crowns_forest_image.tif" as input, where bands 1, 2 and 3 should be named Red, Green and Blue respectively. Make sure the unit is set to pixels and geocoding is turned off. When viewing the image use no stretch to see the same image as shown in Figure 6.1.
6.2.2. Create process and perform segmentation
As with previous projects, the first step is to create your process outline
(Figure 6.3) and perform a multi-resolution segmentation to create Level 1.
The parameters for the segmentation are given in Figure 6.3.
Add the processes shown in Figure 6.6 to your process tree under the 'Classification > Level 1' and 'Merge and Tidy' processes and execute them.
Figure 6.6. Process for classification and merging at level 1.
The next process is to copy Level 1 to create Level 2. To create the process,
insert a new process under the ‘Creation Level 2’ process and input the
parameters shown in Figure 6.7.
Reproduce the processes shown in Figure 6.8 in the process hierarchy under the 'Classification > Level 2' process, but notice that the first process has a restriction that the object needs a border to an object of the class Crowns. To define this restriction, click on the 'no condition' button in the Edit Process dialog and edit the resulting dialog to match that shown in Figure 6.9.
Figure 6.8. The classification processes for Level 2.
Figure 6.9. The classification process to identify objects within a border to a crown.
The feature 'Rel. border to Crowns' is used rather than 'Border to Crowns' as it is normalised and therefore independent of the object size and border length, creating a more stable threshold.
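A minimal sketch of this normalisation, using hypothetical border lengths (Definiens computes these for you):

def rel_border_to(border_lengths, target_class):
    # border_lengths maps each neighbouring class to the shared border length.
    total = sum(border_lengths.values())
    return border_lengths.get(target_class, 0.0) / total if total else 0.0

# An object with 3/4 of its border shared with Crowns objects.
print(rel_border_to({"Crowns": 30.0, "Background": 10.0}, "Crowns"))  # 0.75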
The final part of the classification is to fuse and tidy the result. To do this, you
need to reproduce the processes shown in Figure 6.10.
Figure 6.10. Processes to tidy and merge to give the final result.
You should already be familiar with the fusion process but take note of which
processes require execution on Level 1 and Level 2. To switch the Level,
remember to use the ‘Parameter…’ button next to the drop down box.
The new process here fills in the gaps within the areas of forest, so when
executing it is worth stepping through the processing one step at a time to
observe the workings of each process. The parameters required for the
process which fills the gaps are given in Figure 6.11.
Figure 6.11. Parameters for the process which fills any gaps within the forested areas.
6.3. Results
Once the process has been executed you should see a result at each level,
as shown in Figure 6.12.
6.4. Conclusions
Following the completion of this unit you should be aware of the concept of
levels within Definiens Developer, how to implement them and how to
represent the relationships between the levels. You have also come into
contact with another process, in this case the ‘fill enclosed by class’ process.
6.5. Exercises
1) Experiment with different segmentation strategies when creating a new
level. Figure 6.13 demonstrates a multi-resolution segmentation process
which will create a new Level above the existing one.
Figure 6.13. A segmentation process which creates the segmented layer as a new level
above the existing one.
2) Examine and experiment with the other features which allow interaction
between objects within and between levels (e.g., Relations to sub objects and
Relations to super objects). Note that super objects are those on the level
above while sub objects are those on the level below.
3) Explain below why the class background on Level 1 cannot be fully fused to
create one large object.
______________________________________________________________
______________________________________________________________
______________________________________________________________
______________________________________________________________
______________________________________________________________
______________________________________________________________
______________________________________________________________
______________________________________________________________
______________________________________________________________
______________________________________________________________
______________________________________________________________
________
Unit 7: Putting it all together
Level:
• Intermediate
Time:
• This unit should not take you more than 4 hours
Resources:
• A licence of Definiens Developer (Version 7.0 was used to develop
these units).
• The multispectral Landsat 7 image orthol7_20423xs100999.img and
its corresponding panchromatic scene o20423_pan.tif (Row: 204
Path: 23 Date 10/09/1999) over North Wales and available from the
Landmap Service.
Processes:
• LandcoverClassificationExample_provided.dcp
• LandcoverClassificationExample.dcp
7.1. Introduction
The aim of this unit is to allow you to pull together the skills you have
developed within Definiens Developer to produce a single, more complex
example. The process outline will be provided, with a segmentation and initial
classification of elements such as water to illustrate some more advanced
features, but you will be required to identify and enter the thresholds for the
classification of the scene.
Please note the inclusion of the two thematic layers. The first defines the area
of the image where cloud is present and the second defines the upland and
lowland areas of the scene and will be used for segmentation.
The first step is to set out your process tree with the default structure, Figure
7.2.
Once executed, you should observe that the segmentation process has
identified the areas defined within the cloud-cover shapefile. The following
step is to classify these as such and ignore them for the remainder of the
classification process. The classification is performed with reference to the
thematic layer (Figure 7.4) and results in the following process tree, Figure
7.5.
Once the cloud has been removed from the scene, the following
segmentation process, Figure 7.6a, will be added to the process tree, Figure
7.6b. Please note the use of the second shapefile to separate the lowland
and upland regions of the scene. Also, note that the segmentation is being
performed at Level 1 and the Level Usage parameter is set to ‘Use current’.
Once segmented, the upland classes need to be defined using the thematic
layer, Figure 7.7.
The final part of the segmentation is to segment within the upland and lowland
regions to produce the segments for classification. The process and
parameters are shown in Figure 7.8.
Within the Groups tab of the class hierarchy, classes can be placed in a
hierarchy allowing the relationships between the different classes to be
defined. For example, all the forest classes have been placed under the
Forest class. Therefore, Definiens Developer is aware that the Broadleaf
Forest, Coniferous Forest and Young Coniferous Forest classes are all types
of forest. Once classified, if you collapse the Forest group all these forest
regions will be coloured as forest. But be aware that if you merge the Forest
class all the sub-classes will be merged, forming only a single class and
removing information from your classification.
If you are unsure of the classes to be identified please refer to the shapefile
Landcover_classification.shp.
The first step within this classification is to identify the ‘Not Vegetation’
regions within both the upland and lowland regions. The NDVI has been
calculated using a customised feature, which you will need to create (see the
earlier unit), and a threshold of NDVI < 0.25 has been identified (enter this
within the class description of the class ‘Not Vegetation’) to separate the ‘Not
Vegetation’ regions, Figure 7.10. A sketch of the NDVI calculation is given
after Figure 7.10.
Figure 7.10. The process tree for classifying the ‘Not Vegetation’ regions.
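If you are unsure about the customised feature, the underlying calculation is
the standard normalised difference of the near-infrared and red bands. The
following minimal sketch (plain Python; the object-mean band values are
hypothetical) shows the calculation and the threshold test:

    def ndvi(nir, red):
        """Normalised Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
        return (nir - red) / (nir + red)

    # Hypothetical object-mean band values for one image object:
    value = ndvi(nir=52.0, red=48.0)   # = 0.04
    print(value < 0.25)                # True, so the object would be
                                       # classified as 'Not Vegetation'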
Within the ‘Not Vegetation’ regions the areas of ‘Water’ have been identified
using the rules SWIR2 < 15 AND NDVI < 0.1, but when you run these rules
you will notice that not all the areas of ‘Water’ have been identified. This is
because there are still some small regions of cloud over the lake. To correctly
classify these regions we will grow the ‘Water’ class using the ‘Grow Water’
class. The ‘Grow Water’ class contains the rules shown in Figure 7.11, where
the new rule ‘Rel. border to Water > 0’ defines that, to be a member of the
class ‘Grow Water’, the object needs to have a border to a ‘Water’ object.
By defining the process tree as shown in Figure 7.12, the ‘Grow Water’ class
is iteratively classified 10 times (10x: for all, Figure 7.13), where the identified
‘Grow Water’ objects are assigned to the ‘Water’ class between each
iteration. A sketch of this grow loop is given after Figure 7.13.
Figure 7.12. The process tree to classify the regions of water within the scene.
Figure 7.13. Using a process to loop a group of process 10 times.
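The following minimal sketch (plain Python) mimics what this loop achieves;
the Segment class and its classname, swir2, ndvi and neighbours fields are
hypothetical stand-ins for Definiens image objects and their features:

    # Sketch of the iterative 'Grow Water' loop ('10x: for all').

    class Segment:
        def __init__(self, swir2, ndvi, classname="unclassified"):
            self.swir2 = swir2
            self.ndvi = ndvi
            self.classname = classname
            self.neighbours = []

    def grow_water(segments, iterations=10):
        for _ in range(iterations):          # the '10x: for all' loop
            grow = [s for s in segments
                    if s.classname == "unclassified"
                    and s.swir2 < 15 and s.ndvi < 0.1
                    and any(n.classname == "Water" for n in s.neighbours)]
            for s in grow:
                s.classname = "Water"        # 'Grow Water' objects are
                                             # assigned to 'Water'
                                             # between iterations
        return segments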
Finally, the ‘Water’ regions are merged and the remainder of the classification
will concentrate on the vegetation within the scene, where you are required to
develop your own processes and rules.
As with the previous processes, the final steps are to merge and tidy the
classification classes and export the results for use within a GIS. Therefore,
based on the knowledge gained within the previous units, develop these
parts of the process.
7.4. Results
Once you have completed the classification and executed the tidy and
exportation processes you should have results similar to those shown in
Figure 7.14.
a) The classification with Definiens b) The classification with a GIS.
Figure 7.14. The results of the classification.
Finally, it is recommended that you check your classification process against
the model process provided (LandcoverClassificationExample.dcp). To do
this, open another instance of Definiens Developer and set up the same
project structure, then right-click within the process tree window and select
‘Load Rule Set…’, Figure 7.15. You may now execute this process and
should gain the same result as the one shown in Figure 7.14. Observe how
each object can have membership to multiple classes (use the ‘membership
to’ feature and the object information window) as fuzzy membership
functions have been developed for each of the added classes.
Figure 7.15. Importing a saved rule set.
7.5. Conclusions
Following the classification of this scene you should now be confident in
performing your own classifications, including several structured classes and
many of the classification features available within Definiens. It is
recommended that you look through the reference guide within your
installation of Definiens to observe the large number of features available to
you during classification.
7.6. Exercises
1) The classification ruleset provided is a fuzzy classification; therefore each
object has a membership to all the classes. Look up the classification stability
feature and observe the objects which are on the border between two
classes. A sketch of the idea is given below.
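As a hint, classification stability can be thought of as the gap between the
best and second-best fuzzy memberships of an object; the smaller the gap,
the more borderline the object. A minimal sketch (plain Python, with
hypothetical membership values):

    # Hypothetical fuzzy membership values for a single image object:
    memberships = {"Broadleaf Forest": 0.82,
                   "Coniferous Forest": 0.79,
                   "Grassland": 0.15}

    best, second = sorted(memberships.values(), reverse=True)[:2]
    print(round(best - second, 2))  # 0.03: the object sits on the border
                                    # between the two forest classes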
Unit 8: Calculating Image Thresholds
Level:
• Advanced
Time:
• This unit should not take you more than 3 hours
Resources:
• A licence of Definiens Developer (Version 7.0 was used to develop
these units).
• The multispectral Landsat 7 image orthol7_20424xs240799.img (Row:
204 Path: 24 Date 24/07/1999) over South Wales and available from
the Landmap Service.
Processes:
• CalculatingThresholdsExample.dcp
8.1. Introduction
A limitation of the methods which have so far been presented is that the
thresholds for classification have to be identified manually. This is time
consuming, and thresholds can vary between images and even across a
single image. With appropriate image pre-processing (atmospheric correction
and topographic correction) many of these differences can be corrected for,
but not all. Therefore, this unit will demonstrate how Definiens Developer
can calculate thresholds from the imagery and use them for classification, in
this case for cloud and shadow detection.
Once a segment of the chessboard has been selected, an upper and lower
quantile within the segment will be calculated and used as the thresholds for
identifying seeds for the clouds and the shadows within the scene. A fine
segmentation is then performed on the segment, and once these seeds (or
cores) have been identified the remainder of the scene will be used to
recalculate the threshold values. The seeds will then be grown to the limit of
these new thresholds. Finally, the cloud and shadow objects will be merged
and the process will move on to the next large segment until all parts of the
image are processed. A simplified sketch of this per-segment strategy is
given below.
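The following minimal, runnable sketch (Python with NumPy, on a random
‘brightness’ array) illustrates the per-segment idea: chessboard tiles are
visited in turn and local quantiles provide the seed thresholds. The tile size
and the 10%/90% quantiles are assumptions for illustration; the grow and
merge steps are omitted:

    import numpy as np

    # Sketch: per-chessboard-segment quantile thresholds for cloud and
    # shadow seeds. The image, tile size and quantiles are illustrative.

    rng = np.random.default_rng(0)
    brightness = rng.uniform(0, 255, size=(512, 512))
    tile = 128  # chessboard segment size (assumed)

    cloud_seeds = np.zeros_like(brightness, dtype=bool)
    shadow_seeds = np.zeros_like(brightness, dtype=bool)

    for r in range(0, brightness.shape[0], tile):
        for c in range(0, brightness.shape[1], tile):
            block = brightness[r:r+tile, c:c+tile]           # '_active' segment
            lower, upper = np.quantile(block, [0.10, 0.90])  # local thresholds
            cloud_seeds[r:r+tile, c:c+tile] = block > upper
            shadow_seeds[r:r+tile, c:c+tile] = block < lower
            # (the full process then recalculates the quantiles over the
            #  remaining objects and grows the seeds to those new limits
            #  before merging and moving to the next segment)

    print(cloud_seeds.sum(), shadow_seeds.sum())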
The first step is to set up the large chessboard segmentation and iterate
through the segments; the standard process tree is still used, Figure 8.3.
Figure 8.5 shows the process tree which you should have up to this point.
Once an object has the class ‘_active’ it can then be processed individually
using the Image Object Domain filter within a process. Finally, once the
processing has finished, all the remaining ‘_active’ objects need to be
removed to allow the loop to terminate.
8.3.2. Implementing the processing stage
The next stage is to set up the template within the loop to allow the
processing of the individual segments, Figure 8.8.
The process you will use to calculate the thresholds is the ‘compute statistical
value’ process, which allows the number, sum, minimum, maximum, mean,
standard deviation, median and quantile to be calculated. The value from this
calculation is output into a variable; Definiens Developer supports the
concept of variables within the process tree. Definiens offer five variable
types, Scene, Object, Class, Feature and Level, where the scope of the
variable is defined by the type. For example, an object variable will be created
for each individual object, while a Level variable will be defined for an
individual level (i.e., a different value can be stored for each level) and a
scene variable is defined for the whole project (i.e., only one value for the
whole project). For this project you will only use scene variables and calculate
the quantile from the ‘compute statistical value’ process. To calculate the
quantile you first need to define the quantile you are interested in, for example
the 90% quantile; to increase the flexibility of the process we will define a
variable to store this value.
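As a sketch of what the ‘compute statistical value’ process does in the
quantile case, consider the following outline (Python with NumPy). The
brightness values are hypothetical, and the variable names mirror those used
later in this unit:

    import numpy as np

    # Scene variable defining which quantile to compute; storing it in a
    # variable makes the process easy to adjust later.
    Quantile = 90

    # Hypothetical mean brightness values of the '_active' objects:
    active_brightness = np.array([34.0, 41.5, 55.2, 60.1, 72.8, 90.3, 120.4])

    UpperQuantileBrightness = np.quantile(active_brightness, Quantile / 100)
    LowerQuantileBrightness = np.quantile(active_brightness, 1 - Quantile / 100)

    print(LowerQuantileBrightness, UpperQuantileBrightness)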
The next stage of the process, Figure 8.12, is to compute the threshold values
into the LowerQuantileBrightness and UpperQuantileBrightness variables.
Note that the Image Object Domain specifies the ‘_active’ class; the values
are therefore only computed over objects which have the class ‘_active’ and
will vary across the scene as each chessboard segment is selected in turn.
Once these thresholds have been calculated, the classification of the classes
‘Cloud’ and ‘Shadow’ can be completed to produce their respective seeds.
Figure 8.13 shows the class descriptions of the two classes. Note that the
variables UpperQuantileBrightness and LowerQuantileBrightness are used in
place of fixed thresholds. Finally, a classification process is added to the
process tree, followed by merging processes for the two classes, Figure 8.14.
The next stage is to grow the cloud and shadow seeds to identify the full
extent of the clouds and their shadows; this will require another loop. But
first, the thresholds UpperQuantileBrightness and LowerQuantileBrightness
need to be recalculated to provide the thresholds to terminate the loop used
to grow the seeds. The same process as before is used to calculate the new
upper and lower quantiles of the ‘_active’ class. The new values will differ
from those previously calculated because the identified cloud and shadow
seeds no longer have the class ‘_active’ and are therefore not included in this
calculation. To define the loop, create a new process below ‘calculate
threshold’ and tick the ‘Loop while something changes’ option, Figure 8.15.
The elements within the loop will now be inserted as child processes. A
sketch of this loop’s semantics is given after Figure 8.15.
a) Process Tree b) Process parameters
Figure 8.15. The process tree and process parameters to set up the loop.
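The following plain Python sketch contrasts ‘Loop while something changes’
with the fixed ‘10x’ loop used in Unit 7: the child processes are repeated until
a pass reclassifies nothing. The Segment class and its fields are hypothetical
stand-ins for Definiens image objects:

    # Sketch of 'Loop while something changes': repeat the child
    # processes until a pass changes nothing.

    class Segment:
        def __init__(self, brightness, classname="_active"):
            self.brightness = brightness
            self.classname = classname
            self.neighbours = []

    def cloud_grow_pass(segments, upper):
        """One pass of 'Cloud Grow'; returns True if anything changed."""
        changed = False
        for s in segments:
            if (s.classname == "_active" and s.brightness > upper
                    and any(n.classname == "Cloud" for n in s.neighbours)):
                s.classname = "Cloud"
                changed = True
        return changed

    def loop_while_something_changes(segments, upper):
        while cloud_grow_pass(segments, upper):
            pass  # keep looping; terminates when a pass changes nothing

    a, b, c = Segment(200, "Cloud"), Segment(190), Segment(185)
    b.neighbours, c.neighbours = [a], [b]
    loop_while_something_changes([a, b, c], upper=180)
    print([s.classname for s in (a, b, c)])  # ['Cloud', 'Cloud', 'Cloud']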
Once the loop has been defined, the classes ‘Cloud Grow’ and ‘Shadow
Grow’ need to be created, where the class descriptions will be the same as
Cloud and Shadow except for the inclusion of the relative border features,
which restrict the classification to objects bordering the Shadow and Cloud
features, Figure 8.16.
Following the classification of Cloud Grow and Shadow Grow, the two
classes need to be assigned to the Cloud and Shadow classes, before being
merged and any remaining _active class objects assigned to _processed,
Figure 8.17. These three steps (classification, assignment and tidying) will
happen for each iteration of the loop, where the loop will continue until all the
objects fitting the rules have been identified.
Figure 8.17. The process tree.
The next step is to tidy the classification, which consists of three parts. The
first is to assign all the _processed objects to be unclassified and then merge
them. The next is to fill any holes in the cloud or shadow objects with an area
less than 20000 m2. To do this the ‘fill enclosed by class’ process is used,
Figure 8.18, where all the unclassified objects with an area less than 20000
m2 enclosed by cloud are assigned (i.e., use class description = no) to the
class cloud. This is repeated for the shadow class. Finally, the Shadow and
Cloud classes are merged and exported (remember to export the class
names), Figure 8.19. A sketch of the hole-filling logic is given after Figure
8.19.
Figure 8.18. The fill enclosed by class process parameters.
Figure 8.19. The process tree to tidy and export the classification.
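A raster sketch of the hole-filling logic is given below (Python with NumPy
and SciPy). Definiens works on objects rather than pixels, so this is only for
intuition; the pixel size (900 m2, assuming 30 m Landsat ETM+ pixels) and
the connectivity are assumptions:

    import numpy as np
    from scipy import ndimage

    # Sketch: fill holes enclosed by 'cloud' smaller than a given area.

    def fill_small_holes(cloud_mask, max_area_m2=20000, pixel_area_m2=900):
        holes, n = ndimage.label(~cloud_mask)  # label non-cloud regions
        filled = cloud_mask.copy()
        for region in range(1, n + 1):
            pixels = holes == region
            # regions touching the image edge are not enclosed by cloud
            touches_edge = (pixels[0].any() or pixels[-1].any()
                            or pixels[:, 0].any() or pixels[:, -1].any())
            if not touches_edge and pixels.sum() * pixel_area_m2 < max_area_m2:
                filled |= pixels               # enclosed and small: fill it
        return filled

    # Tiny demonstration: a ring of cloud with a one-pixel hole.
    mask = np.zeros((5, 5), dtype=bool)
    mask[1:4, 1:4] = True
    mask[2, 2] = False
    print(fill_small_holes(mask)[2, 2])  # True: the hole has been filled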
Finally, to increase your understanding of the process you can make use of
the ‘Update View’ option (right-click on a process), which is available on
every process and will update the view in the data window after the process
has executed. This will allow you to watch the progress of your classification.
Initially, select ‘Update View’ on the process which selects an active object,
the merging of the cloud and shadow seeds, and the merging of the cloud
and shadow during the grow. Now execute the process and you can watch
your classification being performed. Beware of overusing this feature as
updating the view is a slow process and can significantly increase the
processing time. For example, switching on the three updates as suggested
will double the processing time for this algorithm.
8.6. Conclusions
From this worksheet you should be aware of some of the more advanced
processes and functions available within Definiens, including growing a
class, using variables and calculating thresholds.
8.7. Exercises
1) Although the method superficially works well, there are numerous small
errors where areas of vegetation have been included in the cloud mask;
develop rules to remove this misclassification.