
SCGE - 2011

Summer Course Computational Geo-Ecology

Object-based Land Use Land Cover classification Using eCognition Developer


Traditional remote sensing software, such as ERDAS Imagine or ENVI, mainly uses the spectral information (signatures) of satellite sensors to classify pixels into predefined categories with homogeneous spectral characteristics. Such pixel-based approaches can be divided into unsupervised and supervised techniques. The eCognition Developer software, in contrast, is based on object-oriented classification. This means that a raster image is first segmented into coherent, homogeneous objects. The objects, also called segments, can be manipulated by the interpreter using parameters such as shape, compactness and color. The actual classification takes place after the segmentation into objects. The eCognition Developer software makes it possible to rapidly subdivide a satellite image or a high-resolution (multispectral) air photo into homogeneous surface units, which can be visually checked. It is a rapid tool and ideal for analyzing strongly human-influenced and/or highly fragmented landscapes.
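To make the contrast with pixel-based methods concrete, here is a minimal sketch of the segment-then-classify idea in Python. This is not eCognition's (proprietary) algorithm: scikit-image's SLIC is used as a hypothetical stand-in segmenter, and the array is random toy data.

```python
import numpy as np
from skimage.segmentation import slic

# Toy stand-in for a 3-band scene (green, red, near infrared)
image = np.random.rand(200, 200, 3)

# Step 1: segment the raster into coherent objects ("segments")
segments = slic(image, n_segments=400, channel_axis=-1)

# Step 2: describe each object by statistics of its member pixels;
# a classifier then labels whole objects instead of single pixels
for obj_id in np.unique(segments):
    mask = segments == obj_id
    mean_per_band = image[mask].mean(axis=0)  # per-object mean feature vector
```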

Objective and general method


Our objective is to prepare an object-based Land Use Land Cover (LULC) classification of a satellite image (sensor: SPOT-3) using object-based segmentation and classification. In general, several steps have to be followed in order to perform the classification. In this example, the following steps will be taken and explained in detail. Since you might be new to the software and to the technique, a straightforward cookbook approach is followed here:

1. Create a new project in eCognition Developer
2. Make a rule set for image segmentation
3. Define classification categories and insert the classifier
4. Define training sites on screen
5. Iteratively classify a SPOT-3 scene
6. Classification Accuracy Assessment

Advanced use of eCognition Developer requires more practice, of course.


We use imagery from the French SPOT-3 satellite, whose sensor records a panchromatic band and three multispectral bands covering the visible and near-infrared parts of the Electro-Magnetic Spectrum (EMS). Information about the SPOT satellites can be found at http://www.spot.com; technical information is listed in Table 1.
Table 1. Technical info SPOT sensors: http://www.spotimage.fr/html/_167_224_232_233_.php
sensor          band                      pixel size     spectral range
SPOT 5          Panchromatic              2.5 m or 5 m   0.48 - 0.71 µm
                B1 : green                10 m           0.50 - 0.59 µm
                B2 : red                  10 m           0.61 - 0.68 µm
                B3 : near infrared        10 m           0.78 - 0.89 µm
                B4 : mid infrared (MIR)   20 m           1.58 - 1.75 µm
SPOT 4          Monospectral              10 m           0.61 - 0.68 µm
                B1 : green                20 m           0.50 - 0.59 µm
                B2 : red                  20 m           0.61 - 0.68 µm
                B3 : near infrared        20 m           0.78 - 0.89 µm
                B4 : mid infrared (MIR)   20 m           1.58 - 1.75 µm
SPOT 1 / 2 / 3  Panchromatic              10 m           0.50 - 0.73 µm
                B1 : green                20 m           0.50 - 0.59 µm
                B2 : red                  20 m           0.61 - 0.68 µm
                B3 : near infrared        20 m           0.78 - 0.89 µm

Information regarding eCognition Developer and other features of this software can be found at http://www.definiens.com/. Information on the program and key concepts can be found in the eCognition User Guide, which you may open from the taskbar. Chapter 2 (Key Concepts, pages 10-13) explains the basic concepts and terminology of eCognition. For now you may simply continue and, if necessary, consult this reference document later.

Step 1. Create a new project in eCognition Developer


- Start eCognition Developer: Start | All Programs | eCognition Developer 8.0 | eCognition Developer | rule set mode.
- Switch, if necessary, to the Load and Manage Data view.
- Define a new project: File | New Project | Insert | luxem.img (load from the local data folder on the hard disk).
- In the Create Project window, give the three bands Image Layer Alias names (double-click on the Layer 1, 2 and 3 names to open a new window). Rename as follows: Layer 1 = Green, Layer 2 = Red and Layer 3 = Infrared. The new names appear in the Image Layer Alias fields.
- Give Luxembourg_SPOT as the project name and save the project in your folder on the local disk.


The image composed of the 3 bands appears on your screen in purple-like colors. You need to rearrange the three image layers to have band 3 (near infrared) on top and band 1 (green) at the bottom. This will give you a false color composite image showing reddish colors, reflecting the strong response of vegetation in band 3 (infrared) of the SPOT-3 sensor. Open the Edit Layer Mixing dialog: View | Image Layer Mixing. In the Edit Image Layer Mixing window you can rearrange the mixing of colors/layers. Experiment with shifting which layer is on top.

Figure of the Edit Layer Mixing window in which the infrared band replaces the red band.

Question 1: What happens when you shift the arrangement of layers as in the figure above?

Question 2: Why is it useful for this region to use this arrangement of the layers?

Because we depend on visual interpretation of the segments to be created later, we want to stretch the image to increase image contrast. We will perform a 1% linear stretch, using the same Image Layer Mixing dialog as before. This is done by setting the pull-down menu under Equalizing to Linear (1%). The image now has improved contrast. Stretching allows you to use the full range of the color palette for displaying the image. As the figure shows, the histogram of the original image only contains Digital Numbers (DN) between 84 and 153; stretching these values to the 0-255 range changes the original spectral characteristics to cover the full spectrum. You should therefore be careful when classifying imagery after stretching: it is often best to use the original DN values when performing a classification.
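As an aside, the 1% linear stretch itself is simple to express in code. A minimal sketch with NumPy, assuming a single-band array of DNs; the 84-153 range below just mimics the example histogram from the text.

```python
import numpy as np

def linear_stretch(band, percent=1.0):
    """Map the 1%..99% DN range onto 0..255 for display."""
    lo, hi = np.percentile(band, [percent, 100 - percent])
    stretched = (band.astype(float) - lo) / (hi - lo) * 255.0
    return np.clip(stretched, 0, 255).astype(np.uint8)

band = np.random.randint(84, 154, size=(500, 500))  # toy DNs like the example
display_band = linear_stretch(band)  # for display only; classify original DNs
```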


Under that same pull-down menu there are several other options. Experiment to see how the contrast of the image changes. Make sure you inspect the option labeled None: there is hardly any contrast! Be sure to return the menu to Linear (1%). We now need to change the display colors of the segment selection and polygon outlines that we will produce. Open View | Display Mode | Edit Highlight Colors and a dialog window pops up. Make the Selection color red and the Outlines/Polygons yellow; leave Skeletons at the default. Click Active View to apply the changes; the dialog window closes.

Step 2. Make a rule set for image segmentation


Switch to the Develop Rulesets view. The first step in developing a rule set is to select the segmentation method, which depends on the available data and the objectives of the project. In this module the objective is to classify a SPOT-3 image into land cover categories. You will use multiresolution segmentation to create base objects which will form the basic terrain mapping units before the actual classification starts. It is wise to first read about the algorithms used in the segmentation process in the Reference Book: read pages 37-41 (4.3.4 Multiresolution Segmentation) before you continue.

In the default view mode the Process Tree window is already open; if not, click Process | Process Tree in the main menu bar. Insert a process with the name Mapping Land Use Land Cover by right-clicking in the Process Tree window and selecting Append New | OK. You may always double-click a process in the Process Tree to change its settings. Next, create a child process to actually execute the segmentation: right-click | Insert Child | choose multiresolution segmentation under the algorithm selection and select pixel level under Image Object Domain. In the Level Settings, insert a new Level Name: Analyses. Change the Image Layer Weight of the infrared band to 2 in the Segmentation Settings. Keep the scale parameter at 10 | set the shape factor to 0.3 | and the compactness to 0.4 | OK. The process should now look like:


A basic rule when segmenting a large image is: Create image objects as large as possible and at the same time as small as necessary. Remember it is always possible to aggregate smaller objects into larger ones.

Question 3: Why is the layer weight of the infrared band set to a value of 2?
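While thinking about Question 3, the mechanical effect of a layer weight can be illustrated outside eCognition. The sketch below is an analogy, not the actual multiresolution algorithm: scaling a band up before a generic segmenter (here scikit-image's SLIC) makes differences in that band count more in the homogeneity criterion, so object borders follow the weighted band more closely. SLIC's parameters are unrelated to eCognition's scale, shape and compactness settings.

```python
import numpy as np
from skimage.segmentation import slic

green, red, nir = (np.random.rand(200, 200) for _ in range(3))  # toy bands
weights = np.array([1.0, 1.0, 2.0])   # infrared counts double, as in Step 2

stacked = np.dstack([green, red, nir]) * weights  # broadcast over the bands
segments = slic(stacked, n_segments=500, channel_axis=-1)
```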

Right-click the parent process Mapping Land Use Land Cover and press Execute to run the multiresolution segmentation. Remember this can also be done right away after inserting a child process! You will see that the time needed for processing has been added in the Process Tree window. It is wise to split your viewing screen into two viewers: Window | Split Horizontally. View the results in one window; you may keep the default view in the other window. Use the Show or Hide Outlines icon to show or hide the outlines of the segments and to change between the two display types. Enlarge the image to zoom in on the segments. The image data can be displayed either in object mean mode or with pixel values; note the differences between the two. Experiment with different ways of displaying the image data.

You have now created a simple image object hierarchy consisting of one image object level. A wide variety of (statistical) information can be retrieved from each object within this level, which will be used in the following classification.

Define classification categories


Now that we have our basic segments, we will proceed with the classification of these objects. We will distinguish the following seven land cover classes and assign the following legend colors:

Land Use Land Cover category    Legend colour
Urban                           Grey
Water                           Blue
Deciduous Forest                Dark Green
Coniferous Forest               Green
Agricultural Fields 1           Light Pink
Agricultural Fields 2           Dark Pink
Bare Soil                       Yellow
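For later use outside eCognition (for example when rendering an exported result), a simple lookup from class name to legend colour can mirror this table. The RGB triplets below are assumptions; eCognition lets you pick any colour per class.

```python
# Hypothetical RGB values matching the legend colours in the table above
LEGEND = {
    "Urban":                 (128, 128, 128),  # grey
    "Water":                 (0, 0, 255),      # blue
    "Deciduous Forest":      (0, 100, 0),      # dark green
    "Coniferous Forest":     (0, 180, 0),      # green
    "Agricultural Fields 1": (255, 182, 193),  # light pink
    "Agricultural Fields 2": (219, 112, 147),  # dark pink
    "Bare Soil":             (255, 255, 0),    # yellow
}
```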


In the Class Hierarchy window, be sure you are in the Inheritance register (bottom of the Class Hierarchy window). You now need to create the seven land cover classes listed above. There are two methods to create classes:

1. Go to Classification | Class Hierarchy | Edit Classes | Insert Class, then type the name of the class you wish to add and assign the appropriate color to that class.
2. Or right-click in the Class Hierarchy/Inheritance dialog, select Insert Class, add the class name you want and assign the appropriate color.

Within this window, the classes will be listed in alphabetical order.

Inserting the classifier


For the purpose of this exercise, several options are available in eCognition Developer to classify an image. We will use the nearest neighbor classifier, which is regarded in the literature as an efficient tool for classifying remote sensing images. The eCognition Developer 7 User Guide, pages 103-117, also covers this topic and how to use it in the software.

Figure showing the principle of Nearest Neighbor classification. The Nearest Neighbor classifier returns a membership value between 0 and 1, based on the image object's feature space distance to its nearest sample. The membership value is 1 if the image object is identical to a sample; otherwise the membership value depends, in a fuzzy way, on the feature space distance to the nearest sample of a class. You can select the features to be considered for the feature space.
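The figure's idea can be written down compactly. The sketch below assumes a simple exponential decay of membership with feature space distance; eCognition's actual fuzzy function is configurable (and its exact shape is not specified here), so treat this purely as an illustration.

```python
import numpy as np

def membership(obj_features, class_samples, slope=0.2):
    """Fuzzy membership in [0, 1] from distance to the nearest class sample."""
    dists = np.linalg.norm(class_samples - obj_features, axis=1)
    d = dists.min()                 # feature space distance to nearest sample
    return np.exp(-slope * d ** 2)  # 1.0 when identical, falls off with d

samples = np.array([[120.0, 60.0, 30.0]])   # one hypothetical class sample
print(membership(np.array([118.0, 62.0, 31.0]), samples))  # close, so near 1
```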

How do we apply the Standard NN classifier to all the classes? There are two methods.


Method 1.


In the Class Hierarchy dialog box, double-click on a class to edit the class description for that land use class. For example, double-click on the Urban class to open its class description dialog. Within the Class Description dialog, in the All register, double-click on the and (min) expression to open another dialog window. Alternatively: right-click | Insert New Expression. In this dialog window select Standard Nearest Neighbor | Insert to add it to the class description. Repeat this for the six remaining classes.

Method 2.

To apply the same Standard Nearest Neighbor to all classes in one action, select Classification | Nearest Neighbor | Apply Standard NN To Classes and then select all classes. This method is significantly faster, but only works if you are applying the same expression to all classes.

The standard nearest neighbor classifier is similar to the supervised classification techniques in other image processing software packages such as ERDAS Imagine. In eCognition Developer we have to select which object features in the image we want to use. The spectral characteristics inform us on the separation of categories in this image. We will use the Mean Layer Intensity Values of the image segments (which include five spectral characteristics of these objects) to identify the seven Land Use Land Cover classes. These five spectral characteristics within the Mean Layer Values group are: Brightness, Green, Infrared, Max.diff and Red.

Now select these five characteristics so that they are ready for use in the rule set. In the Class Hierarchy window, edit one of the classes: right-click | Edit | right-click Standard Nearest Neighbor | Edit Expression. In the Edit Standard Nearest Neighbor Feature Space window, open Object Features in the Available register, open Layer Values and double-click Mean, which brings it to the Selected register on the right-hand side. Click OK and again OK. This expression is now valid for all classes!

Now that we have inserted the method (Standard NN) used for our classification of the image, the next thing to do is to actually identify samples (training sites!) to be used in the image classification. These areas of known land use / land cover are called samples in eCognition Developer.
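The five features themselves are easy to picture in code. The sketch below follows the usual descriptions of Brightness (mean of the layer means) and Max.diff (largest difference between layer means, divided by brightness); check the Reference Book for the exact definitions before relying on them.

```python
import numpy as np

def object_features(green, red, infrared, mask):
    """Per-object Mean Layer Value features from band arrays and a boolean mask."""
    feats = {
        "Green":    green[mask].mean(),
        "Red":      red[mask].mean(),
        "Infrared": infrared[mask].mean(),
    }
    means = np.array(list(feats.values()))
    feats["Brightness"] = means.mean()  # mean of the layer means
    feats["Max.diff"] = (means.max() - means.min()) / feats["Brightness"]
    return feats
```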

Define training sites on screen


Make sure that your image objects are displayed in Pixel View and not in Object Mean mode (remember how you switched between these modes earlier in this exercise!). This makes it easier to define the samples. Most objects can be reasonably identified and recognized as belonging to a certain land cover land use category. Use the following expert rules to define your samples:

Land Use Land Cover category    Expert rule for sample selection
Urban                           rapid change in reflections is characteristic for impervious surfaces
Deciduous forest                forests like this often show a shadow on their northern edge
Coniferous forest               large areas of dark reflection, without river patterns
Bare soil                       characterized by dry conditions, almost no vegetation, rectangular patches
Water (River Sûre)              notice the long, winding / meandering pattern of the river
Agricultural Area 1             clearly more biomass on these fields, which shows as reddish colors
Agricultural Area 2             some biomass available in this type of field

Use these samples to train the classification of the SPOT image.

It is wise to display the Samples toolbar via: View | Customize | Toolbars | check Samples | Close.


Now open the Sample Editor by selecting Classification | Samples | Sample Editor. Make the window large enough to also see the text on the right-hand side of the graphs. You will see that the Sample Editor displays five diagrams for feature values. The sample feature values can be compared to those of another class by choosing a different class in the Compare class selection box. For this exercise you will not be comparing one feature class to another, so leave this box set to the default value (none). For ease of comparison, change the y-axes to the full range of Digital Numbers (255) by right-clicking and selecting Display Entire Feature Range.

Now change the active class to Coniferous Forest. Another way of selecting a class to add samples to is by clicking on the class name in the Class Hierarchy dialog. Once you have selected the class in the active class window, click on a sample object representing coniferous forest within the image. With a single click in the image, the Sample Editor marks the object outline with red boundary colors for each feature. To mark an object as a sample for that class, double-click the object. Be sure that Classification | Samples | Select Samples has been activated! After double-clicking, the feature values for that sample are displayed as additional black marks in the histograms. If you select the Show or Hide Outlines icon, the samples will show in full according to the colors assigned earlier.

Go on! Insert five sample objects into the Sample Editor for Coniferous Forest. Then change to a different class and define samples in the same way until you have finished all seven classes. If needed, zoom in to a more detailed level for sample selection.

The figure shows the Sample Editor window, with the five selected spectral parameters over the full range of digital numbers (0-255).

Iteratively classify a SPOT-3 scene


In the previous steps you have prepared an initial sample set for the nearest neighbor classification. Next, you will classify all image objects in the SPOT image. For that we have to add a new process to the rule set in the Process Tree to execute the classification.

Append a new rule set in the Process Tree: right-click | Append New | and name it classification. Click OK. Make sure it is inserted as the last process. Insert a new child process below classification: right-click | Insert Child. Select classification as the algorithm to be used, select Analyses for the Level, and keep the default settings for the other Image Object Domain selections. In the Algorithm Parameters window, select all the land use classes as Active classes and set the use class description value to yes.
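Conceptually, this classification step assigns each image object to the class of its nearest sample in feature space. A minimal stand-in with scikit-learn's 1-nearest-neighbour classifier, using made-up feature vectors:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

sample_features = np.array([[30.0, 25.0, 90.0],   # e.g. a forest sample
                            [80.0, 85.0, 40.0]])  # e.g. a bare-soil sample
sample_labels = ["Coniferous Forest", "Bare Soil"]

clf = KNeighborsClassifier(n_neighbors=1).fit(sample_features, sample_labels)

objects = np.array([[32.0, 27.0, 88.0],           # unclassified image objects
                    [78.0, 80.0, 45.0]])
print(clf.predict(objects))                       # one class label per object
```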

The Process Tree, showing the second classification step, here finished in 3 seconds.

Click Execute and the classified image appears on the screen. If it doesn't appear, change the view to display the classification results. Evaluate the result. In the classification results view there is a slider in the lower right corner to set transparency; see the example in the figure below.

Figure showing snapshots of a classification: left: layer view; centre: 50% transparent in pixel view; right: object mean view.

The results most likely show the need for further improvement. It may happen that some segments have not been classified at all (showing black). Some areas are overestimated for a certain class, other areas may be misclassified, so we need to re-train or re-sample to improve the result and ultimately finalize the classification. These inaccuracies will now be corrected in iterative steps, by assigning typically incorrectly classified image objects as samples in the correct class and then redoing the classification.

Change to the class you want to work with in the active class window and check that Select Samples is still active in the input mode. Be sure to change the view back to the samples with pixels and outlines. To facilitate your work it may be helpful to open a second viewer (window) that shows the classification results and is linked to the first window. You can add the second viewer and link the two windows via Window | Split Horizontally and then Window | Side by Side View. You should now have two linked windows showing the raw image in the first and the classified image in the second window.

You now need to correctly identify erroneously classified polygons and repeat the classification every few steps until you are satisfied with the result. This will take multiple rounds of re-training and reclassifying. Especially for those who have been in this fieldwork area: be critical! If you want to remove a sample, just double-click on it and it will be removed from the Sample Selection Information window. Run a new classification and have a look at the new result. Once you are satisfied with your product, you can do an accuracy assessment of the classification result. Don't forget to regularly save your project!

You may notice that there is spectral confusion especially between the shadow edges north of deciduous forest and water. We will not try to optimize this now. The steps in classification should be clear by now, and it should be clear that the selection of samples is crucial in these steps.

Example of a classification
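The re-training loop described above can be summarized as pseudocode. All four callables are hypothetical placeholders for the manual steps in eCognition; the point is only the shape of the iteration.

```python
def refine(samples, objects, classify, inspect):
    """Iterate: classify, visually inspect, turn errors into new samples."""
    while True:
        labels = classify(samples, objects)   # nearest-neighbour step
        corrections = inspect(labels)         # manual check of the result
        if not corrections:                   # satisfied: stop iterating
            return labels
        samples.extend(corrections)           # misclassified objects -> samples
```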


Classification Accuracy Assessment


Inventories, maps and classifications should be accompanied by an accuracy assessment. How good is the map? Misclassifications can, for example, be related to the interpretation of samples by the specialist, the choice of algorithms and the resolution of the imagery used.

Select Tools | Accuracy Assessment. In the window that pops up, select Best Classification Result as the statistic type. Classified image objects are not simply assigned to one class or not; you also get a detailed list with membership values for each of the classes contained in the class hierarchy. An image object is assigned to the class with the highest membership value, as long as this highest membership value equals at least the minimum membership value. (We won't do this now.) It is important for the quality of a classification result that the highest membership value of an image object is high, indicating that the image object attributes fit at least one of the class descriptions. Using nearest neighbor classification, a high membership value indicates a close spectral distance to one of the given samples.

Click Show Statistics in the Accuracy Assessment window. The statistics are displayed in a matrix. The map representation opens in the second viewer and indicates the assignment value of each object on a color palette ranging from red (low value) to green (high value). Look at your results. If you managed to achieve a lot of green, then you have done a good job. If there is much yellow and/or red, you need to work at reclassifying those polygons until you achieve a greater accuracy. As you can see in the example, the best classification value for most of the objects is high; only a small number of objects have a lower assignment value. This information can be used to check these particular image objects. The class mean values and standard deviations show that only a small number of objects were classified with such low membership values. All in all, the class assignments are significant.

You can save the statistics as a non-delimited Excel file: click Save Statistics to do so, and be sure to note the name of the saved file. To convert the file to a text file, use the Save As function within Excel. In the text file you can insert tab or comma separators and re-import the data into Excel with proper columns. Otherwise, if you do not need to maintain a record of the statistics generated for your research, do not worry about this step.
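To make the Best Classification Result statistic concrete: for each object it is the highest membership value over all classes, and low values flag objects worth re-checking. A toy sketch (the 0.5 review threshold is an arbitrary assumption):

```python
import numpy as np

memberships = np.array([[0.92, 0.10, 0.05],   # rows: objects, cols: classes
                        [0.40, 0.38, 0.35],
                        [0.07, 0.88, 0.12]])

best = memberships.max(axis=1)            # best classification value per object
assigned = memberships.argmax(axis=1)     # index of the winning class
needs_review = np.where(best < 0.5)[0]    # the "red" objects on the map
print(assigned, best, needs_review)
```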

Example of a Best Classification Result

Exporting the results

You may want to use the polygons and/or raster layers you have created in eCognition in other packages such as ArcMap or ERDAS Imagine. To do so you can export the data file. Choose Export | Export Result. In the new window, select Raster under Export Type, select Classification under Content Type, choose Erdas Imagine Images (*.img) as Format and give a name. Under the Select Classes button, select all classes from the Available classes. Press OK and Export.

Save the file in your directory and open it in ArcMap. In ArcMap, prepare a map in the layout view with the seven classes, design and apply a color palette and add the legend, a north arrow and a title. Then export the map from ArcGIS as a .pdf file at 150 dpi and save it. Do not forget to add your name as text. An example of such a result is given to the left. In the same way you may export polygons of objects, including the attributes you used in this exercise. Try it!
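Outside eCognition, a classified array can also be written to an ERDAS Imagine file directly, for example with rasterio (GDAL's HFA driver). This is a sketch: the georeferencing, the EPSG code and the file name are placeholders you would take from the actual SPOT scene.

```python
import numpy as np
import rasterio
from rasterio.transform import from_origin

classes = np.random.randint(0, 7, size=(500, 500), dtype=np.uint8)  # toy result
transform = from_origin(60000.0, 90000.0, 20.0, 20.0)  # placeholder georef

with rasterio.open("luxembourg_lulc.img", "w", driver="HFA",
                   height=classes.shape[0], width=classes.shape[1],
                   count=1, dtype="uint8", crs="EPSG:2169",
                   transform=transform) as dst:
    dst.write(classes, 1)  # band 1 holds the class codes
```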

This ends the short introduction to object-based classification using eCognition Developer.

