Basic Concepts

TABLE OF CONTENTS
1. Image and Image Layer
2. Scene
3. Project
4. Workspace
5. Image Object
6. Image Object Level
7. Feature
8. Class and Classification
9. Image Object Hierarchy
10. Definiens Application
11. Solution
12. Action
13. Ruleware and Rule Sets
14. Process
    1. Algorithm
    2. Image Object Domain

Image and Image Layer


An image is a set of raster image data. An image consists of at least one image layer based on pixels. Each image layer represents a type of information. The most common image layers are the Red, Green, and Blue (RGB) image layers, but there are other image layer types, such as Near Infrared (NIR) or elevation data used in remote sensing. An image is stored in .tif, .img, .pix, or another raster file format. Within Definiens Developer, images are represented by scenes.
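The layer structure can be pictured as a stack of equally sized rasters, one per type of information. As a rough illustration only (hypothetical Python with NumPy, not Definiens code; the layer names are examples), a multi-layer image could be modelled like this:

```python
import numpy as np

# Hypothetical illustration: an "image" as a dictionary of equally sized
# 2-D rasters, one per image layer (band).
height, width = 512, 512
image_layers = {
    "Red":   np.zeros((height, width), dtype=np.uint8),
    "Green": np.zeros((height, width), dtype=np.uint8),
    "Blue":  np.zeros((height, width), dtype=np.uint8),
    "NIR":   np.zeros((height, width), dtype=np.uint16),  # e.g. near infrared
}

# Every layer covers the same pixel grid, so a single pixel position
# yields one value per layer.
pixel_values = {name: layer[100, 200] for name, layer in image_layers.items()}
print(pixel_values)
```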

Scene
Definiens Developer loads image data as scenes; each scene usually represents one image. A scene consists of one or more image layers or channels of one image file. When working with a combination of different data, a scene usually contains several image files with multiple image layers and, optionally, thematic layers. A scene can include additional information related to the image content, such as metadata, geocoding, or geo information. Depending on the image reader or camera, a scene combines multiple views of the same piece of reality, each of them in a separate layer. Put simply, a scene can include several images of the same thing, each of them providing different information. Each scene representing one set of data used in Definiens software is managed in a project.
Thus, scenes are the combined input image data for both projects and Definiens workspaces.
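Conceptually, a scene is therefore little more than a named bundle of layers plus descriptive information. The following sketch (hypothetical Python, not the Definiens data model; all field names are assumptions) makes that grouping explicit:

```python
from dataclasses import dataclass, field

@dataclass
class Scene:
    """Illustrative stand-in for a scene: several views of the same
    piece of reality, each stored as a separate layer."""
    name: str
    image_layers: dict = field(default_factory=dict)     # e.g. {"Red": "red.tif", "NIR": "nir.tif"}
    thematic_layers: dict = field(default_factory=dict)  # optional thematic/vector data
    metadata: dict = field(default_factory=dict)         # e.g. geocoding, acquisition date

scene = Scene(
    name="landsat_tile_42",
    image_layers={"Red": "red.tif", "Green": "green.tif", "NIR": "nir.tif"},
    metadata={"epsg": 32633, "acquired": "2013-12-14"},
)
```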

Project
In Definiens software, a project manages a scene. A project is a wrapper for all information related to a particular scene, which is the input image data. It stores references to at least one image layer and related result information from image analysis, expressed by classified image objects. In addition, a project contains metadata like layer aliases and unit information. Optionally, a project can enclose thematic layers. During creation of a project, a scene is referenced into the project. Image analysis extracts result information from a scene and adds it to the project; this information is expressed in classified image objects. When viewing analyzed projects, you can investigate both the input scene and the classification of image objects representing a result. A project is saved as a .dpr project file.

Workspace
A workspace is a container for projects, saved as a .dpj file. A workspace file contains image data references, projects, exported result values, and references to the used ruleware. Furthermore, it comprises the import and export templates, result states, and metadata. In the Workspace window, you administer the workspace files. Here you manage all relevant data of your image analysis tasks.

Figure: Workspace window with Summary and Export Specification and drop-down view menu.
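The containment relationship described above (a workspace holds many projects, and each project wraps one scene plus the results derived from it) can be sketched as follows. This is illustrative Python only; the class and field names are assumptions, not the actual .dpj/.dpr structure:

```python
from dataclasses import dataclass, field

@dataclass
class Project:
    """Wrapper for one scene plus the results derived from it."""
    scene_name: str                               # reference to the input scene
    classified_objects: list = field(default_factory=list)
    layer_aliases: dict = field(default_factory=dict)

@dataclass
class Workspace:
    """Container administering many projects of one analysis task."""
    name: str
    projects: list = field(default_factory=list)
    ruleware_reference: str = ""                  # rule set or solution used

ws = Workspace(name="forest_mapping", ruleware_reference="forest_ruleset")
ws.projects.append(Project(scene_name="landsat_tile_42"))
print(len(ws.projects))
```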

Image Object
During image analysis, a scene (representing an image) is split into image objects. An image object is a group of connected pixels in a scene. Each image object represents a definite region in an image, and image objects can provide information about this region. Every image object is linked to its neighbors. Together, the image objects form a network, which enables access to the context of each image object.

Figure: Image Objects in a Landsat Image
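As a rough illustration of the idea that an image object is a connected group of pixels that knows its neighbors, consider the following hypothetical Python sketch (not the Definiens implementation; all names are assumptions):

```python
from dataclasses import dataclass, field

@dataclass
class ImageObject:
    """A connected group of pixels plus links into the object network."""
    object_id: int
    pixels: set = field(default_factory=set)     # {(row, col), ...}, one connected region
    neighbors: set = field(default_factory=set)  # ids of adjacent image objects
    class_name: str = ""                         # filled in later by classification

    def area(self) -> int:
        # Simplest possible object feature: number of pixels in the region.
        return len(self.pixels)

water = ImageObject(object_id=1, pixels={(0, 0), (0, 1), (1, 0)})
shore = ImageObject(object_id=2, pixels={(1, 1), (1, 2)})
water.neighbors.add(shore.object_id)
shore.neighbors.add(water.object_id)
print(water.area(), sorted(water.neighbors))
```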

Image Object Level


A scene, representing an image, is segmented into image objects during the process of image analysis. Image objects are organized into image object levels. An image object level serves as an internal working area for the image analysis.

Figure: Each image object level consists of a layer of image objects. During image analysis, multiple image object levels can be created and layered above the basic pixel level. Two or more image object levels build an image object hierarchy.

Keep in mind the important difference between image object levels and image layers. Image layers represent the data already existing in the image when it is first imported. In contrast, image object levels store image objects, which represent the data. Thus, they serve as internal working areas.


An image object level can be created by segmentation from the underlying pixel level or from an existing image object level. In addition, you can create an image object level by duplicating an existing image object level. Image objects are stored in image object levels. Image object-related operations like classification, reshaping, and information extraction are done within image object levels. Thus, image object levels serve as internal working areas of the image analysis. Every image object is linked to its neighbors. Together, the image objects form a cognition network, which enables access to the context of each image object.
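A minimal sketch of how levels might be managed, assuming the hypothetical ImageObject class from the previous example (illustrative Python, not Definiens CNL; the segmentation function is only a placeholder):

```python
import copy

class ImageObjectLevel:
    """One 'shelf' in the image object hierarchy: a set of image objects
    covering the scene at one granularity."""
    def __init__(self, name, objects):
        self.name = name
        self.objects = list(objects)

    def duplicate(self, new_name):
        # Create a new level by copying an existing one, e.g. as a working
        # copy for further reshaping or classification.
        return ImageObjectLevel(new_name, copy.deepcopy(self.objects))

def segment_pixel_level(image_layers):
    """Placeholder for a segmentation algorithm that groups pixels into
    connected image objects and returns them as a new, fine level."""
    raise NotImplementedError("segmentation strategy goes here")

# Typical flow: segment once, then duplicate the result as a working level.
# level1 = ImageObjectLevel("Level 1", segment_pixel_level(image_layers))
# level2 = level1.duplicate("Level 2")
```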

Feature
In Definiens software, a feature is an attribute that represents certain information concerning objects of interest, for example measurements, attached data, or values. There are two major types of features: Image Object features are related to image objects; object features describe spectral, form, hierarchical, or other properties of an image object, for example its Area. Global features are not related to an individual image object, for example the Number of classified image objects of a certain class.

Image Object Features
Since regions in the image provide much more information than single pixels, there are many different image object features for measuring the color, shape, and texture of the associated regions. Even more information can be extracted by taking the network structure and the classification of the image objects into account. Important examples of this type of feature are the Rel. border to neighboring objects of a given class and the Number of subobjects of a given class.

Global Features
Global features describe the current network situation in general. Examples are the Mean value of a given image layer or the Number of levels in the image object hierarchy. Global features may also represent metadata as an additional part of the input data. For example, the type of tissue in a toxicity screen might be expressed via metadata and thus incorporated into the analysis.
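To make the distinction concrete, here is a hypothetical sketch of one object feature and one global feature, written against the illustrative ImageObject class from the earlier example (function names and the simplification are assumptions, not the Definiens feature definitions; in particular, the real Rel. border feature uses shared border length, whereas this sketch counts neighbors only to stay short):

```python
def rel_border_to_class(obj, all_objects, target_class):
    """Object feature (simplified): fraction of an object's neighbors
    that belong to a given class."""
    if not obj.neighbors:
        return 0.0
    by_id = {o.object_id: o for o in all_objects}
    hits = sum(1 for nid in obj.neighbors if by_id[nid].class_name == target_class)
    return hits / len(obj.neighbors)

def number_of_classified_objects(all_objects, target_class):
    """Global feature: how many image objects in the whole network carry
    a given class label."""
    return sum(1 for o in all_objects if o.class_name == target_class)
```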


Figure: Features in the Feature View window.

For detailed information on features, you can visit the Features Reference article of the Reference Book section.

Class and Classification


A class is a category of image objects. It can be used either simply to label image objects or to describe their semantic meaning. Classification is a procedure that associates image objects with an appropriate class, labeled by a name and a color.

Figure: Legend window listing classes.

Information contained in image objects is used as a filter for classification. Based on this, image objects can be analyzed according to defined criteria and assigned to the classes that best meet these criteria. Classes can be grouped in a hierarchical manner, allowing their defining class descriptions to be passed down to child classes via the inheritance hierarchy. Classes form a structured network, called the class hierarchy.

Figure: Sample Class Hierarchy window.

Through the process of classification, each image object is assigned to a certain class, or to no class, and thus connected with the class hierarchy. The result of the classification is a network of classified image objects with concrete attributes, concrete relations to each other, and concrete relations to the classes in the class hierarchy.
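A toy sketch of the idea that classes form a hierarchy and pass their defining descriptions down to child classes (hypothetical Python, not the Definiens class description language; mean_brightness and ndwi stand for arbitrary object features and are assumptions):

```python
class ObjectClass:
    """A class in the class hierarchy. Conditions inherited from parent
    classes are combined with the class's own condition."""
    def __init__(self, name, condition=None, parent=None):
        self.name = name
        self.condition = condition or (lambda obj: True)
        self.parent = parent

    def matches(self, obj):
        inherited = self.parent.matches(obj) if self.parent else True
        return inherited and self.condition(obj)

# Example hierarchy: "Water" inherits the darkness condition from "Dark objects".
dark = ObjectClass("Dark objects", condition=lambda o: o.mean_brightness < 60)
water = ObjectClass("Water", condition=lambda o: o.ndwi > 0.3, parent=dark)

def classify(obj, classes):
    # Assign the first class whose (inherited) description the object meets,
    # or leave it unclassified.
    for cls in classes:
        if cls.matches(obj):
            obj.class_name = cls.name
            return
    obj.class_name = ""
```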

Image Object Hierarchy


During image analysis, multiple image object levels can be created and layered above the basic pixel level. Two or more image object levels build the image object hierarchy. Put simply, the image object hierarchy serves as a storage rack, with the image object levels as the different shelves storing the image objects. Thus, the image object hierarchy provides the working environment for the extraction of image information. The entirety of image objects is organized into a hierarchical network of image objects; such a network is called an image object hierarchy. It consists of one or more image object levels, from fine resolution on the lowest image object level to coarse resolution on the highest image object level. Image objects within an image object level are linked horizontally. Similarly, image objects are linked vertically across the image object hierarchy. The image objects are networked in such a manner that each image object knows its context: its neighbors, its superobject on a higher image object level, and its subobjects on a lower image object level.


Figure: Within the image object hierarchy, each image object is linked to its neighbors, its superobject, and its subobjects.

To ensure definite relations between image object levels, no image object may have more than one superobject, but it can have multiple subobjects. The border of a superobject is consistent with the borders of its subobjects.

Figure: Three image object levels of a sample image.
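Extending the earlier ImageObject sketch with vertical links illustrates the rule that every object has at most one superobject but may have several subobjects, and that borders stay consistent across levels (again hypothetical Python, not the Definiens data model):

```python
class HierarchicalObject:
    """Image object with vertical links into the image object hierarchy."""
    def __init__(self, object_id, pixels):
        self.object_id = object_id
        self.pixels = set(pixels)
        self.superobject = None      # at most one superobject
        self.subobjects = []         # any number of subobjects

def attach(superobj, subobj):
    # Enforce the single-superobject rule and keep borders consistent:
    # the superobject's pixel set is the union of its subobjects' pixels.
    assert subobj.superobject is None, "an object may have only one superobject"
    subobj.superobject = superobj
    superobj.subobjects.append(subobj)
    superobj.pixels |= subobj.pixels

coarse = HierarchicalObject(10, set())           # superobject built from its subobjects
fine_a = HierarchicalObject(11, {(0, 0), (0, 1)})
fine_b = HierarchicalObject(12, {(1, 0), (1, 1)})
attach(coarse, fine_a)
attach(coarse, fine_b)
print(len(coarse.subobjects), len(coarse.pixels))
```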

Definiens Application
A Definiens application can extend each of the Definiens Enterprise Image Intelligence clients by providing industry- or user-specific ruleware and functions. Definiens applications enlarge the capabilities of both the Definiens Enterprise Image Intelligence (EII) clients (Definiens Developer and Architect) and the processing environment Definiens eCognition Server. Definiens applications enable users of clients such as Definiens Architect to create ready-to-use solutions for their specific image analysis problems. Running with one of the Definiens Image Intelligence clients, an application complements the client functionalities with the particular ruleware, controls, and workflows needed for industry- or user-specific tasks.

Example: If you start a Definiens client like Developer or Architect together with, for example, an ABC application, you can use specific ABC functionalities for your particular ABC tasks. If you start a client with another XYZ application, you benefit from other specific XYZ functionalities needed for your particular XYZ tasks. Definiens offers a range of applications. They are optional and licensed through application-specific licenses.

Solution
A solution is designed by Definiens Architect users as a ready-to-use image analysis solution that solves a specific image analysis problem. A solution provides an image analysis rule set configured for a specific type of image data. A solution is assembled from predefined building blocks called actions.

Action
An action represents a predefined building block of an image analysis solution. Configured actions can perform different tasks like object detection, classification, or export of results to file. Actions are sequenced, and together they represent a ready-to-use solution accomplishing the image analysis task. A configured action consists of a set of processes with defined parameters. Standard action definitions, which are simply unconfigured actions, are provided in action libraries. Special task actions can be designed according to specific needs using Definiens Developer.

Ruleware and Rule Sets


Definiens Developer provides a development environment for creating ruleware based on the Definiens Cognition Network Language (CNL). Ruleware is a piece of software applicable to defined image analysis tasks, such as a rule set, an action, or a solution. Rule sets represent the code of ruleware, assembling a set of functions. Based on rule sets, you can create actions, which are packaged modules of rule sets with a user interface for the configuration of parameters. You can assemble actions in action libraries and provide them to non-developer users for easy creation of solutions for defined image analysis tasks. Non-developer users configure and assemble actions and save them as a solution, ready to use for processing image data.

Process
Definiens Developer provides an artificial language for developing advanced image analysis algorithms. These algorithms use the principles of object-oriented image analysis and local adaptive processing. This is achieved by processes. A single process is the elementary unit of a rule set providing a solution to a specific image analysis task. Processes are the main working tools for developing rule sets.

In Definiens Developer, the term Process is used for both a single process and a process sequence.

The main functional parts of a single process are the algorithm and the image object domain. A single process allows the application of a specific algorithm to a specific region of interest in the image. All conditions for classification, as well as the selection of the region of interest, may incorporate semantic information. Processes may have an arbitrary number of child processes. The resulting process hierarchy defines the structure and flow control of the image analysis. Arranging processes containing different types of algorithms allows the user to build a sequential image analysis routine.

Figure: Process Tree window.
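The structure described above, where a single process bundles an algorithm with an image object domain and may carry child processes that define the flow control, could be sketched like this. The names are illustrative Python, not CNL syntax:

```python
class Process:
    """One node in the process tree: an algorithm, the domain it runs on,
    and an ordered list of child processes."""
    def __init__(self, algorithm, domain, children=None):
        self.algorithm = algorithm   # callable applied to each image object
        self.domain = domain         # callable selecting the objects of interest
        self.children = children or []

    def execute(self, all_objects):
        # Apply the algorithm to every object the domain selects ...
        for obj in self.domain(all_objects):
            self.algorithm(obj)
        # ... then run the child processes, which defines the structure
        # and flow control of the image analysis routine.
        for child in self.children:
            child.execute(all_objects)
```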

To learn how to create a process, see Create a Process.

Algorithm
The algorithm defines the operation the process will perform. This can be generating image objects, merging or splitting image objects, classifying objects, and so on. The two main functions of algorithms are generating or modifying image objects and classifying image objects. In addition to these, a set of other algorithms helps to define all necessary operations to set up an image analysis routine. The following functional categories of algorithms exist:
- Process-related operation algorithms
- Segmentation algorithms
- Basic classification algorithms
- Advanced classification algorithms
- Variables operation algorithms
- Reshaping algorithms
- Level operation algorithms
- Interactive operation algorithms
- Sample operation algorithms
- Image layer operation algorithms
- Thematic layer operation algorithms
- Export algorithms
- Workspace automation algorithms

For detailed information on algorithms, you can visit the Algorithms Reference article of the Reference Book section.

Image Object Domain


The image object domain describes the region of interest in the image object hierarchy where the algorithm of the process will be executed. The image object domain is defined by a structural description of the corresponding subset. Examples of image object domains are the entire image, an image object level, or all image objects of a given class.

Figure: Workflow of a process sequence.

By applying the usual set operators to the basic image object domains, many different image object domains can be generated. The process then loops over the set of image objects in the image object domain and applies the algorithm to every single image object. Image object domains may also be defined relative to the current image object of the parent process, for example the subobjects or the neighboring image objects of the parent process object (PPO).
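Continuing the sketches above, an image object domain can be pictured as a filter over the object network; the process then loops over the selected objects and applies its algorithm to each one. The following is a hedged illustration in Python with assumed names, not Definiens syntax:

```python
def domain_by_class(level_objects, class_name):
    """Basic domain: all image objects of a given class on one level."""
    return [o for o in level_objects if o.class_name == class_name]

def domain_subobjects_of_ppo(parent_process_object):
    """Relative domain: the subobjects of the current parent process object."""
    return list(parent_process_object.subobjects)

# A process applies its algorithm to every object its domain yields, e.g.
# (merge_with_neighbors is a placeholder for some object-modifying algorithm):
# merge_water = Process(algorithm=merge_with_neighbors,
#                       domain=lambda objs: domain_by_class(objs, "Water"))
```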

