
Classification of LandSat 8 Image

Nischit Prasad Nhuchhe Pradhan


9464121996
Team ID: RE1059
PEC University Of Technology, Chandigarh

Abstract
This paper discusses an image-classification algorithm that uses the Blue, Green, Red
and Near Infrared bands. It is a per-pixel classifier that works by finding the minimum and
maximum value range of each band for every class and using these ranges to classify the image.

1. Introduction
Classification is the process of identifying, differentiating and categorizing data. Classifying objects helps
in understanding their nature and the similarities and differences between them. In the digital world, any
process involving images requires highly complex algorithms. Furthermore, to classify an image, the
computer first needs to understand what an image is and how adjacent pixels in the image relate to each other
to form an object. In the case of satellite imagery, classification refers to the process of allocating
every pixel in the digital image to a particular class such as grass, water, rock or soil, resulting in a well-labeled
image.
Every image is made up of three basic bands: red, green and blue. Together they form a multi-spectral image, or
a colored image. Each band is stored in grey scale, where every pixel holds a discrete magnitude that represents the
reflectance value in digital form. A satellite image consists of many different bands beyond the basic RGB
bands. Each band carries a particular set of data that can be used to classify the RGB image into more
informative and resourceful images, for example to produce a heat map of the Earth's surface.

Many different approaches can be taken to classification, from making probable assumptions to
elaborate mathematical and statistical analysis. All classification algorithms are based on the assumption that
the image in question depicts one or more features and that each of these features belongs to one of several
distinct and exclusive classes. Once a statistical characterization has been achieved for each information class,
the image is classified by examining the reflectance of each pixel and deciding which of
the signatures it resembles most. [1]

Image
An image is the visual representation of data stored electronically in a computer. According to the definition stated in
Wikipedia,

An image is an artifact that depicts visual perception, for example a two-dimensional picture that has a
similar appearance to some subject, usually a physical object or a person, thus providing a depiction of
it. [2]

Images are captured by digital cameras and stored as .raw files. A raw file records the scene as seen by
the camera. .raw files have a fixed structure: they contain minimally processed data holding all the information
needed to digitally produce the image after processing.

Digital Camera
Light is electromagnetic radiation that falls in the visible part of the electromagnetic spectrum, i.e. 400-700 nm. When
light, or in fact any electromagnetic radiation, falls on an object, parts of it are absorbed, transmitted and reflected.
The reflected waves are captured by a digital receptor and stored as data that are processed to obtain
information. The normal photographs we see on our computer screens are all produced from visible
waves alone.
Satellite Imagery
A satellite is equipped with various receptors that receive and store data, quite similar to how sonar works: in
sonar, the ship emits a pulse and receives the reflected wave. In the case of satellite imagery, the rays
are produced by the Sun and the reflected waves are captured by the satellite's receptors. Each recorded value
is stored in grey scale according to the intensity of the reflected ray (black for no reflection, white for
maximum reflection). A multi-spectral (color) image can thus be created by combining the three visible-band
datasets, i.e. the Blue, Green and Red bands.
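As a minimal sketch of this combining step, the three grey-scale band grids can be zipped pixel by pixel into one RGB image. The band grids and pixel values below are invented illustrative data, not taken from an actual scene:

```python
# Combine three grey-scale bands into one multi-spectral (RGB) image.
# Each band is a 2-D grid of digital reflectance values in 0..255.

def combine_bands(red, green, blue):
    """Stack per-band grey-scale grids into a grid of (R, G, B) tuples."""
    return [
        [(r, g, b) for r, g, b in zip(red_row, green_row, blue_row)]
        for red_row, green_row, blue_row in zip(red, green, blue)
    ]

# A tiny 2x2 example scene (made-up values).
red_band   = [[ 10, 200], [ 30,  90]]
green_band = [[ 40, 180], [120,  80]]
blue_band  = [[200, 170], [ 60,  70]]

rgb = combine_bands(red_band, green_band, blue_band)
print(rgb[0][0])   # the top-left pixel as an (R, G, B) tuple
```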

2. Classification
There are many different approaches in classifying an image like,

1. Per-pixel methods
2. Sub-pixel methods
3. Object-based methods

In this work the per-pixel method has been used, given the 30 m x 30 m resolution of the image.

Various algorithms are available for classifying an image. Some of them are given below.

1. Nearest neighbor Classification

The nearest neighbor algorithm uses a statistical approach to classify an unclassified pixel. First, all of the
known pixels are classified. Then the Euclidean distance is calculated from the unclassified pixel to every
classified pixel within a radius. The pixel is assigned the class of the closest classified pixel.
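The rule above can be sketched in a few lines; the band vectors and class labels here are invented for illustration (reflectances in 0..1), and the radius check is omitted for brevity:

```python
import math

def nearest_neighbor(pixel, labeled):
    """labeled: list of (band_vector, class_name); return the closest class."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(labeled, key=lambda item: dist(pixel, item[0]))[1]

# Known pixels: (blue, green, red, NIR) reflectances and their classes.
known = [
    ((0.30, 0.08, 0.05, 0.04), "water"),
    ((0.05, 0.20, 0.06, 0.45), "vegetation"),
    ((0.25, 0.22, 0.24, 0.28), "urban"),
]

print(nearest_neighbor((0.28, 0.09, 0.06, 0.05), known))
```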

2. Support Vector Machine Classification

Support Vector Machine (SVM) is a binary classifier in which the image is divided into two groups by
drawing an optimal hyperplane. A hyperplane is considered optimal when the distance between it and
the two groups is maximum.
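Once such a hyperplane w.x + b = 0 has been found, classification reduces to checking which side of it a sample lies on. The weights and samples below are invented; a real SVM would learn w and b from training pixels:

```python
# Hyperplane decision rule: assign a sample to a group by the sign of w.x + b.

def hyperplane_side(w, b, x):
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return "group A" if score >= 0 else "group B"

w, b = (1.0, -1.0), 0.0        # hypothetical separating hyperplane
print(hyperplane_side(w, b, (0.8, 0.2)))   # positive side
print(hyperplane_side(w, b, (0.1, 0.9)))   # negative side
```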

3. Artificial Neural Network based Classification

ANN is a computational model inspired by biological neural networks. It can be considered a weighted
directed graph in which the nodes are neurons and the weighted edges are the connections among neurons. It has a
learning phase in which the network is iteratively adjusted with the help of training samples: an input is fed
through the network to produce an output, and the error in the output is used to modify the weights of the ANN. [3]
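A minimal single-neuron sketch of that learning step (the delta rule) is shown below; the learning rate, weights and training sample are illustrative values, not the paper's actual network:

```python
# One training step of a single linear neuron: compute the output, take the
# error against the target, and nudge each weight by lr * error * input.

def train_step(weights, inputs, target, lr=0.1):
    output = sum(w * x for w, x in zip(weights, inputs))
    error = target - output
    return [w + lr * error * x for w, x in zip(weights, inputs)]

w = [0.0, 0.0]
w = train_step(w, inputs=[1.0, 2.0], target=1.0)
print(w)
```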

3. Theory
Basic classification of objects can be done by analyzing the reflectance value of the object in different bands.
Higher values indicate greater reflection and lower values indicate greater absorption. In the digital system, the
bands are stored in grey scale with pixel values ranging from 0 to 255.
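The algorithm described later works with band values normalized to the range 0 to 1 rather than the raw 0-255 digital numbers; for an 8-bit band that normalization is a simple division (shown on made-up pixel values):

```python
# Scale 8-bit grey-scale digital numbers (0..255) to reflectance-like 0..1.

def normalize(band):
    return [value / 255 for value in band]

print(normalize([0, 51, 255]))
```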

When an object is seen, it is because some rays are reflected from the surface of the object
and then received by the eyes. Different objects reflect rays differently: oceans appear blue and trees
appear green because they reflect blue and green rays, respectively, to a much greater extent. How, then, can a
green bottle be distinguished from a leaf? Detailed studies have found that trees strongly reflect NIR rays along
with green, and absorb red and blue rays; this property is used to distinguish vegetation from other
objects. Similarly, water bodies strongly absorb every band except the blue band, hence they appear blue. This type
of classification requires deeper knowledge of molecular structure and composition, as energy is
absorbed, transmitted and reflected at the atomic level.

A more mathematical approach to classification uses the Normalized Difference Vegetation Index (NDVI). NDVI is
an index ranging from -1 to 1 whose value indicates the probability of finding vegetation at the given location,
with vegetation density proportional to the value. Higher values give a higher probability of finding vegetation:
values less than 0.2 mean there is no vegetation, values in the range (0.2, 0.5) indicate minor
vegetation, and values greater than 0.5 indicate dense vegetation (mostly forest, or an agricultural
field during a good harvest).
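NDVI is computed per pixel as (NIR - RED) / (NIR + RED); combined with the thresholds above, this gives a simple vegetation classifier. The sample reflectances below (in 0..1) are invented for illustration:

```python
# Per-pixel NDVI and the vegetation buckets from the text.

def ndvi(nir, red):
    return (nir - red) / (nir + red)

def vegetation_class(index):
    if index < 0.2:
        return "no vegetation"
    if index <= 0.5:
        return "minor vegetation"
    return "dense vegetation"

value = ndvi(nir=0.5, red=0.1)
print(round(value, 3), vegetation_class(value))
```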

4. Algorithm
Initial Approach
The initial approach to classifying the satellite image was to set selective cutoffs for each band according to its
properties, i.e. using the NDVI for vegetation, setting a high cutoff on the Blue band for water bodies (they reflect
more in blue), and setting high cutoffs on the Blue and NIR bands for urban areas. The values of each band
were normalized at the start to a range from 0 to 1. This approach gives a somewhat expected result, but it
cannot be generalized to all cases, as the cutoffs varied from image to image.

If (NIR < 0.2) AND (BLUE > 0.2) AND (GREEN < 0.1) AND (RED < 0.15)
    The pixel is water.          // highly absorbs all bands except blue
Else If (NIR > 0.08) AND (RED > 0.1) AND (BLUE > 0.03) AND (GREEN > 0.06)
    The pixel is snow or cloud.  // highly reflects all bands
Else If (NIR > 0.2) AND (BLUE > 0.029)
    The pixel is urban area.     // reflects more of the blue and near-infrared bands
Else If (NIR > 0.12) AND (RED > 0.15)
    The pixel is vegetation.     // highly reflects NIR and red bands

The cutoffs for the bands were taken from research papers as well as calculated and verified by trial
and error. Since the values are hard-coded into the program, they limit the versatility of the
program, and hence this does not seem to be a valid general approach.
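A direct Python translation of those cutoff rules is sketched below; band values are assumed normalized to 0..1 as in the paper, and the thresholds are the paper's own hard-coded values, which, as noted, do not generalize across images:

```python
# Rule-based per-pixel classifier using the paper's hard-coded cutoffs.

def classify_pixel(nir, blue, green, red):
    if nir < 0.2 and blue > 0.2 and green < 0.1 and red < 0.15:
        return "water"          # highly absorbs all bands except blue
    if nir > 0.08 and red > 0.1 and blue > 0.03 and green > 0.06:
        return "snow or cloud"  # highly reflects all bands
    if nir > 0.2 and blue > 0.029:
        return "urban area"     # reflects more blue and near-infrared
    if nir > 0.12 and red > 0.15:
        return "vegetation"
    return "unclassified"

print(classify_pixel(nir=0.05, blue=0.30, green=0.05, red=0.10))
```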

Figure 1 Classification according to reflectance characteristic


Blue: water, Green: vegetation, Orange: man-made structures

Current Approach
Since a training index set has been provided, the program uses the provided '.dat' file to obtain the
training indexes. Each training index consists of the minimum and maximum coordinates for a class, which
when plotted create a square of 100 pixels for each available class. In total, 7 indexes were given, one for
each of the 7 classes. However, when all of the given indexes were plotted, it was seen that different indexes
pointed to the same class instead of each index pointing to a different class. Hence a separate input
window was made to address the issue, which takes user-provided training indexes.
The program goes through each provided training index and calculates the minimum and
maximum range of each band (Blue, Green, Red, Near Infrared) for each class. This step is called
training the program. After the program has been trained, it sequentially goes through each unclassified
pixel and classifies it according to whichever class definition fits it best.
PIXEL = unclassified pixel
BLUE  = blue band value of PIXEL
GREEN = green band value of PIXEL
RED   = red band value of PIXEL
NIR   = near infrared band value of PIXEL
For each class k in the set of classes S
    If BLUE is in the blue range of S[k]
        If GREEN is in the green range of S[k]
            If RED is in the red range of S[k]
                If NIR is in the NIR range of S[k]
                    PIXEL belongs to class S[k]
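The train-then-classify procedure described above can be sketched as follows; the sample pixels are invented (blue, green, red, NIR) reflectances in 0..1, and the pixel is assigned to the first class whose four band ranges all contain it:

```python
# Min-max range classifier: training records, per class, the minimum and
# maximum of each band over its sample pixels; classification checks each
# band of an unclassified pixel against those ranges.

def train(samples_by_class):
    """Map class name -> list of (min, max) per band."""
    ranges = {}
    for name, pixels in samples_by_class.items():
        ranges[name] = [(min(band), max(band)) for band in zip(*pixels)]
    return ranges

def classify(pixel, ranges):
    for name, band_ranges in ranges.items():
        if all(lo <= v <= hi for v, (lo, hi) in zip(pixel, band_ranges)):
            return name
    return "unclassified"

training = {
    "water":      [(0.28, 0.07, 0.05, 0.03), (0.34, 0.10, 0.07, 0.06)],
    "vegetation": [(0.04, 0.18, 0.05, 0.40), (0.06, 0.24, 0.08, 0.52)],
}
model = train(training)
print(classify((0.30, 0.08, 0.06, 0.05), model))
```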

Figure 2 Plotting the 7-class training index provided in the .dat file
Figure 3 Training index provided by the user for the water class

5. Methodology
1. The contrast of every band was enhanced to obtain a clearer, more evenly leveled image.

2. The visible bands were combined to obtain a multi-spectral image (color image).

3. The user inputs training samples for every class by drawing squares. The training samples are taken from
different parts of the RGB image; for example, samples of water are taken from
different water bodies (rivers, lakes, ponds). The program is trained with these training samples.

4. The image is classified pixel by pixel.

6. Result
The algorithm is fairly accurate in classifying general objects in the image such as forest, small
vegetation, water bodies, soil and urban areas/man-made structures. Further improvements can be made
to obtain better results. The main source of error is inaccuracy in entering the training samples.
Figure 4 Classified satellite image

7. Acknowledgement
I would like to express my sincere gratitude to Prof. Kavita Mitkari for providing reference materials and guidance
throughout the project.

8. References
[1] Himani Raina, Omais Shafi, "Analysis of Supervised Classification Algorithms", International Journal of
Scientific & Technology Research, Volume 4, Issue 09, September 2015.

[2] https://en.wikipedia.org/wiki/Normalized_Difference_Vegetation_Index

[3] Debabrata Ghosh, Naima Kaabouch, "A Survey on Remote Sensing Scene Classification Algorithms".

9. Links
http://m.earthobservatory.nasa.gov/Features/MeasuringVegetation/measuring_vegetation_2.php

https://www.mapbox.com/blog/putting-landsat-8-bands-to-work/

http://m.earthobservatory.nasa.gov/Features/Lights2/lights_soil3.php
