Accuracy Assessment
Accuracy
The degree (often expressed as a percentage) of correspondence between observation and reality.
We usually judge accuracy against existing maps, large-scale aerial photos, or field checks
(information used as reference and considered "truth").
Error Matrix
A square matrix with the number of rows and columns equal to the number of categories
whose classification accuracy is being assessed.
The error matrix stems from classifying the sampled training-set pixels and listing the known cover
types used for training (columns) versus the pixels actually assigned to each land cover category
by the classifier (rows). A minimal construction sketch follows.
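As an illustration (not from the slides; the sample label lists and function name are hypothetical), a minimal Python sketch that builds an error matrix from paired reference and classified labels:

import numpy as np

def error_matrix(reference, classified, n_classes):
    # Rows = category assigned by the classifier, columns = known ("true") cover type
    m = np.zeros((n_classes, n_classes), dtype=int)
    for ref, cls in zip(reference, classified):
        m[cls, ref] += 1
    return m

# Hypothetical two-class sample of ten pixels
reference  = [0, 0, 0, 1, 1, 1, 0, 1, 1, 0]
classified = [0, 0, 1, 1, 1, 1, 0, 0, 1, 0]
print(error_matrix(reference, classified, n_classes=2))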
Major diagonal of the matrix: the training-set pixels that are classified into the proper
land cover categories.
Errors
Omission Error (related to "producer accuracy")
• Error of exclusion
• Pixels are not assigned to their appropriate class, i.e. pixels are omitted from the actual class they belong to
• Corresponds to the non-diagonal column elements
Commission Error (related to "user accuracy")
• Error of inclusion
• A pixel is assigned to a class to which it does not belong, i.e. pixels are improperly included in that category
• Corresponds to the non-diagonal row elements
(A sketch computing both error rates per class follows.)
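Illustratively (function name and example matrix are mine, not from the slides), the per-class omission rate is 1 minus producer accuracy, and the per-class commission rate is 1 minus user accuracy:

import numpy as np

def omission_commission(m):
    # Error matrix orientation: rows = classified category, columns = reference category
    m = np.asarray(m, dtype=float)
    diag = np.diag(m)
    omission = 1.0 - diag / m.sum(axis=0)    # 1 - producer accuracy (column totals)
    commission = 1.0 - diag / m.sum(axis=1)  # 1 - user accuracy (row totals)
    return omission, commission

print(omission_commission([[45, 5], [10, 40]]))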
[Slide figure: two example error matrices showing the number of pixels in Class 1 and Class 2]
$$\text{Overall Accuracy} = \frac{\text{total number of correctly classified pixels}}{\text{total number of reference pixels}}$$
Producer Accuracy indicates how well training-set pixels of the given cover types are classified:

$$\text{Producer Accuracy} = \frac{\text{number of correctly classified pixels in each category (major diagonal)}}{\text{number of training-set pixels used for that category (column total)}}$$
User Accuracy indicates the probability that a pixel classified into a given category actually represents
that category on the ground:

$$\text{User Accuracy} = \frac{\text{number of correctly classified pixels in each category (major diagonal)}}{\text{total number of pixels classified into that category (row total)}}$$
So, from this assessment we have three measures of accuracy which address subtly different
issues:
Overall accuracy: measures the proportion of all reference pixels which are classified correctly.
User accuracy: measures the proportion of each TM class which is correct.
Producer accuracy: measures the proportion of reference pixels of each class which are classified correctly.
(A combined sketch computing all three follows.)
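A short illustrative sketch computing all three measures (matrix orientation as above; the example matrix is hypothetical):

import numpy as np

def accuracies(m):
    # Overall, producer, and user accuracy from an error matrix
    # (rows = classified category, columns = reference category)
    m = np.asarray(m, dtype=float)
    diag = np.diag(m)
    overall = diag.sum() / m.sum()    # correctly classified / all reference pixels
    producer = diag / m.sum(axis=0)   # per class: major diagonal / column total
    user = diag / m.sum(axis=1)       # per class: major diagonal / row total
    return overall, producer, user

m = [[45, 5],
     [10, 40]]
overall, producer, user = accuracies(m)
print(overall)   # 0.85
print(producer)  # [0.818..., 0.888...]
print(user)      # [0.9, 0.8]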
The kappa statistic ($\hat{k}$) is a measure of the difference between the actual agreement between the
reference data and an automated classifier, and the chance agreement between the reference
data and a random classifier.

Conceptually, $\hat{k}$ can be defined as

$$\hat{k} = \frac{\text{Observed Accuracy} - \text{Chance Agreement}}{1 - \text{Chance Agreement}}$$

The statistic serves as an indicator of the extent to which the percentage-correct values of an
error matrix are due to "true" agreement versus "chance" agreement.
A kappa of 0 suggests that the classification is very poor and the given classifier is no better
than a random assignment of pixels.
$$\text{Observed Accuracy} = \frac{\sum_{i=1}^{r} x_{ii}}{N}$$

$$\text{Chance Agreement} = \frac{1}{N^2}\sum_{i=1}^{r} (x_{i+} \cdot x_{+i})$$
where
$r$ = number of rows in the error matrix
$x_{ii}$ = number of observations in row $i$ and column $i$ (on the major diagonal)
$x_{i+}$ = total of observations in row $i$ (shown as marginal total to the right of the matrix)
$x_{+i}$ = total of observations in column $i$ (shown as marginal total at the bottom of the matrix)
$N$ = total number of observations included in the matrix
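These two quantities transcribe directly into Python (illustrative sketch, same matrix orientation and hypothetical example as above):

import numpy as np

def observed_and_chance(m):
    # Observed accuracy and chance agreement from an error matrix
    m = np.asarray(m, dtype=float)
    N = m.sum()
    observed = np.trace(m) / N  # sum of x_ii, divided by N
    chance = (m.sum(axis=1) * m.sum(axis=0)).sum() / N**2  # sum of x_i+ * x_+i, over N^2
    return observed, chance

print(observed_and_chance([[45, 5], [10, 40]]))  # (0.85, 0.5)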
$$\hat{k} = \frac{\dfrac{\sum_{i=1}^{r} x_{ii}}{N} - \dfrac{1}{N^2}\sum_{i=1}^{r} (x_{i+} \cdot x_{+i})}{1 - \dfrac{1}{N^2}\sum_{i=1}^{r} (x_{i+} \cdot x_{+i})}$$

which simplifies to

$$\hat{k} = \frac{N \sum_{i=1}^{r} x_{ii} - \sum_{i=1}^{r} (x_{i+} \cdot x_{+i})}{N^2 - \sum_{i=1}^{r} (x_{i+} \cdot x_{+i})}$$
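The simplified form translates to a short function; the cross-check against the conceptual definition uses the same hypothetical matrix as the earlier sketches:

import numpy as np

def kappa(m):
    # Kappa-hat from an error matrix, via the simplified closed form
    m = np.asarray(m, dtype=float)
    N = m.sum()
    prod = (m.sum(axis=1) * m.sum(axis=0)).sum()  # sum of x_i+ * x_+i
    return (N * np.trace(m) - prod) / (N**2 - prod)

m = np.array([[45.0, 5.0],
              [10.0, 40.0]])
N = m.sum()
observed = np.trace(m) / N
chance = (m.sum(axis=1) * m.sum(axis=0)).sum() / N**2
# Both forms agree: (observed - chance) / (1 - chance) == kappa(m)
assert abs(kappa(m) - (observed - chance) / (1 - chance)) < 1e-12
print(kappa(m))  # 0.7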
• Note: Overall Accuracy is based only on the correctly classified pixels (the major
diagonal) and excludes the omission and commission errors.
• Kappa, in addition to the correctly classified pixels, includes the off-diagonal elements
through the products of row totals and column totals (see the sketch below).
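To make that note concrete (hypothetical matrices, not from the slides): two error matrices with the same diagonal, and hence the same overall accuracy, can yield different kappa values because their row and column totals differ.

import numpy as np

def kappa(m):
    m = np.asarray(m, dtype=float)
    N = m.sum()
    prod = (m.sum(axis=1) * m.sum(axis=0)).sum()
    return (N * np.trace(m) - prod) / (N**2 - prod)

a = np.array([[45, 5], [5, 45]])   # balanced row/column totals
b = np.array([[80, 10], [0, 10]])  # skewed totals, same diagonal sum (90)
print(np.trace(a) / a.sum(), np.trace(b) / b.sum())  # 0.9 and 0.9 (same overall accuracy)
print(kappa(a), kappa(b))  # 0.8 vs ~0.615 (different kappa)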
[Slide figure: CLASSIFIED IMAGE shown alongside REFERENCE DATA]
2. For each point, compare the map class with the "true" class.
[Table: example error matrix — classification data rows (Water, Sand, Forest, Urban, Corn, Hay, with row totals) against reference data columns]
$$\hat{k} = \frac{N \sum_{i=1}^{r} x_{ii} - \sum_{i=1}^{r} (x_{i+} \cdot x_{+i})}{N^2 - \sum_{i=1}^{r} (x_{i+} \cdot x_{+i})}$$
$$\sum_{i=1}^{r} x_{ii} = 226 + 216 + 360 + 397 + 190 + 219 = 1608$$

$$\sum_{i=1}^{r} (x_{i+} \cdot x_{+i}) = (239 \times 233) + (309 \times 328) + (599 \times 429) + \cdots$$
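The diagonal sum is easy to verify (only the quantities reproduced above are checked; the remaining marginal products did not survive extraction):

diagonal = [226, 216, 360, 397, 190, 219]  # major-diagonal entries from the example
print(sum(diagonal))  # 1608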
General range for interpreting $\hat{k}$ (a small helper encoding it follows):
• $\hat{k} < 0.4$: poor
• $0.4 < \hat{k} < 0.75$: good
• $\hat{k} > 0.75$: excellent
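Illustratively, the range can be encoded as a small helper (boundary handling at exactly 0.4 and 0.75 is my choice; the slide leaves it unspecified):

def kappa_rating(k):
    # Qualitative rating per the general range above
    if k < 0.4:
        return "poor"
    elif k <= 0.75:
        return "good"
    return "excellent"

print(kappa_rating(0.7))  # good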