IPASJ International Journal of Computer Science (IIJCS)
Web Site: http://www.ipasj.org/IIJCS/IIJCS.htm
Email: editoriijcs@ipasj.org
Volume 5, Issue 10, October 2017, ISSN 2321-5992

A Blur-Invariant Local Feature Descriptor for Gaussian and Motion Blurred Image Matching
Qiang Tong¹, Terumasa Aoki¹,²

¹ Graduate School of Information Sciences (GSIS), Tohoku University, Sendai, Japan
² New Industry Creation Hatchery Center (NICHe), Tohoku University, Sendai, Japan

ABSTRACT
Even though plenty of local feature descriptors have been proposed in the past decades, the lack of robustness to blur is still one of the biggest problems of all existing descriptors. This paper presents a blur-invariant local feature descriptor for matching a blurred image (caused by camera motion, out of focus, etc.) and a non-blurred image. The proposed descriptor is based on blur-invariant moments. By encoding the blur-invariant moments of each pixel in local regions, the proposed descriptor can generate local features that are distinctive while remaining robust to blur. Experimental results show that the proposed descriptor is suitable for blurred image matching and outperforms state-of-the-art methods.

Keywords: descriptor, blur-invariant, image moment, image matching

1. INTRODUCTION
Local feature-based image matching has become a fundamental and widely applied technique in computer vision, employed in many applications such as image retrieval [1], [2], object recognition [3], 3D reconstruction [4], and wide baseline matching [5]. Compared to global feature-based image matching, the biggest advantage of local feature-based image matching is its good performance under partial occlusion, resizing and rotation of images. However, image matching between a blurred image (especially a motion-blurred image) and a non-blurred image is still a challenging task. As far as the authors know, there are still no local features that can be applied to blurred image matching, which is required for many image/video applications, such as traffic sign detection, object and road recognition for high-speed self-driving cars, image retrieval for moving cameras, face recognition, etc. For example, blurred image matching is a fundamental task for developing self-driving cars. When a self-driving car moves fast, it is difficult to automatically detect other cars, pedestrians crossing the road and/or traffic signs, since videos taken by a camera on the car are always blurred.
To solve this problem, we can first de-blur blurred images by applying deblurring algorithms [6]-[8] and then match the de-blurred images to non-blurred images with existing image matching methods. Even though this seems quite simple and straightforward, all deblurring algorithms have at least one of the following two drawbacks. The first drawback is a high computational cost, since these algorithms always need an iterative process to correctly estimate the parameters of the blur. The second drawback is that these algorithms may generate new artifacts such as blocking or ringing, which usually degrade the performance of image matching. Hence, we believe that the best way to accomplish blurred image matching is to design a new local feature-based image matching method including a blur-invariant interest point detector and a blur-invariant local feature descriptor.
Usually, a local feature-based image matching method includes an interest point detector and a local feature descriptor. For the two images of an image pair, an interest point detector finds hundreds or thousands of interest points in each image. Then, a local feature descriptor describes the surroundings of each interest point and generates a feature vector. Finally, by matching these feature vectors between the two images, we can judge whether the images are wholly or partially the same. Unfortunately, both the detector and the descriptor of all existing methods are sensitive to strong blur. For example, one of the best image matching methods, SIFT [9], is not good at blurred (especially motion-blurred) image matching, because both its detector and its descriptor compute interest points and feature vectors from pixel intensities, which are not robust to blur. Other existing methods such as SURF [10], ORB [11], BRISK [12], DAISY [13], and LIOP [14] cannot work well for blurred image matching for the same reason.

Figure 1 A real-world blurred image matching example: blue lines and blue circles are valid matched point pairs and interest points respectively. The result of SIFT (a) shows fewer than 20 correctly matched points, while the result of our method (b) shows more than 200 correctly matched points (only 100 are shown).
To address the lack of a blur-robust detector, we previously presented a blur-invariant interest point detector [15]. Traditional interest point detectors [16], [9], [10] usually detect corners or blobs in images by computing gradients or intensities of pixels, none of which are robust to blur. In contrast, our detector [15] is based on a new concept called Moment Symmetry (MS); MS is similar to reflection symmetry but is defined by looser conditions. Our detector applies blur-invariant moments to detect MS regions (the regions extracted by the definition of MS) in images as interest points, since MS regions are very robust to blur. As a result, our detector can detect the same points in a blurred image and its non-blurred counterpart.
However, we still need a blur-invariant descriptor. Hence, in this paper, we present a blur-invariant local feature descriptor for blurred image matching. The proposed descriptor is based on blur-invariant moments. For each interest point, we first calculate 3 blur-invariant moments of each pixel in a local region surrounding the point and use these blur-invariant moments in place of the intensity of each pixel. Then, as existing descriptors do, we encode the blur-invariant moments of all the pixels in the local region to generate a feature vector. In this way, the proposed descriptor is distinctive while remaining robust to blur. As shown in Fig.1 (b), the matching method that applies the proposed descriptor is good at blurred image matching, unlike state-of-the-art methods (as shown in Fig.1 (a)).
In this paper, we describe related work in section 2 and the proposed descriptor in section 3. We give our experimental results in section 4 and conclude this paper in section 5.

2. RELATED WORK
So far, plenty of local feature descriptors have been proposed, based on derivatives [17], [2], complex filters [18], steerable filters [19], phase [20], etc. One of the most popular is the SIFT descriptor [9]. Compared to other descriptors, the SIFT descriptor has shown higher performance for image matching [21] and seems to be the most widely used descriptor nowadays. It computes a histogram of oriented gradients around each detected interest point and generates a highly distinctive feature vector for each point. However, according to [21], the SIFT descriptor cannot be used for blurred image matching, like the other descriptors mentioned above. One of the biggest problems is that the feature vectors generated by the SIFT descriptor are based on gradients of pixels, which are not robust to blur. Other local feature descriptors, such as PCA-SIFT [22] and SURF [10], are also based on local gradient histograms like the SIFT descriptor, apart from some speed-up strategies. Furthermore, descriptors proposed later, such as BRIEF [23], BRISK [12], and RIFF [24], also focus only on lower computational cost. That is why, as far as the authors know, there is no local feature descriptor which can be applied to blurred image matching.
On the other hand, a descriptor based on moment invariants has been introduced in [25]. This descriptor computes moment invariants from local regions of images and directly applies them as feature vectors. However, it is not a blur-invariant descriptor, since it only focuses on robustness to affine transforms and illumination changes. Furthermore, according to [21], it is not distinctive when compared to other local feature descriptors.

3. BLUR-INVARIANT LOCAL FEATURE DESCRIPTOR


In this section, we will describe the details of a novel blur-invariant local feature descriptor.
3.1 Methodology
As described in section 2, many local feature descriptors have proven their good discriminative power. A good example is the SIFT descriptor, whose process flow is as follows. First, the gradients of the pixels around each interest point are divided into square grids and orientation bins to create a histogram. Then the magnitudes of these gradients are accumulated into the corresponding bins of the histogram to generate a feature vector for each interest point. The descriptor generated in this way is very distinctive [21]. Unfortunately, all existing local feature descriptors, including the SIFT descriptor, cannot be applied to blurred image matching since they compute features from pixel intensities, which are not robust to blur.
On the other hand, plenty of blur-invariant image moments (or moment invariants) exist nowadays according to [26], [27]. However, as far as the authors know, all of them are global features, and it is difficult to introduce them directly into a local feature descriptor. Compared to the other existing local feature descriptors [21], their lower distinctive power is a crucial problem.
Hence, we propose a blur-invariant local feature descriptor which solves all of the problems mentioned above at the same time. First, we choose some blur-invariant moments of each pixel in a local region and use them in place of the intensity or gradient of the pixel. Then we encode these blur-invariant moments of all pixels in the local region, as the existing descriptors (SIFT, SURF, etc.) do, to generate a feature vector for the local region. In this way, the proposed descriptor can be highly distinctive while keeping blur-invariance.
3.2 Blur-invariant moments
First, we define the image moment of a local region whose size is $N \times N$ ($N$ is odd) as follows:

$$m_{pq} = \sum_{x=-(N-1)/2}^{(N-1)/2} \; \sum_{y=-(N-1)/2}^{(N-1)/2} x^{p} y^{q} f(x,y) \qquad (1)$$

where $p+q$ is the order of the moment and $f(x,y)$ represents the image function of the local region.
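Equation (1) amounts to a weighted sum over centred pixel coordinates, as in the following minimal NumPy sketch (the function name is ours):

```python
import numpy as np

def moment(patch: np.ndarray, p: int, q: int) -> float:
    """Geometric moment m_pq of an N x N patch (N odd), eq. (1).
    Coordinates run symmetrically from -(N-1)/2 to (N-1)/2."""
    n = patch.shape[0]
    c = np.arange(n) - (n - 1) // 2          # centred coordinates
    x, y = np.meshgrid(c, c, indexing="ij")  # x: rows, y: columns
    return float(np.sum((x ** p) * (y ** q) * patch))
```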
Usually a blurred image is considered as the convolution of its non-blurred image with a blur kernel, so each pixel of the blurred image is affected by its surrounding pixels in the non-blurred image. Unlike a global feature descriptor, a local feature descriptor describes local regions of an image, so pixels at the boundary of each local region in a blurred image are affected by pixels lying outside the local region in the non-blurred image. This causes a boundary error coming from the unknown outside part of the non-blurred local region; this boundary error is inevitable and can sometimes be incredibly large when a moment is computed from small regions of blurred images. Hence, in this section, we first check some blur-invariant moments whose orders are up to 5, to find which moments are robust to blur and boundary error.

Figure 2 An example of the average dissimilarity of the values calculated by blur-invariant moments between blurred local regions and their non-blurred regions.
The average dissimilarity between 1700 local region pairs from 100 non-blurred images and their 500 (Gaussian and 4-directional motion) blurred images is shown in Fig.2. First, we select local region pairs (a blurred region and its non-blurred region) with the same but changeable sizes; the sizes are quite small relative to the blur kernel. Then, we compute eight blur-invariant moments from each local region at each size and compare their values. These are 3rd- and 5th-order moment invariants defined in [26], [27]; some other 3rd- and 5th-order moment invariants are not shown here due to space limitations. Theoretically speaking, the dissimilarity of the moments of each pair should be zero, since the moments are all blur-invariant. Therefore, we can consider that the dissimilarity shown in Fig.2 mostly comes from boundary error. We can see that the dissimilarities of three of the moments are much smaller than those of the others. Hence, we consider these 3 moments highly robust to blur and apply them in the proposed descriptor; below we denote them $I_1$, $I_2$ and $I_3$.
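The three selected invariants themselves are defined in [26], [27] and are not reproduced here. As a minimal self-contained illustration of the principle, the sketch below checks that a third-order central moment, the simplest moment invariant under centrosymmetric blur, is nearly unchanged when a region is cut out of a Gaussian-blurred image; the residual difference is the boundary error discussed above (SciPy's Gaussian filter stands in for the blur kernel):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def central_moment(patch, p, q):
    """Central moment mu_pq of a 2-D patch."""
    n = patch.shape[0]
    c = np.arange(n, dtype=float)
    x, y = np.meshgrid(c, c, indexing="ij")
    m00 = patch.sum()
    xc, yc = (x * patch).sum() / m00, (y * patch).sum() / m00
    return (((x - xc) ** p) * ((y - yc) ** q) * patch).sum()

rng = np.random.default_rng(0)
sharp = rng.random((201, 201))                 # synthetic image
blurred = gaussian_filter(sharp, sigma=2.0)    # centrosymmetric blur

region = np.s_[80:121, 80:121]                 # a 41 x 41 local region
for img, label in [(sharp, "sharp"), (blurred, "blurred")]:
    print(label, central_moment(img[region], 3, 0))
# For an infinitely supported image the two values would be identical;
# the gap observed here is the boundary error caused by pixels outside
# the region leaking in under blur.
```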
3.3 Spatial binning
Even though we can apply these 3 blur-invariant moments $I_1$, $I_2$ and $I_3$ in our descriptor, low discriminative power is still a huge problem. In this section, we describe how to encode these 3 moments to improve the discriminative power of the proposed descriptor.
The process flow of the proposed descriptor is shown in Fig.3. In short, our local feature descriptor generates a high-dimensional, blur-invariant and distinctive feature vector for each local region by using the location, the magnitude of the blur-invariant moments, and the neighbor relationships of its inside pixels.


For each interest point, we choose a local region whose center is the interest point (see Fig.3 (a)). The size of the region is proportional to the scale $s$ of the point, which is decided by the interest point detector; in Fig.3 the size is 20×20. Then, for each pixel in the region, we compute its 3 blur-invariant moments $I_1$, $I_2$ and $I_3$ from a small computing region centered at the pixel, as shown in Fig.3 (b). In order to make the proposed descriptor independent of the sizes of local regions, we define the normalized moment as

$$\bar{m}_{pq}(x,y) = \frac{m_{pq}(x,y)}{N^{\,p+q+2}} \qquad (2)$$

and compute the three blur-invariant moments from these normalized moments. We also define and compute the magnitude of the 3 moments (shown in Fig.3 (c)) as

$$M(x,y) = \sqrt{I_1(x,y)^2 + I_2(x,y)^2 + I_3(x,y)^2} \qquad (3)$$

where $(x,y)$ are the coordinates of each pixel, and $\bar{m}_{pq}(x,y)$ and $M(x,y)$ are the normalized moment and the magnitude of moments respectively. In this way, these 3 moments can be easily applied as one single value.
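Given the three per-pixel invariant maps (computed by whatever implementation of [26], [27] one uses), equation (3) reduces to a single NumPy expression; the helper name is ours:

```python
import numpy as np

def magnitude_map(i1: np.ndarray, i2: np.ndarray, i3: np.ndarray) -> np.ndarray:
    """Per-pixel magnitude of the three blur-invariant maps, eq. (3).
    The inputs are assumed to be computed from the size-normalized
    moments of eq. (2)."""
    return np.sqrt(i1 ** 2 + i2 ** 2 + i3 ** 2)
```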

Figure 3 The process flow of our descriptor: a local region surrounding an interest point (red point) is shown in (a); the size of this region is 20×20 and each small black grid in the region represents one pixel. The 3 moments of each pixel are computed and shown in (b), the magnitudes of the moments of the pixels are shown in (c), an example of moment quantization is shown in (d), and magnitude accumulation and feature vector generation are shown in (e).
Like the existing descriptors such as SIFT, we divide the whole region into sub-regions and create a histogram with several bins for each sub-region. For example, we set 4×4 = 16 sub-regions, as shown in Fig.3 (c), (d), and 8 bins per sub-region, as shown in Fig.3 (e). However, the difference between our descriptor and the existing descriptors is that each bin represents the result of a moment quantization as follows.

(4)

where $(x,y)$ are the coordinates of the given pixel, and $b(x,y)$ represents the bin number for this pixel.


As shown in Fig.3 (d), the black grid is the given pixel. We compare the normalized moments and decide which bin the given pixel belongs to, based on its spatial information (which sub-region it lies in) and the relationship between its values and those of its neighbor pixels. In this way, each pixel in the local region is assigned to its corresponding bin according to equation (4). Note that our preliminary experiment shows that pixels in a blurred local region and its non-blurred local region generally have the same quantization results.
Finally, we accumulate the magnitude of moments of each pixel into its bin to generate our 4×4×8 = 128-dimensional feature vector. For example, if a pixel is in the 10th grid (the 2nd column, the 3rd row) and its bin number is 3 (the 3rd bin), the magnitude of moments of the pixel will be accumulated into the 75th bin of the whole histogram shown in Fig.3 (e).
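A compact sketch of this accumulation step under the 4×4 / 8-bin setting is given below; `bin_idx` is assumed to hold each pixel's bin number from the moment quantization of equation (4), which is not reproduced here:

```python
import numpy as np

def encode_region(mag: np.ndarray, bin_idx: np.ndarray,
                  grids: int = 4, bins: int = 8) -> np.ndarray:
    """Accumulate the per-pixel moment magnitudes `mag` into a
    (grids * grids * bins)-dimensional histogram. `bin_idx` holds each
    pixel's quantized bin number in [0, bins)."""
    n = mag.shape[0]                 # e.g. a 20 x 20 local region
    step = n // grids
    hist = np.zeros((grids, grids, bins))
    for r in range(n):
        for c in range(n):
            gr = min(r // step, grids - 1)   # sub-region row
            gc = min(c // step, grids - 1)   # sub-region column
            hist[gr, gc, bin_idx[r, c]] += mag[r, c]
    return hist.ravel()              # 4 * 4 * 8 = 128 dimensions
```

With this layout, a pixel in the 2nd column and 3rd row of sub-regions (the 10th grid) with bin number 3 lands at flattened index (10 − 1) × 8 + 3 = 75, matching the example above.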

4. EXPERIMENTAL RESULTS
4.1 Datasets
In our experiments, we used two different datasets: Dataset-S and Dataset-N. Dataset-S contains 100 non-blurred images, including Lena, Baboon, and images from the Columbia database [28], and their 3200 synthetically blurred images. All synthetic images are generated by motion blur kernels (in 8 directions) and Gaussian blur kernels. The sizes of the blur kernels range from 1/12 to 1/6 of the image sizes. Also, 200 blurred images are resized to between 1/2 and 2 times the sizes of the non-blurred images.
On the other hand, Dataset-N includes the "Bikes" and "Trees" sets (each with 1 non-blurred image and 5 blurred images) from [21], the "Benchmark" set (4 non-blurred images and 48 blurred images) from [29], and "Posters" (5 non-blurred images and 20 blurred images) captured by our camera. All images in Dataset-N are real-world images extracted from natural videos. Some examples of Dataset-S and Dataset-N are shown in Fig.4.
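The synthetic blurring of Dataset-S can be reproduced along the following lines; the kernel length, sigma and angle values here are illustrative, not the exact ones used to build the dataset:

```python
import numpy as np
from scipy.ndimage import rotate

def motion_kernel(length: int, angle_deg: float) -> np.ndarray:
    """Linear motion-blur kernel: a horizontal line of ones rotated
    to the requested direction, normalized to sum to 1."""
    k = np.zeros((length, length))
    k[length // 2, :] = 1.0
    k = rotate(k, angle_deg, reshape=False, order=1)
    return k / k.sum()

def gaussian_kernel(size: int, sigma: float) -> np.ndarray:
    """Separable Gaussian blur kernel, normalized to sum to 1."""
    c = np.arange(size) - (size - 1) / 2
    g = np.exp(-c ** 2 / (2 * sigma ** 2))
    k = np.outer(g, g)
    return k / k.sum()

# Eight motion directions, e.g. evenly spaced over 180 degrees.
kernels = [motion_kernel(31, a) for a in np.arange(0.0, 180.0, 22.5)]
kernels.append(gaussian_kernel(31, 5.0))
# blurred = scipy.signal.convolve2d(image, kernels[0], mode="same")
```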

Figure 4 Some examples of our datasets: images from Dataset-S and "Trees" are shown in (a); images from "Posters" and "Benchmark" are shown in (b).


4.2 Comparison of image matching methods


In this section, we show the results of a comparison between the existing matching methods (SIFT and SURF) and our method, which combines the detector in [15] with the proposed descriptor. Since SIFT and SURF (both their detectors and their descriptors) seem to work better than the other existing methods according to our preliminary experiments, we only compare our method to SIFT and SURF due to space limitations.
For each image pair, consisting of a blurred image and its non-blurred image, we use an interest point detector to detect interest points in both images. Then, for each interest point in one image of the pair, the distances between this point and all interest points in the other image are computed using a local feature descriptor. If two points of a point pair are nearest to each other in the multidimensional vector space, this point pair is considered a matched pair. After all matched pairs of an image pair are identified, we apply an outlier-elimination algorithm to remove outlier pairs, and we regard the retained inlier pairs as correct matches. The number of correct matches $N_m$ is very important to image matching. As evaluation metrics for local feature descriptors, we use the number of correct matches and another criterion called "matching precision", defined as follows:

$$\text{precision} = \frac{N_m}{N_c} \times 100\% \qquad (5)$$

where $N_c$ is the number of corresponding interest points between the blurred image and the non-blurred image. When two interest points from a blurred image and a non-blurred image are located at the same physical position, we call these two points "corresponding points", and we consider that a descriptor fails to match the image pair if $N_m$ for this pair is less than 50.
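This protocol can be summarized as follows; here OpenCV's RANSAC-based homography estimation stands in for the outlier-elimination step, and N_c is assumed to be supplied by the detector evaluation:

```python
import numpy as np
import cv2

def matching_score(pts1, pts2, n_corresponding):
    """pts1 / pts2: matched keypoint coordinates, two (M, 2) arrays.
    Counts inliers of a RANSAC homography fit as correct matches N_m
    and computes the matching precision of eq. (5) from the number of
    corresponding points N_c."""
    H, inlier_mask = cv2.findHomography(
        np.float32(pts1), np.float32(pts2), cv2.RANSAC, 3.0)
    n_m = int(inlier_mask.sum())
    precision = 100.0 * n_m / n_corresponding
    return n_m, precision   # a pair fails if n_m < 50
```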
Table 1: Comparison of matching methods for Dataset-S

Method               Blur size/Image size   Avg. N_m   Avg. precision (%)
The proposed method  1/12                   297        71
                     1/9                    236        67
                     1/6                    172        65
SIFT                 1/12                   34         60
                     1/9                    23         55
                     1/6                    15         55
SURF                 1/12                   34         62
                     1/9                    22         56
                     1/6                    16         51

Table 1 shows the comparison between our method and SIFT and SURF on Dataset-S. As shown in this table, we divide all blurred images into 3 classes according to the sizes of the blur kernels, regardless of whether the blur is Gaussian or motion blur, and we show the average results in the third and fourth columns due to space limitations. We can see that the existing methods always fail to match blurred images according to $N_m$. On the other hand, our method is good at blurred image matching. For both motion-blurred and Gaussian-blurred images, the $N_m$ of our method is much (roughly ten times) higher than that of the existing methods. Furthermore, the matching precision of our method is also about 10% higher than the matching precision of SIFT. Some examples of image matching between image pairs from Dataset-S are shown in Fig.5. We can see that our matching method is good at blurred image matching and outperforms the state-of-the-art methods.
Table 2: Comparison of matching methods for Dataset-N

Method               Sub-set   Avg. N_m   Avg. precision (%)
The proposed method  B&T       1389       63
                     Bench     351        66
                     Post      371        41
SIFT                 B&T       369        35
                     Bench     17         27
                     Post      24         17
SURF                 B&T       218        31
                     Bench     15         23
                     Post      29         11

Table 2 shows the comparison on Dataset-N. We used 4 subsets: "Bikes" and "Trees" represented by 'B&T', "Benchmark" represented by 'Bench', and "Posters" represented by 'Post'. We only show average results due to space limitations. Note that, even though the images of "Bikes" and "Trees" are famous and widely used in computer vision, they contain only small blurs for our purposes, so we only used the 2 strongly blurred (fifth and sixth) images in "Bikes" and "Trees". We can see that our matching method is much better than the existing methods according to $N_m$, and the matching precision of our method is 20%~30% higher than that of SIFT.

Figure 5 Some examples of blurred image matching: the results of the SIFT descriptor with all its matched pairs are shown in (a), (c), (e); the results of our descriptor with only 100 matched pairs per image pair are shown in (b), (d), (f).


Some examples of image matching between image pairs from Dataset-N are shown in Fig.6. From Fig.1 and Fig.6, we can see that our matching method is much better than the state-of-the-art methods for blurred image matching.

Figure 6 Some examples from Dataset-N: the results of SIFT with all its matched pairs are shown in (a), (c), (e); the results of our method with only 100 matched pairs per image pair are shown in (b), (d), (f).


4.3 Evaluation for local feature descriptor


In this section, we focus only on local feature descriptors and compare them. To eliminate the effect of detectors, we use our blur-invariant detector [15] for all descriptors.
First, we adjusted the parameters of our descriptor (the number of sub-regions and bins, as described in section 3.3). In Fig.7, "The proposed-128" represents our default setting (4×4 sub-regions with 8 bins each, i.e., 128 dimensions), while "MBE-32", "MBE-64" and "MBE-256" represent 32-, 64- and 256-dimensional variants with other sub-region and bin settings. We can see that the matching precision of the descriptor improves as the dimension of the descriptor grows, for both datasets. However, we consider the proposed 128-dimensional setting the best choice when the efficiency of the descriptor is taken into account, since its precision is much better than the low-dimensional variants and only a little inferior to the high-dimensional one. In addition, the number of correct matches of the proposed setting is good too, and no failed case exists.
As shown in Fig.7, we also compared our descriptor to descriptors that directly apply traditional moments. "Moment-10" represents a 10-dimensional descriptor that applies 10 (order up to 5) blur-invariant moments [26], [27] computed from the whole region. "Moment-160" means a 160-dimensional descriptor: we computed those same 10 moments from each of the 4×4 = 16 sub-regions of the whole region and applied the total of 160 moments as a feature vector. As shown in Fig.7, the descriptors that directly apply the traditional moments are not distinctive when compared to local feature descriptors, just as we described in section 3.1.

Figure 7 Parameter tuning of our descriptor: (a) and (b) show the results for Dataset-S and Dataset-N respectively.

Figure 8 Results of the comparison: (a) and (b) show the results for Dataset-S and Dataset-N respectively.


We also compared our descriptor to the existing descriptors (the SIFT and SURF descriptors), since we found that these are much more robust to blur than the other existing descriptors according to [21] and the results of our preliminary experiments. So we only show the comparison of our descriptor with the descriptors of SIFT and SURF, which we consider the state-of-the-art descriptors.

As shown in Fig.8, the proposed descriptor outperforms the state-of-the-art descriptors on both Dataset-S and Dataset-N. The matching precision of the proposed descriptor is 15%~48% higher than that of the existing descriptors. Furthermore, our descriptor always works well for all images in these two datasets, unlike the SIFT descriptor, which failed to match about 33% of the images in Dataset-S and 40% in Dataset-N according to $N_m$; the SURF descriptor failed to match about 37% in Dataset-S and 41% in Dataset-N.

Some examples of blurred image matching are shown in Fig.9. We can see that our descriptor (results shown in (b), (d), (f)) is much better than the state-of-the-art descriptor (results shown in (a), (c), (e)). This shows that our descriptor is distinctive and robust to blur.

Figure 9 Some examples of blurred image matching: the results of the SIFT descriptor with all its matched pairs are shown in (a), (c), (e); the results of our descriptor with only 100 matched pairs per image pair are shown in (b), (d), (f).


5. CONCLUSION AND FUTURE WORK


In this paper, we proposed a blur-invariant local feature descriptor which outperforms the state-of-the-art descriptors and is good at blurred image matching. The proposed descriptor is based on blur-invariant moments. By encoding these blur-invariant moments, our descriptor is highly robust to blur while keeping distinctiveness.
However, we believe blurred image matching is still a wide-open area, and there are still some limitations and problems that need to be solved. For example, we want to try other, more complex patterns to describe the local region and generate a more distinctive and blur-invariant descriptor. Furthermore, in this paper we focused only on motion-blurred and Gaussian-blurred image matching; as future work, we want to make our method suitable for matching other types of blurred images.

References
[1] K. Mikolajczyk, C. Schmid, Indexing based on scale invariant interest points, in: ICCV, pp. 525-531, 2001.
[2] C. Schmid, R. Mohr, Local grayvalue invariants for image retrieval, PAMI, 19(5), pp. 530-534, 1997.
[3] V. Ferrari, T. Tuytelaars, L. Van Gool, Simultaneous object recognition and segmentation by image exploration, in: ECCV, pp. 40-54, 2004.
[4] C. Schmid, et al, Selection of scale-invariant parts for object class recognition, in: ICCV, pp. 634-639, 2003.
[5] T. Tuytelaars, L. Van Gool, Matching widely separated views based on affine invariant regions, IJCV, 59(2), pp. 61-85, 2004.
[6] J.F. Cai, H. Ji, C. Liu, Z. Shen, Blind motion deblurring using multiple images, J. Comput. Physics, 228(14), pp. 5057-5071, 2009.
[7] Z. Hu, M. Yang, Good regions to deblur, in: ECCV, pp. 59-72, 2012.
[8] C. Schuler, M. Hirsch, S. Harmeling, Learning to Deblur, PAMI, 38(7), pp. 1439-1451, 2016.
[9] D. Lowe, Distinctive image features from scale-invariant keypoints, IJCV, 60(2), pp. 91-110, 2004.
[10] H. Bay, T. Tuytelaars, L. Van Gool, SURF: Speeded up robust features, in: ECCV, pp. 404-417, 2006.
[11] E. Rublee, et al, ORB: An efficient alternative to SIFT or SURF, in: ICCV, pp. 2564-2571, 2011.
[12] S. Leutenegger, M. Chli, R. Siegwart, BRISK: Binary robust invariant scalable keypoints, in: ICCV, pp. 2548-2555, 2011.
[13] E. Tola, V. Lepetit, P. Fua, Daisy: An efficient dense descriptor applied to wide-baseline stereo, PAMI, 32(5),
pp. 815-830, 2010.
[14] Z. Wang, B. Fan, F. Wu, Local intensity order pattern for feature description, in: ICCV, pp. 603-610, 2011.
[15] Q. Tong, T. Aoki, Moment symmetry: A novel method for interest point detection to match blurred and non-blurred images, in: ICIVC, pp. 26-30, 2016.
[16] K. Mikolajczyk, C. Schmid, Scale & affine invariant interest point detectors, IJCV, 60(1), pp. 63-86, 2004.
[17] L. Florack, B. ter Haar Romeny, J. Koenderink, M. Viergever, General intensity transformations and second order invariants, in: Scandinavian Conference on Image Analysis, pp. 338-345, 1991.
[18] A. Baumberg, Reliable feature matching across widely separated views, in: CVPR, pp. 774-781, 2000.
[19] W.T. Freeman, E.H. Adelson, The design and use of steerable filters, PAMI, 13(9), pp. 891-906, 1991.
[20] G. Carneiro, A.D. Jepson, Multi-scale phase-based local features, in: CVPR, pp. 736-743, 2003.
[21] K. Mikolajczyk, C. Schmid, A performance evaluation of local descriptors, PAMI, 27(10), pp. 1615-1630, 2005.
[22] Y. Ke, R. Sukthankar, PCA-SIFT: A more distinctive representation for local image descriptors, in: CVPR, pp.
511-517, 2004.
[23] M. Calonder, V. Lepetit, C. Strecha, P. Fua, BRIEF: Binary robust independent elementary features, in: ECCV, pp. 778-792, 2010.
[24] G. Takacs, et al, Unified real-time tracking and recognition with rotation-invariant fast features, in: CVPR, pp. 934-941, 2010.
[25] F. Mindru, T. Tuytelaars, L. Van Gool, T. Moons, Moment invariants for recognition under changing viewpoint and illumination, CVIU, 94(1-3), pp. 3-27, 2004.
[26] J. Flusser, T. Suk, Degraded image analysis: an invariant approach, PAMI, 20(6), pp. 590-603, 1998.
[27] J. Flusser, et al, Recognition of images degraded by linear motion blur without restoration, in: Theoretical
Foundations of Computer Vision, pp. 37-51, 1996.
[28] http://www1.cs.columbia.edu/CAVE/software/softlib/coil-20.php
[29] R. Köhler, S. Harmeling, Recording and playback of camera shake: Benchmarking blind deconvolution with a real-world database, in: ECCV, pp. 27-40, 2012.
