
REDUCTION OF INTERPOLATION ERRORS WHEN USING LOCAL

CORRELATION TRACKING FOR MOTION DETECTION

H. E. POTTS, R. K. BARRETT and D. A. DIVER


Department of Physics, University of Glasgow, Glasgow G12 8QA, U.K.
(e-mail: hugh@astro.gla.ac.uk)

(Received 28 April 2003; accepted 28 June 2003)

Abstract. Local correlation tracking (LCT) is a commonly used motion tracking technique, par-
ticularly in solar physics. When used to track motions smaller than one pixel per time sample,
interpolation of the original data is required. We demonstrate that it is possible to introduce large
systematic errors by using an inappropriate interpolation method, and describe how to avoid these
errors. The effect of these errors on the calculated velocity field is demonstrated on simulated solar
granulation data.

1. Introduction

Local correlation tracking (LCT) is a method commonly used in the solar com-
munity for determining velocity fields in time-resolved image data. November and
Simon (1988) first demonstrated that this method could be used to measure pre-
cisely the proper-motion of solar granulation, using high-resolution images from
the Sacramento Peak Vacuum Tower Telescope. Since then LCT has been used
on a wide variety of data sources at many different spatial scales to study solar dy-
namics. Title et al. (1989) carried out a detailed study of the statistical properties of
granulation using data from SOUP on Spacelab-2. The technique has been particu-
larly useful for analysing SOHO MDI data. High-resolution MDI data (0.6 arc sec
pixels) was used to study the long-term evolution of supergranules (Shine, Simon,
and Hurlburt, 2000) over a 45.5 hr run, and also by De Rosa, Duvall, and Toomre
(2000) to compare with the results from time-distance helioseismology. Lower-
resolution, full-disk MDI dopplergrams (2 arc sec pixels) were used with LCT
to track mesogranules by Lisle, De Rosa, and Toomre (2000) in order to study
the interaction between magnetic elements and supergranular flows. LCT has also
been used with high-resolution white-light images from TRACE to measure the
supergranular flows (Krijger, Roudier, and Rieutord, 2002).
The method involves calculating the rigid translation of small image elements
between consecutive frames of data. Although LCT is computationally intensive,
it is simple to implement, robust and flexible, and can give very accurate results,
even when the image features that are being tracked are short-lived. It is possible,
however, to introduce large systematic errors into the results, by not choosing
interpolation algorithms with sufficient care. Interpolation errors were noted by

Solar Physics 217: 69–78, 2003.


© 2003 Kluwer Academic Publishers. Printed in the Netherlands.

Figure 1. A simple representation of the LCT method.

Hurlburt et al. (1995) in a paper assessing the accuracy of the LCT algorithm used
at the Lockheed Solar and Astrophysics Laboratory. This paper discusses the cause
and magnitude of these errors and describes methods for avoiding them.

2. The Local Correlation Tracking (LCT) Method

2.1. INTEGER SHIFTS

An illustration of the simplest form of LCT is shown in Figure 1. The two graphs
represent snapshots of the same area of a surface at two consecutive time points.
The first frame is broken up into many small subimages; one is shown by the box.
Subimages of the same size are then chosen on the subsequent frame at different
x and y pixel translations. The correlation between the two subimages is then
calculated, based on the individual pixel differences, until the shift that gives the
maximum correlation is found. This is repeated for all subimages, and so builds up
a map of the local shifts between the consecutive frames.
The size of the subimages, and hence the spatial resolution of the velocity
obtained, needs to be chosen with care. The subimages must not be significantly
smaller than the variation scale of the features that are to be tracked. Larger subim-
ages will give better signal-to-noise ratios, but with poorer spatial resolution. The
size of the shifts measured is limited to integer pixel shifts, and this method is
therefore only suitable for high-resolution images with large shifts between the
frames. In the case where sub-pixel precision is required, interpolation of some
kind needs to be used to calculate the shifted subimages.
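As a concrete sketch, the integer-shift search described above can be written in a few lines (Python/NumPy; the function and parameter names are illustrative, not taken from any published implementation):

```python
import numpy as np

def best_integer_shift(frame1, frame2, x0, y0, size, max_shift=3):
    """Search integer (dx, dy) translations of the subimage of frame1 anchored
    at (x0, y0) against frame2, and return the shift that maximises the
    correlation (here measured as the negated sum of squared pixel
    differences). A sketch of the simplest, integer-shift form of LCT."""
    ref = frame1[y0:y0 + size, x0:x0 + size]
    best_corr, best_shift = -np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            cand = frame2[y0 + dy:y0 + dy + size, x0 + dx:x0 + dx + size]
            corr = -np.sum((ref - cand) ** 2)
            if corr > best_corr:
                best_corr, best_shift = corr, (dx, dy)
    return best_shift
```

Repeating this search over a grid of subimage positions builds up the map of local shifts; a production implementation would normalise the correlation measure and handle subimages near the image edges.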

2.2. INTERPOLATED SHIFTS

To track motions with greater velocity resolution than the size of a single pixel,
the shifted images need to be generated using some kind of spatial interpolation.
As most interpolation methods introduce amplitude distortions which are a func-
tion of the interpolation distance, comparing the original subimage with a shifted
subsequent image as described can cause problems with the correlation measure-
ment. Figure 2 clearly illustrates the amplitude reduction that is caused by linear
interpolation. A simple way around this problem is to take the two frames and
shift them both equal distances, but in opposite directions, introducing the same
amplitude distortion into both. Amplitude errors then cancel out in the correlation
calculation.
This step alone, however, is not adequate to avoid all interpolation errors, espe-
cially if the data is sampled such that it contains spatial structures with scale lengths
close to the sampling interval. In the next section the sources of interpolation error
are examined in more detail.
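The cancellation of amplitude errors can be sketched in one dimension, assuming linear interpolation and a coarsely sampled sine wave (Python/NumPy; the wave and shift values are illustrative):

```python
import numpy as np

k = 2 * np.pi / 3.0                  # wavenumber: about 3 samples per cycle
x = np.arange(40, dtype=float)
y = np.sin(k * x)                    # coarsely sampled test signal

def shift_linear(y, delta):
    """Resample y at positions x + delta by linear interpolation."""
    xp = np.arange(len(y), dtype=float)
    return np.interp(xp + delta, xp, y)

def fitted_amplitude(sig, pos):
    """Least-squares amplitude of a wavenumber-k sinusoid through sig."""
    basis = np.column_stack([np.sin(k * pos), np.cos(k * pos)])
    coeff, *_ = np.linalg.lstsq(basis, sig, rcond=None)
    return float(np.hypot(coeff[0], coeff[1]))

# Shift by equal amounts in opposite directions (interior points only, to
# avoid edge clamping in np.interp): the amplitude distortion is identical
# for the two half-shifts, so it cancels when the signals are correlated.
amp_plus = fitted_amplitude(shift_linear(y, +0.25)[1:-1], (x + 0.25)[1:-1])
amp_minus = fitted_amplitude(shift_linear(y, -0.25)[1:-1], (x - 0.25)[1:-1])
print(amp_plus, amp_minus)           # equal, and both below the true amplitude 1
```

Both half-shifted signals suffer the same amplitude reduction, which is why the equal-and-opposite trick removes amplitude (but not phase) errors from the correlation.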

3. Interpolation Problems

All interpolation methods rely on some estimate of the properties of the continuous
signal between the discrete sample points. If these were known precisely, and the
data adequately sampled, the interpolation would give an exact value. Methods
vary from the simplest, of assuming that the value of the data varies linearly in
each direction between the sample points, to ones involving higher derivatives and
many sample points of the original data for each interpolated point. It is important
to note that choosing a sophisticated, high-order method does not necessarily result
in more accurate interpolation. Indeed if the data is noisy or sparsely sampled, high-
order interpolants can result in substantial errors, often greater than those from
simple linear interpolation.
Interpolated LCT relies critically on being able to measure small phase dif-
ferences between two sets of interpolated data. This means that to evaluate any
interpolation scheme both the phase and amplitude errors must be considered.
Merely generating a smooth curve that goes through all the data points is not
adequate.
In order to see how phase and amplitude errors occur we will examine a simple
system: a linearly interpolated sine wave.
Consider a signal y = sin x, coarsely sampled (but above the critical Nyquist
sampling rate) at spatial intervals Δ. In order to apply a rigid sub-pixel shift to
the data we need to resample this data, with the same interval, but offset from the
original samples by a distance δ. Consider two consecutive points y_i = sin x_i and
y_{i+1} = sin x_{i+1} = sin(x_i + Δ). The coordinates of a linearly interpolated point
y_n at x_n = x_i + δ are:

Figure 2. An illustration of the phase and amplitude errors that can be introduced by interpolation.
Two signals are generated by fitting sine waves to the results of linear interpolation of a coarsely
sampled (black crosses) sine wave. The dotted line represents the original continuous function.


\[
y_n = y_i + \frac{\delta}{\Delta}\,(y_{i+1} - y_i)
    = \sin x_i + \frac{\delta}{\Delta}\left[\sin(x_i + \Delta) - \sin x_i\right]
    = \sin x_i \left[1 + \frac{\delta}{\Delta}(\cos\Delta - 1)\right]
      + \frac{\delta}{\Delta}\,\cos x_i \sin\Delta .
\tag{1}
\]
From this it can clearly be seen that the interpolated points make up another
sine wave with the same frequency as the original, but with different phase and
amplitude. The phase shift φ is given by

\[
\tan\phi = \frac{\dfrac{\delta}{\Delta}\,\sin\Delta}{1 + \dfrac{\delta}{\Delta}\,(\cos\Delta - 1)}
\tag{2}
\]

and the amplitude A of the interpolated signal is
\[
A^2 = \left[1 + \frac{\delta}{\Delta}(\cos\Delta - 1)\right]^2
    + \left[\frac{\delta}{\Delta}\sin\Delta\right]^2
    = 1 + 2\,\frac{\delta}{\Delta}\left(1 - \frac{\delta}{\Delta}\right)(\cos\Delta - 1).
\tag{3}
\]
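These expressions can be checked numerically; the short sketch below constructs the interpolated signal from Equation (1) and fits a sinusoid to it (Python/NumPy; Δ is the sample interval, δ the offset, and the particular values are illustrative):

```python
import numpy as np

Delta = 2 * np.pi / 3          # coarse sampling: 3 samples per cycle
delta = 0.4                    # sub-sample offset, in the same units as x
eta = delta / Delta

# Phase shift and amplitude predicted by Equations (2) and (3).
tan_phi = eta * np.sin(Delta) / (1 + eta * (np.cos(Delta) - 1))
A_pred = np.sqrt(1 + 2 * eta * (1 - eta) * (np.cos(Delta) - 1))

# Direct construction of the interpolated points via Equation (1),
# followed by a least-squares fit of c_s*sin(x) + c_c*cos(x).
xi = np.arange(200) * Delta
yn = np.sin(xi) + eta * (np.sin(xi + Delta) - np.sin(xi))
basis = np.column_stack([np.sin(xi), np.cos(xi)])
(c_s, c_c), *_ = np.linalg.lstsq(basis, yn, rcond=None)
print(np.hypot(c_s, c_c), A_pred)                # amplitudes agree
print(np.arctan2(c_c, c_s), np.arctan(tan_phi))  # phase shifts agree
```

The fitted coefficients reproduce the predicted amplitude reduction and phase shift to machine precision, confirming the derivation above.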
These effects can be clearly seen in Figure 2. A coarsely sampled sine wave,
shown by black crosses, is linearly interpolated (thick black line). From this
interpolation two other signals are generated, one with points sampled Δ/4 before
the original data points and one sampled the same distance after. Sine waves are then
fitted to these resampled points. If the interpolation method were perfect these sine
waves would match the original signal in both phase and amplitude. The change in
both is easy to see. The fitted sine waves have reduced amplitude and a significant
phase change, comparable in size to the distance the points were shifted. Note

Figure 3. Absolute error in LCT shift measurements for different interpolation methods.

that the phase error introduced by the linear interpolation moves the phase of the
resampled signal in the same direction as the interpolated point is moved from its
nearest original data point. This property is shared by most interpolation methods,
as will be seen later.
The amplitude error is symmetric on either side of a data point.
This means that the method of shifting both subimages by equal distances in op-
posite directions, as described earlier, does exactly cancel this error.
The error in phase, which is the most important parameter when using LCT,
cannot be cancelled out like this. These errors can also be quite large, sometimes
substantially larger than the shift applied to the data, making linearly interpolated
LCT increasingly inaccurate for small shifts.
In Figure 3 the absolute error in the shift obtained from using LCT with a variety
of different interpolation methods is shown. The signal tested was a sine wave with
a frequency of around 2/3 of the Nyquist frequency for the samples, giving around
3 samples per cycle. LCT was carried out as described in Section 2.2. As can be
seen the errors are periodic, with a period of two sample intervals. This is due to
the shift being applied in equal and opposite directions to both data sets. When the
shift is less than one sample interval, LCT always overestimates the magnitude of
the shift, sometimes by a considerable amount.
A much more useful measure of the accuracy is the relative error in the shift.
This is shown in Figure 4, where the ratio of the measured shift to the true shift
is shown. Here the drastic extent of the errors becomes apparent. For small shifts
the measured shift can easily be several times the true value. Note also that using
piecewise cubic interpolation (which uses the nearest two neighbours on each side
of the interpolated point) gives far worse results than simple linear interpolation,
as well as being far slower. The reason for this is clear: as there are only three
sample points per cycle, a cubic polynomial fitted across four data points will not
match the function well. This is important, as the high-frequency modes near to the
sampling frequency give the highest sensitivity when measuring small shifts.
The method that gives by far the best results, with a maximum error of less
than 1%, is Fourier interpolation. This method is discussed in more detail in the
following section.
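The overestimation for small shifts is easy to reproduce in one dimension. The sketch below applies the symmetric equal-and-opposite shifting of Section 2.2 with linear interpolation to a near-Nyquist sine wave (Python/NumPy; all values are illustrative):

```python
import numpy as np

k = 2 * np.pi / 3.0                     # about 3 samples per cycle
x = np.arange(60, dtype=float)
true_shift = 0.1                        # true displacement, in samples
frame1 = np.sin(k * x)
frame2 = np.sin(k * (x - true_shift))   # the pattern has moved by +0.1 samples

def shift_linear(y, delta):
    """Resample y at positions x + delta by linear interpolation."""
    xp = np.arange(len(y), dtype=float)
    return np.interp(xp + delta, xp, y)

def lct_shift_1d(f1, f2, candidates):
    """Return the candidate shift d that best matches f1 shifted by -d/2
    against f2 shifted by +d/2 (interior points only, to avoid edges)."""
    def cost(d):
        a = shift_linear(f1, -d / 2)[2:-2]
        b = shift_linear(f2, +d / 2)[2:-2]
        return np.sum((a - b) ** 2)
    return min(candidates, key=cost)

measured = lct_shift_1d(frame1, frame2, np.arange(0, 0.5, 0.005))
print(measured, true_shift)  # the measured shift is substantially too large
```

The phase error of Equation (2) shrinks the effective shift applied by each interpolation, so the correlation optimum lands at a candidate shift roughly twice the true value, consistent with Figure 4.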

Figure 4. Error in LCT shift measurements, expressed as a proportion of the true shift value. Fourier
interpolation gives the only acceptable results for small shifts.

Figure 5. Examples of the performance of different interpolation systems. The lower plot is a dupli-
cate of the upper plot, but with a magnified y scale to show the small errors introduced by Fourier
interpolation.

4. Fourier Interpolation

Fourier interpolation is a method that relies on specific assumptions about the spec-
tral content of the data. If the data has any aliased modes in it, typically caused by
sampling without first filtering the data correctly, LCT will fail badly. This is a fault
in the data that should be eliminated at acquisition, as it is almost impossible to
remove by post processing. Large amplitudes of random noise can also make this
method work erratically, although intelligent filtering can help this considerably.
The best way to filter is to consider the spatial scale of the features that are used
for tracking, and filter all but these modes. The sensitivity of LCT to detect shifts
in a feature of a given amplitude is inversely proportional to the scale length of the
feature. Large-scale features do not vary sufficiently over the area of the subimages
to contribute to the correlation signal. Short wavelength features that are not to
be tracked, such as random noise, must be filtered out as they will dominate the
correlation calculation for small shifts. The easiest and fastest filter for this
purpose is a Fourier filter, in which all modes outside the required range of
wavelengths can be rejected in Fourier space.
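Such a band-pass rejection step might look like the following sketch (Python/NumPy; the band limits and test wavelengths are illustrative parameters, not values from the original study):

```python
import numpy as np

def bandpass_fourier(img, k_lo, k_hi):
    """Zero every Fourier mode whose spatial frequency (in cycles per pixel)
    falls outside [k_lo, k_hi]; a sketch of the filtering described above."""
    F = np.fft.fft2(img)
    ky = np.fft.fftfreq(img.shape[0])[:, None]
    kx = np.fft.fftfreq(img.shape[1])[None, :]
    k = np.hypot(kx, ky)
    return np.fft.ifft2(F * ((k >= k_lo) & (k <= k_hi))).real

# Example: keep a granule-scale mode, reject short-wavelength contamination.
yy, xx = np.mgrid[0:64, 0:64]
granules = np.sin(2 * np.pi * xx / 16)           # wavelength 16 pixels
noise = 0.5 * np.sin(2 * np.pi * (xx + yy) / 4)  # wavelength ~2.8 pixels
filtered = bandpass_fourier(granules + noise, 0.04, 0.1)
```

On this periodic test image the short-wavelength component is removed exactly, leaving only the feature scale to be tracked.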

4.1. METHOD FOR FOURIER INTERPOLATION

Consider an image, a two-dimensional data grid f (x, y). Before Fourier transform-
ing the data it is important to apply a window function w(x, y) in order to ensure
that artifacts caused by the discontinuity between the opposite sides of the image
do not propagate far into the data. A two-dimensional Welch window works well:
\[
w(x, y) = \left[1 - \left(\frac{x - \tfrac{1}{2}N_x}{\tfrac{1}{2}N_x}\right)^{2}\right]
          \left[1 - \left(\frac{y - \tfrac{1}{2}N_y}{\tfrac{1}{2}N_y}\right)^{2}\right],
\tag{4}
\]

where Nx and Ny are the dimensions of the image data. We then take the 2D Fourier
transform of the product of the image and the window function:

\[
F(u, v) = \frac{1}{NM} \sum_{x=0}^{M-1} \sum_{y=0}^{N-1}
          w(x, y)\, f(x, y)\, e^{-i 2\pi \left(\frac{ux}{M} + \frac{vy}{N}\right)}.
\tag{5}
\]

To implement a shift of (δx, δy) we just modify the phase part of the transform, to
give us the Fourier transform of the shifted data, F_s:

\[
F_s(u, v) = F(u, v)\, e^{\,i \left(k_x(u,v)\,\delta x + k_y(u,v)\,\delta y\right)},
\tag{6}
\]
where k_x(u, v) and k_y(u, v) are the wave numbers of the Fourier component
F(u, v). For the first quadrant of the transformed data, where u < M/2 and
v < N/2, the wave numbers are k_x = 2π u/M and k_y = 2π v/N. The wave
numbers for the other quadrants are obtained by reflection of these values about
the u = M/2 and v = N/2 columns. This all assumes that the data indices start at
(0, 0).

Figure 6. Synthetic granulation. Left: the velocity field (maximum velocity 0.6 pixels per frame).
Right: section of a single data frame, showing the granules that are advected by the velocity field.

The interpolated, windowed data, f_sw, is then generated by taking the inverse
transform of F_s(u, v). The data then need to be divided by the windowing function
to renormalise them. As the data points are now at (x + δx, y + δy), a new
windowing function must be generated at these points:
\[
w_s(x + \delta x, y + \delta y) =
\left[1 - \left(\frac{x + \delta x - \tfrac{1}{2}N_x}{\tfrac{1}{2}N_x}\right)^{2}\right]
\left[1 - \left(\frac{y + \delta y - \tfrac{1}{2}N_y}{\tfrac{1}{2}N_y}\right)^{2}\right].
\tag{7}
\]

The final, interpolated data set f_s(x + δx, y + δy) is obtained by dividing the
inverse transform result by the shifted window:

\[
f_s(x + \delta x, y + \delta y) = \frac{f_{sw}}{w_s}.
\tag{8}
\]
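The steps of Equations (4)–(8) can be combined into a short sketch (Python/NumPy; variable names are illustrative, and np.fft.fftfreq supplies the signed wave numbers that the quadrant-reflection rule describes):

```python
import numpy as np

def welch_window(shape, dx=0.0, dy=0.0):
    """Welch window of Equation (4); with dx, dy nonzero it is evaluated at
    the shifted points (x + dx, y + dy), as in Equation (7)."""
    Ny, Nx = shape
    wy = 1 - ((np.arange(Ny)[:, None] + dy - Ny / 2) / (Ny / 2)) ** 2
    wx = 1 - ((np.arange(Nx)[None, :] + dx - Nx / 2) / (Nx / 2)) ** 2
    return wy * wx

def fourier_shift(img, dx, dy):
    """Sub-pixel rigid shift: window, transform, apply the phase ramp of
    Equation (6), inverse transform, then renormalise by the shifted window
    (Equation (8)). A sketch, not the authors' original code."""
    Ny, Nx = img.shape
    F = np.fft.fft2(img * welch_window(img.shape))
    ky = 2 * np.pi * np.fft.fftfreq(Ny)[:, None]   # signed wave numbers,
    kx = 2 * np.pi * np.fft.fftfreq(Nx)[None, :]   # per the reflection rule
    fsw = np.fft.ifft2(F * np.exp(1j * (kx * dx + ky * dy))).real
    return fsw / welch_window(img.shape, dx, dy)

# Example: shift a smooth test image by half a pixel in x, a quarter in y.
yy, xx = np.mgrid[0:64, 0:64]
img = np.sin(2 * np.pi * xx / 16) * np.cos(2 * np.pi * yy / 16)
shifted = fourier_shift(img, 0.5, 0.25)
```

Away from the image edges the result matches the analytically shifted signal closely; the residual edge errors decay rapidly into the interior, as shown in Figure 5.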
Figure 5 compares the results of different, common types of interpolation. A
high-resolution set of bandwidth-limited data was generated, shown by the solid
line. This data was then sampled, shown by the crosses, such that the smallest
wavelengths contained in the data were close to the Nyquist frequency. These
sample points were then interpolated to try to reconstruct the original data. The
image shows the first 15 interpolated data points from a large data set, where point
0 is the edge point of the original data. The errors caused by the edge discontinuity
for the Fourier transform method are clear for the first few points, but rapidly
decay, beyond which the Fourier shift gives almost perfect results. The phase
errors caused by the cubic spline method, the best of the conventional interpolants,
are clear between points 9 and 13.

5. Testing on Simulated MDI Data

To show the effects of interpolation errors in a more realistic context, LCT using
linear and Fourier interpolation was carried out on some simulated granulation
data, designed to have similar properties to filtered MDI continuum data. Data

Figure 7. Velocity distribution obtained from using LCT on simulated granulation data, showing the
effect of different interpolation methods.

representing 6 hours (360 frames) of MDI high-resolution continuum data was
generated. Each frame of the data was 256 × 256 pixels, with each pixel
representing 0.6 arc sec. A brief outline of the method for generating the data
follows, and
a detailed description may be found in Potts, Barrett, and Diver (2003). The data
was generated by making a uniformly distributed array of cells, each representing
a granule. Each cell is allocated a lifetime and size, which varies over the lifetime
of the cell. There are two components to the cell motion. The cells are advected
by a specified underlying velocity field, shown in Figure 6. They also experience a
repulsive force from all the surrounding cells, resulting in a random walk as cells
appear and disappear. The random walk velocity is typically 2–3 times larger than
the advection component. The properties of the underlying velocity field and the
distribution of sizes and lifetimes are chosen to match those of real granulation. We
used the statistical data found from the SOUP instrument as a basis (Title et al.,
1989), although we have chosen a larger underlying velocity field (approx. 1.5
times larger), as this shows the distortion effects more clearly. Part of a frame of
the generated data is shown in the right-hand image in Figure 6.
The motion was then tracked using LCT with subimages of 9 × 9 pixels, with
the cell centres separated by 4 pixels in the x and y directions. Each correlation
calculation was weighted with a Gaussian of FWHM 5 pixels, centred on the
cell. The LCT calculation was performed using Fourier and linear interpolation.
Both interpolation methods obtained results that recovered the direction of the flow
field with reasonable accuracy, but the difference between the results is clear when
the distribution of the recovered velocities is examined. In Figure 7 the distribution
function for the absolute values of the recovered velocities is compared with the
known underlying velocity field. Fourier interpolation recovers the velocity distri-
bution with good accuracy, and small random errors, but for linear interpolation
substantial systematic errors are clear. The result of the errors is to reduce the
population of low velocity areas, creating an excess of larger velocities. As the
recovered velocities are the result of a temporal average of many fast random
walk velocities at each point, the shape of this distortion is hard to predict from
the known errors in the interpolation step.

6. Conclusions

We have demonstrated that care needs to be taken when using local correlation
tracking to analyse small amplitude velocity fields. Large distortions can occur in
the velocity profiles recovered where sub-pixel interpolation is used. These errors
are caused by using an inappropriate interpolation method. The only interpolation
method we have found to be safe is Fourier interpolation, and when this method is
used LCT faithfully reproduces the correct velocity field.

Acknowledgement

This work was funded in the UK at Glasgow University by PPARC rolling grant
number PPA/G/0/2001/00472.

References

De Rosa, M., Duvall, T. L., and Toomre, J.: 2000, Solar Phys. 192, 351.
Lisle, J., De Rosa, M., and Toomre, J.: 2000, Solar Phys. 197, 21.
Hurlburt, N. E., Schrijver, C., Shine, R., and Title, A. M.: 1995, Proceedings of the 4th SOHO
Workshop, p. 239.
Krijger, J. M., Roudier, T., and Rieutord, M.: 2002, Astron. Astrophys. 387, 672.
November, L. J. and Simon, G. W.: 1988, Astrophys. J. 333, 427.
Potts, H. E., Barrett, R. K., and Diver, D. A.: 2003, submitted.
Shine, R. A., Simon, G. W., and Hurlburt, N. E.: 2000, Solar Phys. 193, 313.
Title, A. M., Tarbell, T. D., Topka, K. P., Ferguson, S. H., and Shine, R. A.: 1989, Astrophys. J. 336,
475.
