
MATH 590: Meshfree Methods

Chapter 15: Refined and Improved Error Bounds

Greg Fasshauer

Department of Applied Mathematics


Illinois Institute of Technology

Fall 2010

fasshauer@iit.edu MATH 590 Chapter 15 1


Outline

1 Native Space Error Bounds for Specific Basis Functions

2 Improvements for Native Space Error Bounds

3 Error Bounds for Functions Outside the Native Space

4 Error Bounds for Stationary Approximation

5 Convergence With Respect to the Shape Parameter

6 Dimension-Independent Error Bounds

7 Polynomial Interpolation as the Limit of RBF Interpolation



Native Space Error Bounds for Specific Basis Functions

For the first part of this chapter we discuss the non-stationary approximation setting.

The additional refinement of the error estimate of the approximation theorem from Chapter 14 for specific kernels $K$ is rather technical (for details see, e.g., the book [Wendland (2005a)]).

A large body of literature exists on this topic, such as, e.g.,
[Buhmann and Dyn (1991), Light (1996), Light and Wayne (1995),
Light and Wayne (1998), Madych (1992),
Madych and Nelson (1992), Narcowich and Ward (2004),
Narcowich et al. (2003), Narcowich et al. (2005),
Schaback (1995a), Schaback (1996), Wendland (1998),
Wendland (1997), Wu and Schaback (1993), Yoon (2003)].
We now list some of the results that can be obtained.



Native Space Error Bounds for Specific Basis Functions: Infinitely Smooth Basis Functions

As mentioned before, an application of the estimate

$$|D^\alpha f(x) - D^\alpha \mathcal{P}f(x)| \le C\, h_{X,\Omega}^{k-|\alpha|} \sqrt{C_K(x)}\; |f|_{\mathcal{N}_K(\Omega)},$$

for interpolation on bounded domains $\Omega \subseteq \mathbb{R}^s$ from Chapter 14 to infinitely smooth functions such as Gaussians or generalized (inverse) multiquadrics immediately yields arbitrarily high algebraic convergence rates, i.e., for every $\ell \in \mathbb{N}$ and $|\alpha| \le \ell$ we have

$$|D^\alpha f(x) - D^\alpha \mathcal{P}f(x)| \le C_\ell\, h_{X,\Omega}^{\ell-|\alpha|}\, |f|_{\mathcal{N}_K(\Omega)} \qquad (1)$$

whenever $f \in \mathcal{N}_K(\Omega)$.

A considerable amount of work has gone into investigating the dependence of the constant $C_\ell$ on $\ell$ (see, e.g., [Wendland (2001)] or the recent work in [Luh (2009)]).

In general, the constant $C_\ell$ even depends on the space dimension $s$, in essence allowing high-order approximation only for small values of $s$ (see, e.g., [Fasshauer et al. (2010), Luh (2009)]).

Using different proof techniques (see [Madych and Nelson (1988)] or [Wendland (2005a)] for details) it is possible to derive more precise error bounds for Gaussians and (inverse) multiquadrics (whose constants still depend on $s$). We quote from [Wendland (2005a)]:

Theorem
Let $\Omega$ be a cube in $\mathbb{R}^s$. Suppose that $\Phi = \varphi(\|\cdot\|)$ is a strictly conditionally positive definite radial function such that $\phi = \varphi(\sqrt{\cdot})$ satisfies $|\phi^{(\ell)}(r)| \le \ell!\, M^\ell$ for all integers $\ell \ge \ell_0$ and all $r \ge 0$, where $M$ is a fixed positive constant.
Then there exists a constant $c$ such that for any $f \in \mathcal{N}_\Phi(\Omega)$

$$\|f - \mathcal{P}f\|_{L_\infty(\Omega)} \le e^{-c/h_{X,\Omega}}\, |f|_{\mathcal{N}_\Phi(\Omega)}, \qquad (2)$$

for all data sites $X$ with sufficiently small fill distance $h_{X,\Omega}$.

If $\phi$ satisfies even $|\phi^{(\ell)}(r)| \le M^\ell$, then

$$\|f - \mathcal{P}f\|_{L_\infty(\Omega)} \le e^{-c\,|\log h_{X,\Omega}|/h_{X,\Omega}}\, \|f\|_{\mathcal{N}_\Phi(\Omega)}. \qquad (3)$$

Example

For Gaussians
$$\Phi(x) = e^{-\varepsilon^2\|x\|^2}, \qquad \varepsilon > 0 \text{ fixed},$$
we have $\phi(r) = e^{-\varepsilon^2 r}$, so that
$$\phi^{(\ell)}(r) = (-1)^\ell\, \varepsilon^{2\ell}\, e^{-\varepsilon^2 r} \qquad \text{for } \ell \ge \ell_0 = 0.$$

Thus, $M = \varepsilon^2$, and the error bound corresponding to (3) applies.

This kind of exponential approximation order is usually referred to as spectral (or even super-spectral) approximation order.

Remark
We emphasize that this nice property holds only in the non-stationary setting and for data functions $f$ that are in the native space of the Gaussians, such as band-limited functions.
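This spectral behavior is easy to observe numerically. The following sketch (our own illustration, not from the text; the test function $f(x) = \sin(2\pi x)$, the fixed shape parameter $\varepsilon = 3$, and the small elimination helper are arbitrary choices) interpolates $f$ on increasingly dense point sets in the non-stationary setting, i.e., with $\varepsilon$ held fixed as the fill distance shrinks:

```python
import math

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def gaussian_interp_error(n, eps=3.0):
    """Interpolate f at n equally spaced points on [0,1] with the
    Gaussian kernel exp(-(eps*r)^2), eps fixed (non-stationary), and
    return the maximum error on a fine evaluation grid."""
    f = lambda x: math.sin(2 * math.pi * x)
    xs = [i / (n - 1) for i in range(n)]
    A = [[math.exp(-(eps * (xi - xj)) ** 2) for xj in xs] for xi in xs]
    c = solve(A, [f(x) for x in xs])
    return max(abs(sum(cj * math.exp(-(eps * (y - xj)) ** 2)
                       for cj, xj in zip(c, xs)) - f(y))
               for y in [k / 500 for k in range(501)])

errors = [gaussian_interp_error(n) for n in (3, 5, 9)]
# the error drops rapidly as the fill distance h = 1/(n-1) decreases
```

For much denser point sets the interpolation matrix becomes severely ill-conditioned in double precision, which is why the sketch stops at a modest number of points.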


Example

For generalized (inverse) multiquadrics
$$\Phi(x) = (1 + \|x\|^2)^{\beta}, \qquad \beta < 0 \ \text{ or } \ \beta > 0,\ \beta \notin \mathbb{N},$$
we have $\phi(r) = (1+r)^{\beta}$. In this case one can show that
$$|\phi^{(\ell)}(r)| \le \ell!\, M^\ell \qquad \text{whenever } \ell \ge \lceil \beta \rceil.$$

Here $M = 1 + |\beta + 1|$. Therefore, the error estimate corresponding to (2) applies, i.e., in the non-stationary setting generalized (inverse) multiquadrics have spectral approximation order.


Example

For Laguerre-Gaussians
$$\Phi(x) = L_n^{s/2}(\varepsilon^2\|x\|^2)\, e^{-\varepsilon^2\|x\|^2}, \qquad \varepsilon > 0 \text{ fixed},$$
we have
$$\phi(r) = L_n^{s/2}(\varepsilon^2 r)\, e^{-\varepsilon^2 r}$$
and the derivatives $\phi^{(\ell)}$ will be bounded by $|\phi^{(\ell)}(0)| = p_n(\ell)\, \varepsilon^{2\ell}$, where $p_n$ is a polynomial of degree $n$.
Thus, the approximation power of Laguerre-Gaussians falls between (3) and (2), and Laguerre-Gaussians have at least spectral approximation power.



Native Space Error Bounds for Specific Basis Functions: Basis Functions with Finite Smoothness

For functions with finite smoothness such as the
Matérn functions,
radial powers,
thin plate splines, and
Wendland's compactly supported functions
it is possible to bound the constant $C_K(x)$ by some additional powers of $h$, and thereby to improve the estimate

$$|D^\alpha f(x) - D^\alpha \mathcal{P}f(x)| \le C\, h_{X,\Omega}^{k-|\alpha|} \sqrt{C_\Phi(x)}\; |f|_{\mathcal{N}_\Phi(\Omega)}$$

discussed in Chapter 14.


In particular, for basis functions $\Phi \in C^{2k}$ the factor $C_\Phi(x)$ can be expressed as

$$C_\Phi(x) = \max_{|\beta| = 2k} \|D^\beta \Phi\|_{L_\infty(B(0,\,2c\,h_{X,\Omega}))}$$

independent of $x$ (see [Wendland (2005a)]).

Therefore, this results in the following error estimates (see, e.g., [Wendland (2005a)], or the much earlier [Wu and Schaback (1993)], where other proof techniques were used).


Example

For the Matérn functions
$$\Phi(x) = \frac{K_{\beta - s/2}(\|x\|)\, \|x\|^{\beta - s/2}}{2^{\beta - 1}\, \Gamma(\beta)}, \qquad \beta > \frac{s}{2},$$
we get for bounded domains $\Omega$

$$|D^\alpha f(x) - D^\alpha \mathcal{P}f(x)| \le C\, h_{X,\Omega}^{\beta - s/2 - |\alpha|}\, |f|_{\mathcal{N}_\Phi(\Omega)} \qquad (4)$$

provided $|\alpha| \le \beta - \lceil (s+1)/2 \rceil$, $h_{X,\Omega}$ is sufficiently small, and $f \in \mathcal{N}_\Phi(\Omega)$.


Example

For the compactly supported Wendland functions
$$\Phi_{s,k}(x) = \varphi_{s,k}(\|x\|)$$
this first refinement leads to

$$|D^\alpha f(x) - D^\alpha \mathcal{P}f(x)| \le C\, h_{X,\Omega}^{k + 1/2 - |\alpha|}\, \|f\|_{\mathcal{N}_\Phi(\Omega)} \qquad (5)$$

provided $\Omega$ is a bounded domain, $|\alpha| \le k$, $h_{X,\Omega}$ is sufficiently small, and $f \in \mathcal{N}_\Phi(\Omega)$.


Example

For the radial powers
$$\Phi(x) = (-1)^{\lceil \beta/2 \rceil}\, \|x\|^{\beta}, \qquad 0 < \beta \notin 2\mathbb{N},$$
we get

$$|D^\alpha f(x) - D^\alpha \mathcal{P}f(x)| \le C\, h_{X,\Omega}^{\beta/2 - |\alpha|}\, |f|_{\mathcal{N}_\Phi(\Omega)} \qquad (6)$$

provided $\Omega$ is a bounded domain, $|\alpha| \le \frac{\lceil \beta \rceil - 1}{2}$, $h_{X,\Omega}$ is sufficiently small, and $f \in \mathcal{N}_\Phi(\Omega)$.


Example

For thin plate splines
$$\Phi(x) = (-1)^{k+1}\, \|x\|^{2k} \log\|x\|,$$
we get
$$|D^\alpha f(x) - D^\alpha \mathcal{P}f(x)| \le C\, h_{X,\Omega}^{k - |\alpha|}\, |f|_{\mathcal{N}_\Phi(\Omega)} \qquad (7)$$
provided $\Omega$ is a bounded domain, $|\alpha| \le k - 1$, $h_{X,\Omega}$ is sufficiently small, and $f \in \mathcal{N}_\Phi(\Omega)$.



Improvements for Native Space Error Bounds

Radial powers and thin plate splines can be interpreted as a generalization of univariate natural splines.

Therefore, we know that the approximation order estimates obtained via the native space approach are not optimal.
Example
For interpolation with univariate piecewise linear splines on a bounded interval we know the approximation order to be $\mathcal{O}(h)$, whereas using
$$\varphi(x) = |x|, \qquad x \in \mathbb{R},$$
the estimate (6) yields only approximation order $\mathcal{O}(h^{1/2})$.
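This gap is easy to see numerically. In one dimension any linear combination of the translates $|x - x_j|$ is a broken-line function with kinks only at the data sites, so on the data interval the $\varphi(r) = r$ interpolant coincides with the piecewise linear interpolant. The sketch below (our own illustration; the test function and point counts are arbitrary choices) halves $h$ and watches the error shrink far faster than the $h^{1/2}$ bound suggests:

```python
import math

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def abs_rbf_error(n):
    """Max error of interpolation with phi(r) = r at n equally spaced
    points on [0,1]; the 1d distance matrix is invertible."""
    f = lambda x: math.sin(2 * math.pi * x)
    xs = [i / (n - 1) for i in range(n)]
    A = [[abs(xi - xj) for xj in xs] for xi in xs]
    c = solve(A, [f(x) for x in xs])
    return max(abs(sum(cj * abs(y - xj) for cj, xj in zip(c, xs)) - f(y))
               for y in [k / 500 for k in range(501)])

e9, e17 = abs_rbf_error(9), abs_rbf_error(17)
# halving h cuts the error by roughly a factor of four for this smooth f
```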


Example
Similarly, for thin plate splines
$$\Phi(x) = \|x\|^2 \log\|x\|$$
one would expect order $\mathcal{O}(h^2)$ in the case of pure function approximation, i.e., $|\alpha| = 0$.
However, the estimate (7) yields only $\mathcal{O}(h)$.

Remark
These two examples suggest that it should be possible to double the
approximation orders obtained thus far.


One can improve the estimates for functions with finite smoothness
(i.e., Matérn functions, Wendland functions, radial powers, and thin
plate splines) by either (or both) of the following two ideas:
by requiring the data function f to be even smoother than what the
native space prescribes, i.e., by building certain boundary
conditions into the native space;
by using weaker norms to measure the error.


The idea to localize the data by adding boundary conditions was introduced in the paper [Light and Wayne (1998)].

This trick allows us to square the approximation order, and thus reach the expected approximation orders.

The second idea to weaken the norm is rather obvious and can already be found in the early paper [Duchon (1978)].


Example
After applying both of these techniques the final approximation order estimate for interpolation with the compactly supported functions $\Phi_{s,k}$ on a bounded domain $\Omega$ is (see [Wendland (1997)])

$$\|f - \mathcal{P}f\|_{L_2(\Omega)} \le C\, h_{X,\Omega}^{2k+s+1}\, \|f\|_{W_2^{2k+s+1}(\mathbb{R}^s)}, \qquad (8)$$

where we assume $f \in W_2^{2k+s+1}(\mathbb{R}^s) \subseteq W_2^{k+(s+1)/2}(\mathbb{R}^s) = \mathcal{N}_\Phi(\mathbb{R}^s)$.
For example, for the popular basic function
$$\varphi_{3,1}(r) = (1-r)_+^4 (4r+1)$$
we have with $s = 3$ and $k = 1$

$$\|f - \mathcal{P}f\|_{L_2(\Omega)} \le C\, h_{X,\Omega}^{6}\, \|f\|_{W_2^6(\mathbb{R}^3)}.$$

Note that the numerical experiments in Chapter 12 produced RMS-convergence rates only as high as 4.5.

Example (cont.)
The $L_\infty$ error for Wendland functions $\Phi_{s,k}$ (without the $L_2$ relaxation) is

$$\|f - \mathcal{P}f\|_{L_\infty(\Omega)} \le C\, h_{X,\Omega}^{2k+1+s/2}\, \|f\|_{W_2^{2k+s+1}(\mathbb{R}^s)},$$

where $f$ needs to be such that it not only lies in $\mathcal{N}_\Phi(\Omega) = W_2^{k+(s+1)/2}(\Omega)$, but also has an extension to the Sobolev space $W_2^{2k+s+1}(\mathbb{R}^s)$ that can be written as a convolution of an appropriate $L_2$ function supported in $\Omega$ with the basic function $\Phi_{s,k}$.
For the specific example of $\varphi_{3,1}(r) = (1-r)_+^4(4r+1)$ on $\mathbb{R}^3$ we get

$$|f(x) - \mathcal{P}f(x)| \le C\, h_{X,\Omega}^{4.5}\, \|f\|_{W_2^6(\mathbb{R}^3)},$$

which is more in line with the numerical evidence obtained earlier (see [Wendland (1997)] for details).


Example
For radial powers one obtains $L_2$-error estimates of order $\mathcal{O}(h^{\beta+s})$.
For thin plate splines one obtains $L_2$-error estimates of order $\mathcal{O}(h^{2k+s})$.

These estimates are optimal, i.e., exact approximation orders, as shown in [Bejancu (1999)].

Remark
More work on improved error bounds can be found in, e.g., [Johnson (2004)] or [Schaback (1999)].

Remark
Since for the examples of piecewise linear splines and thin plate splines discussed above our kernels (for interpolation on bounded domains) were given by full-space Green's functions, another possibility to obtain the full approximation order on bounded domains may be to instead use a Green's function for the bounded domain with appropriate boundary conditions added.



Error Bounds for Functions Outside the Native Space

The error bounds mentioned so far were all valid under the assumption
that the function f providing the data came from (a subspace of) the
native space of the RBF employed in the interpolation.
We now mention a few recent results that provide error bounds for
interpolation of functions f not in the native space of the basic function.
In particular, the case when f lies in some Sobolev space that is larger
than the native space is of great interest.
A rather general theorem (sometimes referred to as a sampling
inequality) was recently given in [Narcowich et al. (2005)].
In this theorem Narcowich, Ward and Wendland provide Sobolev
bounds for functions with many zeros. However, since the interpolation
error function is just such a function, these bounds have a direct
application to our situation.
We point out that this theorem again applies to the non-stationary
setting.

Theorem
Let $k$ be a positive integer, $0 < \sigma \le 1$, $1 \le p < \infty$, $1 \le q \le \infty$, and let $\alpha$ be a multi-index satisfying $k > |\alpha| + s/p$ or, for $p = 1$, $k \ge |\alpha| + s$. Let $X \subseteq \Omega$ be a discrete set with fill distance $h = h_{X,\Omega}$, where $\Omega$ is a compact set with Lipschitz boundary which satisfies an interior cone condition. If $u \in W_p^{k+\sigma}(\Omega)$ satisfies $u|_X = 0$, then

$$|u|_{W_q^{|\alpha|}(\Omega)} \le c\, h^{k+\sigma-|\alpha|-s(1/p-1/q)_+}\, |u|_{W_p^{k+\sigma}(\Omega)},$$

where $c$ is a constant independent of $u$ and $h$, and $(x)_+ = \max\{x, 0\}$ is the cutoff function.

Here $|\cdot|_{W_q^{|\alpha|}(\Omega)}$ is actually a norm for $\alpha = 0$, but only a semi-norm in general.

Suppose we have an interpolation process
$$\mathcal{P} : W_p^{k+\sigma}(\Omega) \to V$$
that maps Sobolev functions to a finite-dimensional subspace $V$ of $W_p^{k+\sigma}(\Omega)$ with the additional property
$$|\mathcal{P}f|_{W_p^{k+\sigma}(\Omega)} \le |f|_{W_p^{k+\sigma}(\Omega)};$$
then the previous theorem immediately yields the error estimate

$$|f - \mathcal{P}f|_{W_q^{|\alpha|}(\Omega)} \le c\, h^{k+\sigma-|\alpha|-s(1/p-1/q)_+}\, |f|_{W_p^{k+\sigma}(\Omega)}.$$

The additional property $|\mathcal{P}f|_{W_p^{k+\sigma}(\Omega)} \le |f|_{W_p^{k+\sigma}(\Omega)}$ is certainly satisfied provided the native space of the basic function is a Sobolev space.


Thus, the use of the sampling inequality provides an alternative to the power function approach discussed in the previous chapter if we base $\mathcal{P}$ on linear combinations of shifts of the basic function $\Phi$.

This new approach has the advantage that the term $C_\Phi(x)$, which may depend on both $\Phi$ and $X$, no longer needs to be dealt with.


In particular, the authors of [Narcowich et al. (2005)] show that if the Fourier transform of $\Phi$ satisfies

$$c_1\, (1 + \|\omega\|_2^2)^{-\tau} \le \widehat{\Phi}(\omega) \le c_2\, (1 + \|\omega\|_2^2)^{-\tau}, \qquad \omega \in \mathbb{R}^s, \qquad (9)$$

then the estimate

$$|u|_{W_q^{|\alpha|}(\Omega)} \le c\, h^{\tau - |\alpha| - s(1/2 - 1/q)_+}\, |u|_{W_2^{\tau}(\Omega)}$$

holds provided the fill distance is sufficiently small.


Example
Examples of basic functions with an appropriately decaying Fourier transform are provided by the families of
Wendland, or
Matérn functions.

Analogous error bounds also hold for
radial powers and
thin plate splines,
since their native spaces are Beppo-Levi spaces.


For functions $f$ outside the native space of a basic function whose Fourier transform satisfies (9), Narcowich, Ward and Wendland prove:

Theorem
Let $k$ and $n$ be integers with $0 \le n < k$ and $k > s/2$, and let $f \in C^k(\Omega)$. Also suppose that $X = \{x_1, \ldots, x_N\} \subseteq \Omega$ satisfies $\operatorname{diam}(X) \le 1$ with sufficiently small fill distance. Then for any $1 \le q \le \infty$ we have

$$|f - \mathcal{P}f|_{W_q^n(\Omega)} \le c\, \rho_X^{k}\, h^{k - n - s(1/2 - 1/q)_+}\, \|f\|_{C^k(\Omega)},$$

where $\rho_X = \frac{h}{q_X}$ is the mesh ratio for $X$ and $q_X = \frac{1}{2}\min_{i \ne j} \|x_i - x_j\|_2$ is the separation distance.


Recall that
the fill distance corresponds to the radius of the largest possible empty ball that can be placed between the points in $X$, while
the separation distance (cf. Chapter 16), on the other hand, can be interpreted as the radius of the largest ball that can be placed around every point in $X$ such that no two balls overlap. Thus,
the mesh ratio is a measure of the non-uniformity of the distribution of the points in $X$.
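For concreteness, the three quantities just recalled can be computed directly for a small one-dimensional point set (a sketch of our own; the fill distance is approximated by brute force over a fine grid of candidate points in $\Omega = [0,1]$):

```python
X = [0.0, 0.25, 0.5, 0.75, 1.0]          # data sites in Omega = [0, 1]

# separation distance: half the smallest pairwise distance
q_X = 0.5 * min(abs(xi - xj) for i, xi in enumerate(X) for xj in X[:i])

# fill distance: radius of the largest data-free ball, approximated by
# maximizing the distance to X over a fine grid of points in Omega
h_X = max(min(abs(y - xj) for xj in X)
          for y in [k / 1000 for k in range(1001)])

rho_X = h_X / q_X                         # mesh ratio
# for these equally spaced points: q_X = h_X = 0.125, so rho_X = 1
```

For equally spaced points the mesh ratio is 1 (perfectly uniform); clustering the points drives $q_X$ down and $\rho_X$ up.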

Similar results were obtained earlier in [Brownlee and Light (2004)] (for
radial powers and thin plate splines only), and in [Yoon (2003)] (for
shifted surface splines, see below).


Example
If we consider polyharmonic splines, then the decay condition (9) for the Fourier transform is satisfied with
$\tau = k + s/2$ for thin plate splines and with
$\tau = (\beta + s)/2$ for radial powers.
If we take $k = 2\tau - s$, $n = 0$, and $q = \infty$ in the error estimate

$$|f - \mathcal{P}f|_{W_q^n(\Omega)} \le c\, \rho_X^{k}\, h^{k - n - s(1/2 - 1/q)_+}\, \|f\|_{C^k(\Omega)},$$

then we arrive,
for thin plate splines $\Phi(x) = \|x\|^{2k} \log(\|x\|)$, at the bound
$$|f - \mathcal{P}f|_{L_\infty(\Omega)} \le c\, \rho_X^{2k}\, h^{2k - s/2}\, \|f\|_{C^{2k}(\Omega)}$$
and, for radial powers $\Phi(x) = \|x\|^{\beta}$, at
$$|f - \mathcal{P}f|_{L_\infty(\Omega)} \le c\, \rho_X^{\beta}\, h^{\beta - s/2}\, \|f\|_{C^{\beta}(\Omega)}.$$

These bounds immediately match the optimal bounds that the native space approach yielded only after the improvements discussed in the previous subsection.

For data functions $f$ with less smoothness the approximation order is reduced accordingly.


Lower bounds on the approximation order for approximation by polyharmonic splines were provided in [Maiorov (2005)].
Maiorov studies, for any $1 \le p, q \le \infty$ and $\gamma/s > (1/p - 1/q)_+$, the error $E$ of $L_q$-approximation of $W_p^{\gamma}$ functions by polyharmonic splines.
More precisely,

$$E\big(W_p^{\gamma}([0,1]^s),\, \mathcal{R}_N(\varphi_\gamma, s),\, L_q([0,1]^s)\big) \ge c\, N^{-\gamma/s},$$

where $\mathcal{R}_N(\varphi_\gamma, s)$ denotes the linear space formed by all possible linear combinations of $N$ polyharmonic (or thin-plate type) splines

$$\varphi_\gamma(r) = \begin{cases} r^{2\gamma - s} & \text{if } s \text{ is odd,} \\ r^{2\gamma - s} \log r & \text{if } s \text{ is even,} \end{cases} \qquad \gamma > \frac{s}{2},$$

and multivariate polynomials of degree at most $\gamma - 1$.

Note that these bounds are in terms of the number $N$ of data sites instead of the usual fill distance $h$.
For the special cases $p = q = \infty$ and $p = 2$, $1 \le q \le 2$ the above lower bound is shown to be asymptotically exact.

Recently (see [Rieger and Zwicknagl (2010)]) the use of sampling inequalities on bounded domains was extended to the case of infinitely smooth functions.

For Gaussians and (inverse) multiquadrics the authors recover the exponential convergence rates discussed at the beginning of this chapter.

However, the results in [Rieger and Zwicknagl (2010)] also cover
approximation of derivatives,
the case of ridge regression (see Chapter 19), and
provide guidance on how to choose the smoothing parameter so that the approximation order of the interpolant is preserved under smoothing.



Error Bounds for Stationary Approximation

The stationary setting is a natural approach for use with local basis functions.
The main motivation comes from the computational point of view: we are interested in maintaining sparse interpolation matrices as the density of the data increases.
This can be achieved by scaling the basis functions proportional to the data density.
In principle we can take any of our basic functions and apply a scaling of the variable, i.e., we replace $x$ by $\varepsilon x$, $\varepsilon > 0$.
As mentioned several times earlier, this scaling results in
peaked or narrow basis functions for large values of $\varepsilon$, and
flat basis functions for $\varepsilon \to 0$.

We will now discuss what happens if we choose $\varepsilon$ inversely proportional to the fill distance, i.e.,

$$\varepsilon_h = \frac{\varepsilon_0}{h_{X,\Omega}} \qquad (10)$$

for some fixed base scale $\varepsilon_0$, and study the approximation error based on the RBF interpolant

$$\mathcal{P}f(x) = \sum_{j=1}^{N} c_j\, \varphi_h(\|x - x_j\|),$$

where
$$\varphi_h = \varphi(\varepsilon_h \cdot).$$
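A minimal sketch of this stationary scaling (our own illustration; the base scale $\varepsilon_0 = 2$ and the test function are arbitrary choices): the product $\varepsilon_h \cdot h$ stays fixed at $\varepsilon_0$, and the resulting interpolant still matches the data exactly even though its basis functions sharpen as $h$ shrinks:

```python
import math

eps0 = 2.0                        # fixed base shape parameter

# (10): eps_h * h is constant, so the basis shape relative to the
# data spacing never changes as the data become denser
for n in (5, 9, 17):
    h = 1.0 / (n - 1)
    assert abs((eps0 / h) * h - eps0) < 1e-12

# build the stationary Gaussian interpolant for n = 9
n = 9
h = 1.0 / (n - 1)
eps_h = eps0 / h                  # = 16: a sharply peaked basis
f = lambda x: math.sin(2 * math.pi * x)
xs = [i * h for i in range(n)]
A = [[math.exp(-(eps_h * (xi - xj)) ** 2) for xj in xs] for xi in xs]
b = [f(x) for x in xs]

# A is close to the identity (off-diagonal entries <= e^{-4}), so the
# fixed-point iteration c <- b - (A - I) c converges quickly
c = b[:]
for _ in range(60):
    c = [b[i] - sum(A[i][j] * c[j] for j in range(n) if j != i)
         for i in range(n)]

s = lambda y: sum(cj * math.exp(-(eps_h * (y - xj)) ** 2)
                  for cj, xj in zip(c, xs))
node_residual = max(abs(s(x) - f(x)) for x in xs)
midpoint_error = max(abs(s(x + h / 2) - f(x + h / 2)) for x in xs[:-1])
# the data are matched, but the error between the nodes stays large
```

The well-conditioned, nearly diagonal matrix is exactly the computational benefit this scaling aims for; the price, as the examples below show, is a loss of convergence.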


Example
A rather disappointing fact is that Gaussians do not provide any positive approximation order in this stationary setting, i.e., the approximation process is saturated.
This was studied by [Buhmann (1989)] on infinite lattices.
For quasi-interpolation the approximate approximation approach of Maz'ya shows that it is possible to choose $\varepsilon_0$ in such a way that the level at which the saturation occurs can be controlled (see, e.g., [Maz'ya and Schmidt (1996), Maz'ya and Schmidt (2007)]).
Therefore, Gaussians may very well be used for stationary interpolation provided an appropriate initial shape parameter is chosen.

Remark
We will illustrate this behavior in the next chapter.
The same kind of argument also applies to the Laguerre-Gaussians of
Chapter 4.

Example
Basis functions with compact support such as the Wendland functions also do not provide any positive approximation order in the stationary case.
This can be seen by looking at the power function for the scaled basic function $\varphi_h = \varphi(\varepsilon_h \cdot)$, which is of the form

$$P_{\varphi_h, X}(x) = P_{\varphi, X_h}(\varepsilon_h x),$$

where $X_h = \{\varepsilon_h x_1, \ldots, \varepsilon_h x_N\}$ denotes the scaled data set.

Moreover, the fill distances of the sets $X_h$ and $X$ satisfy

$$h_{X_h, \Omega_h} = \varepsilon_h\, h_{X,\Omega}.$$


Example (cont.)
Therefore, the power function (which can be bounded in terms of the fill distance) satisfies

$$P_{\varphi_h, X}(x) \le C\, \left(\varepsilon_h\, h_{X,\Omega}\right)^{\beta}$$

for some $\beta > 0$.

This, however, does not go to zero if $\varepsilon_h$ is chosen as in (10), i.e., $\varepsilon_h = \frac{\varepsilon_0}{h_{X,\Omega}}$.

Remark
If, on the other hand, we work in the approximate approximation regime, then we can obtain good convergence in many cases (see the next chapter for some numerical experiments).


Example
Stationary interpolation with (inverse) multiquadrics, radial powers and thin plate splines presents no difficulties.

In fact, [Schaback (1995b)] shows that the native space error bound for thin plate splines and radial powers is invariant under a stationary scaling.

Therefore, the non-stationary bound of the theorem from [Narcowich et al. (2005)] (see above) applies in the stationary case also.

The advantage of scaling thin plate splines or radial powers comes from the added stability one can gain by preventing the separation distance from becoming too small (see Chapter 16 and the work of Iske on local polyharmonic spline approximation, e.g., [Iske (2004)]).


Example (cont.)
Yoon provides error estimates for stationary approximation of rough functions (i.e., functions that are not in the native space of the basic function) by so-called shifted surface splines.

Shifted surface splines are of the form

$$\Phi(x) = \begin{cases} (-1)^{\lceil \beta - s/2 \rceil}\, (1 + \|x\|^2)^{\beta - s/2}, & s \text{ odd,} \\ (-1)^{\beta - s/2 + 1}\, (1 + \|x\|^2)^{\beta - s/2} \log(1 + \|x\|^2)^{1/2}, & s \text{ even,} \end{cases}$$

where $s/2 < \beta \in \mathbb{N}$.

Remark
These functions include all of the (inverse) multiquadrics, radial powers and thin plate splines.


Example (cont.)
Yoon has the following theorem (see [Yoon (2003)] for the $L_p$ case, and [Yoon (2001)] for $L_\infty$ bounds only).

Theorem
Let $\Phi$ be a shifted surface spline with shape parameter inversely proportional to the fill distance $h_{X,\Omega}$. Then there exists a positive constant $C$ (independent of $X$) such that for every $f$ in the Sobolev space $W_2^{2\beta}(\Omega) \cap W_\infty^{2\beta}(\Omega)$ we have

$$\|f - \mathcal{P}f\|_{L_p(\Omega)} \le C\, h^{\gamma_p}\, |f|_{W_2^{2\beta}(\mathbb{R}^s)}, \qquad 1 \le p \le \infty,$$

with
$$\gamma_p = \min\left\{2\beta,\; 2\beta - \frac{s}{2} + \frac{s}{p}\right\}.$$

Furthermore, if $f \in W_2^{2\beta+\mu}(\Omega) \cap W_\infty^{2\beta+\mu}(\Omega)$ with $\max\{0,\, s/2 - s/p\} < \mu$, then

$$\|f - \mathcal{P}f\|_{L_p(\Omega)} = o(h^{\gamma_p + \mu}).$$

Example (cont.)
Yoon's estimates address the question:
How well do the infinitely smooth (inverse) multiquadrics approximate functions that are less smooth than those in their native space?

For example, Yoon's theorem states that $L_2$-approximation to functions in $W_2^{2\beta}(\Omega)$, $\Omega \subseteq \mathbb{R}^s$, by multiquadrics $\Phi(x) = \sqrt{1 + \|x\|^2}$ is of the order $\mathcal{O}(h^{2\beta})$.

However, we emphasize once more that this refers to stationary approximation of rough functions, i.e., $\Phi$ is scaled inversely proportional to the fill distance and $f$ need not lie in the native space of $\Phi$, whereas the spectral order given in (2) corresponds to approximation of functions in the native space in the non-stationary case with fixed $\varepsilon$.

Remark
For thin plate splines and radial powers the approximation orders in Yoon's theorem are equivalent to those of the theorem from [Narcowich et al. (2005)] and the results of Brownlee and Light mentioned above.
This is to be expected due to the invariance of these basic functions with respect to scaling.
The second part of Yoon's result is a step toward exact approximation orders, as is the work of [Maiorov (2005)] and [Bejancu (1999)] mentioned above.



Convergence With Respect to the Shape Parameter

None of the error bounds discussed thus far have taken into
account the possibility of varying the shape parameter for a fixed
data set X .
However, in the literature the infinitely smooth basic functions
such as the Gaussians and (inverse) multiquadrics are usually
formulated including the shape parameter (or another parameter
equivalent to it) and one may wonder how a change in this shape
parameter affects the convergence properties of the RBF
interpolant.
In fact, quite a bit of work has been spent on the quest for the
optimal shape parameter (see, e.g.,
[Carlson and Foley (1991), Fasshauer and Zhang (2007),
Foley (1994), Hagan and Kansa (1994), Luh (2010a),
Kansa and Carlson (1992), Rippa (1999), Tarwater (1985),
Wertz et al. (2006)]).


Convergence of the infinitely smooth Gaussians and (inverse) multiquadrics with respect to the shape parameter $\varepsilon$ was studied early on in [Madych (1991)].

Madych showed that for these basic functions there exists a positive constant $\lambda < 1$ such that

$$|f(x) - \mathcal{P}f(x)| \le C\, \lambda^{1/(\varepsilon\, h_{X,\Omega})} \qquad (11)$$

provided $f$ is in the native space of $\Phi$.

Remark
This estimate shows that taking either the shape parameter $\varepsilon$ or the fill distance $h_{X,\Omega}$ to zero results in exponential convergence.
However, numerical experiments as well as a more careful theoretical analysis (see [Luh (2010b)]) show that the constant $C$ is not independent of $\varepsilon$, so that there may be a minimal error for a positive value of $\varepsilon$.
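The $\varepsilon$-dependence can be seen on a fixed data set. The following sketch (our own illustration; the $\varepsilon$ values, the 9-point set, and the test function are arbitrary choices) shrinks $\varepsilon$ on fixed points and watches the error fall, a trend that continues only until ill-conditioning takes over for much smaller $\varepsilon$ than shown here:

```python
import math

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def interp_error(eps, n=9):
    """Max error of Gaussian interpolation with shape parameter eps
    on the fixed set of n equally spaced points in [0, 1]."""
    f = lambda x: math.sin(2 * math.pi * x)
    xs = [i / (n - 1) for i in range(n)]
    c = solve([[math.exp(-(eps * (xi - xj)) ** 2) for xj in xs]
               for xi in xs], [f(x) for x in xs])
    return max(abs(sum(cj * math.exp(-(eps * (y - xj)) ** 2)
                       for cj, xj in zip(c, xs)) - f(y))
               for y in [k / 500 for k in range(501)])

errs = {eps: interp_error(eps) for eps in (8.0, 4.0, 2.0)}
# on this fixed X the error shrinks as eps decreases toward the flat limit
```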
Dimension-Independent Error Bounds

In this section we mention some new results (see [Fasshauer et al. (2010)] for many more details) on the rates of convergence of Gaussian kernel approximation.
We address weighted $L_2$ approximation when the data is specified either by function values of an unknown function $f$ (from the native space of the kernel) or with the help of arbitrary linear functionals.
The convergence results pay special attention to the dependence of the estimates on the space dimension $s$.
We will see that the use of anisotropic Gaussian kernels instead of isotropic ones provides improved convergence rates.



Dimension-Independent Error Bounds: The Current Situation

The error bounds in the previous sections were formulated in terms of the fill distance, while the results we mention below are in terms of $N$, the number of data.
Therefore, we restate some of the earlier bounds in terms of $N$ using the fact that for quasi-uniformly distributed data sites we have $h_{X,\Omega} = \mathcal{O}(N^{-1/s})$.
If $f$ has derivatives up to total order $\ell$, then the first error bound for Gaussian kernels $K$ was of the form

$$\|f - \mathcal{P}f\| \le C_{\ell,s}\, N^{-\ell/s}\, \|f\|_{\mathcal{N}_K(\Omega)},$$

where we now emphasize the possible dimension-dependence of the constant $C_{\ell,s}$.
This bound shows that infinitely smooth functions can be approximated with order $\ell = \infty$.
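The relation $h_{X,\Omega} = \mathcal{O}(N^{-1/s})$ is easy to check for uniform grids (a small sketch of our own, using the fact that for an $n \times n$ grid on $[0,1]^2$ the point farthest from the grid is a cell center):

```python
import math

def grid_fill_distance(n):
    """Fill distance of the uniform n-by-n grid on [0, 1]^2:
    half the diagonal of one grid cell."""
    return math.sqrt(2.0) / (2.0 * (n - 1))

for n in (5, 9, 17):
    N = n * n                     # number of data sites
    h = grid_fill_distance(n)
    # h * N^{1/2} stays bounded: h = O(N^{-1/2}) in dimension s = 2
    assert 0.5 < h * math.sqrt(N) < 1.5

# doubling the resolution halves the fill distance
ratio = grid_fill_distance(9) / grid_fill_distance(5)
```

For quasi-uniform (not gridded) points the same scaling holds up to constants, which is all the restated bounds require.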


With extra effort we obtained the spectral estimate

$$\|f - \mathcal{P}f\| \le e^{-c_s\, N^{1/s} \log N}\, \|f\|_{\mathcal{N}_K(\Omega)}.$$

Remark
Both of these bounds show that the rate of convergence deteriorates as $s$ increases.
Moreover, the dependence of the constants on $s$ is not clear.
Therefore, these kinds of error bounds (and in fact almost all error bounds in the RBF literature) suffer from the curse of dimensionality.
We will now present some results from [Fasshauer et al. (2010)] on dimension-independent convergence rates for Gaussian kernel approximation.



Dimension-Independent Error Bounds: New Results on the (Minimal) Worst-Case Weighted $L_2$ Error

We make several assumptions in order to be able to obtain dimension-independent error bounds.
We define the worst-case weighted $L_{2,\rho}$ error as

$$\mathrm{err}^{\mathrm{wc}}_{2,\rho} = \sup_{\|f\|_{\mathcal{N}_K(\mathbb{R}^s)} \le 1} \|f - \mathcal{P}f\|_{2,\rho},$$

where $\mathcal{P}f$ is our kernel (minimum norm) approximation calculated in the usual way.
Therefore

$$\|f - \mathcal{P}f\|_{2,\rho} \le \mathrm{err}^{\mathrm{wc}}_{2,\rho}\, \|f\|_{\mathcal{N}_K(\mathbb{R}^s)} \qquad \text{for all } f \in \mathcal{N}_K(\mathbb{R}^s).$$

The $N$th minimal worst-case error $\mathrm{err}^{\mathrm{wc}}_{2,\rho}(N)$ refers to the worst-case error that can be achieved with an optimal design, i.e., data generated by $N$ optimally chosen linear functionals.


For function approximation this means that the data sites have to
be chosen in an optimal way.
The results in [Fasshauer et al. (2010)] are non-constructive, i.e.,
no such optimal design is specified.
However, a Smolyak or sparse grid algorithm is a natural candidate
for such a design.
If we are allowed to choose arbitrary linear functionals, then the
optimal choice for weighted L2 approximation is known.
In either case we will need an eigenfunction expansion of the
Gaussian kernel.


Following [Rasmussen and Williams (2006)], the eigenvalues of the univariate Gaussian kernel $K(x,z) = e^{-\varepsilon^2(x-z)^2}$ are

$$\lambda_k = \frac{\varepsilon^{2(k-1)}}{\left(\frac{1}{2}\left(1 + \sqrt{1 + 4\varepsilon^2}\right) + \varepsilon^2\right)^{k - 1/2}}, \qquad k = 1, 2, \ldots,$$

and the eigenfunctions are

$$\varphi_k(x) = \sqrt{\frac{(1 + 4\varepsilon^2)^{1/4}}{2^{k-1}(k-1)!}}\; \exp\!\left(-\frac{\varepsilon^2 x^2}{\frac{1}{2}\left(1 + \sqrt{1 + 4\varepsilon^2}\right)}\right) H_{k-1}\!\left((1 + 4\varepsilon^2)^{1/4}\, x\right),$$

where the $H_k$ are the classical Hermite polynomials of degree $k$, i.e.,

$$H_k(x) = (-1)^k\, e^{x^2}\, \frac{d^k}{dx^k}\, e^{-x^2} \qquad \text{for all } x \in \mathbb{R},\ k = 0, 1, 2, \ldots,$$

so that

$$\int_{\mathbb{R}} H_k^2(x)\, e^{-x^2}\, dx = \sqrt{\pi}\, 2^k\, k! \qquad \text{for } k = 0, 1, 2, \ldots.$$
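The Hermite normalization just stated can be verified numerically (a self-contained sketch of our own; $H_k$ is generated by the standard three-term recurrence $H_{j+1}(x) = 2x H_j(x) - 2j H_{j-1}(x)$, and the integral is approximated by the trapezoidal rule, which is extremely accurate for such rapidly decaying integrands):

```python
import math

def hermite(k, x):
    """Physicists' Hermite polynomial H_k(x) via the recurrence
    H_{j+1}(x) = 2x H_j(x) - 2j H_{j-1}(x)."""
    h_prev, h_cur = 1.0, 2.0 * x
    if k == 0:
        return h_prev
    for j in range(1, k):
        h_prev, h_cur = h_cur, 2.0 * x * h_cur - 2.0 * j * h_prev
    return h_cur

def weighted_norm_sq(k, a=10.0, m=4000):
    """Trapezoidal approximation of the integral of H_k(x)^2 e^{-x^2}
    over [-a, a]; the tails beyond |x| = a are negligible."""
    dx = 2.0 * a / m
    total = 0.0
    for i in range(m + 1):
        x = -a + i * dx
        w = 0.5 if i in (0, m) else 1.0
        total += w * hermite(k, x) ** 2 * math.exp(-x * x)
    return total * dx

for k in range(5):
    exact = math.sqrt(math.pi) * 2 ** k * math.factorial(k)
    assert abs(weighted_norm_sq(k) - exact) < 1e-6 * exact
```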


Remark
The multivariate (and anisotropic) case can be handled using products
of univariate eigenvalues and eigenfunctions.
For details see [Fasshauer et al. (2010)] or [Fasshauer (2010)].


For weighted $L_2$ approximation we use generalized Fourier coefficients, i.e., the optimal linear functionals are $L_k = \langle \cdot, \varphi_k \rangle_{\mathcal{N}_K(\mathbb{R}^s)}$, where the $\varphi_k$ are the eigenfunctions of $K$.
We obtain the truncated generalized Fourier series approximation

$$\mathcal{P}f(x) = \sum_{j=1}^{N} \lambda_j\, \langle f, \varphi_j \rangle_{\mathcal{N}_K(\mathbb{R}^s)}\, \varphi_j(x) \qquad \text{for all } f \in \mathcal{N}_K(\mathbb{R}^s),$$

where

$$K(x,y) = \sum_{k=1}^{\infty} \lambda_k\, \varphi_k(x)\, \varphi_k(y), \qquad \int K(x,y)\, \varphi_k(y)\, \rho(y)\, dy = \lambda_k\, \varphi_k(x).$$

It is then known [Novak and Woźniakowski (2008)] that

$$\mathrm{err}^{\mathrm{wc}}_{2,\rho}(N) = \sqrt{\lambda_{N+1}},$$

the square root of the $(N+1)$st largest eigenvalue, which is easy to identify in the univariate case, but takes some care to specify in the multivariate setting.

In [Fasshauer et al. (2010)] it is then proved that in the isotropic case, i.e., with a truly radial Gaussian kernel of the form

$$K(x,y) = e^{-\varepsilon^2\|x - y\|^2},$$

one can approximate
function data with an $N$th minimal error of the order $\mathcal{O}(N^{-1/4+\delta})$, and
Fourier data (i.e., arbitrary linear functional data) with an $N$th minimal error of the order $\mathcal{O}(N^{-1/2+\delta})$.
Here the constants in the $\mathcal{O}$-notation do not depend on the dimension $s$, and $\delta > 0$ is arbitrarily small.


With anisotropic kernels, i.e.,

    K(x,z) = e^{-\varepsilon_1^2 (x_1-z_1)^2 - \cdots - \varepsilon_s^2 (x_s-z_s)^2},

one can do much better.

If the shape parameters decay like \varepsilon_\ell = \ell^{-\alpha}, then one can
approximate
function data with an Nth minimal error of the order
O(N^{-\max(2\alpha/(2+\alpha), 1/4)+\delta}), and
Fourier data (i.e., arbitrary linear functional data) with an Nth
minimal error of the order O(N^{-\max(\alpha, 1/2)+\delta}).
Again, the constants in the O-notation do not depend on the
dimension s.
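A small sketch of such an anisotropic Gaussian kernel with algebraically decaying shape parameters (the helper name and the test points are our own, purely illustrative choices):

```python
import numpy as np

def aniso_gauss(x, z, alpha):
    """Anisotropic Gaussian kernel with shape parameters eps_l = l**(-alpha)."""
    ell = np.arange(1, x.size + 1)
    eps = ell ** (-alpha)
    return np.exp(-np.sum((eps * (x - z)) ** 2))

rng = np.random.default_rng(0)
x = rng.standard_normal(50)       # a point in s = 50 dimensions
z = np.zeros(50)

# faster decay of the eps_l flattens the kernel in the higher coordinates
k_fast, k_slow = aniso_gauss(x, z, 2.0), aniso_gauss(x, z, 0.5)
```

With \alpha = 2 the high-dimensional coordinates barely contribute, so k_fast is closer to 1 than k_slow; it is this flattening that makes the approximation problem easier in high dimensions.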


Remark
Even if we do not have an eigenfunction expansion of a specific
kernel available, the work of [Fasshauer et al. (2010)] shows that
for any radial (isotropic) kernel one has a dimension-independent
Monte-Carlo type convergence rate of O(N^{-1/2+\delta}) provided
arbitrary linear functionals are allowed to generate the data.
For translation-invariant (stationary) kernels the situation is similar.
However, the constant in the O-notation depends in any case
on the sum of the eigenvalues of the kernel. For the radial case
this sum is simply \varphi(0) (independent of s), while for general
translation-invariant kernels it is \widetilde{K}(0), which may depend on s.


Remark
These results show that even though RBF methods are often
advertised as being "dimension-blind", their rates of
convergence are only excellent (i.e., spectral for infinitely smooth
kernels) if the dimension s is small.
For large dimensions the constants in the O-notation take over.
If one, however, permits an anisotropic scaling of the kernel (i.e.,
elliptical symmetry instead of strict radial symmetry) and if those
scale parameters decay rapidly with increasing dimension, then
excellent convergence rates for approximation of smooth functions
can be maintained independent of s.



Polynomial Interpolation as the Limit of RBF Interpolation: Infinitely Smooth RBFs

Recently, a number of authors (see, e.g.,
[Driscoll and Fornberg (2002), Fornberg and Flyer (2005),
Fornberg and Wright (2004), Larsson and Fornberg (2005),
Lee et al. (2007), Schaback (2005), Schaback (2008)]) have studied
the limiting case \varepsilon \to 0 of scaled radial basis function interpolation
with infinitely smooth basic functions such as Gaussians and
generalized (inverse) multiquadrics.

It turns out that there is an interesting connection to polynomial
interpolation.


In [Driscoll and Fornberg (2002)] univariate (s = 1) interpolation with
\varepsilon-scaled infinitely smooth radial basic functions is studied.

Driscoll and Fornberg show that the RBF interpolant

    Pf(x) = \sum_{j=1}^{N} c_j \varphi(\varepsilon \|x - x_j\|),   x \in [a,b] \subset R,

to function values at N distinct data sites tends to the Lagrange
interpolating polynomial of f as \varepsilon \to 0.
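This flat-limit behavior is easy to observe numerically. The sketch below is our own construction (not from the paper): it compares a Gaussian RBF interpolant on five points with the degree-four interpolating polynomial. Since the direct solve becomes ill-conditioned as the kernel flattens, we stop at a moderately small shape parameter.

```python
import numpy as np

def rbf_interp(xc, fc, eps, xe):
    # Gaussian RBF interpolant  sum_j c_j exp(-(eps(x - x_j))^2)
    A = np.exp(-(eps * (xc[:, None] - xc[None, :])) ** 2)
    c = np.linalg.solve(A, fc)
    return np.exp(-(eps * (xe[:, None] - xc[None, :])) ** 2) @ c

xc = np.linspace(0.0, 1.0, 5)             # N = 5 distinct data sites
fc = np.sin(2.0 * np.pi * xc)             # sampled test function
xe = np.linspace(0.0, 1.0, 101)           # evaluation grid

# the degree N-1 = 4 Lagrange interpolating polynomial
pe = np.polyval(np.polyfit(xc, fc, 4), xe)

# the gap to the polynomial interpolant shrinks as eps decreases
gap = [np.max(np.abs(rbf_interp(xc, fc, e, xe) - pe)) for e in (2.0, 1.0, 0.1)]
```

The interpolation conditions hold for every value of \varepsilon; only the behavior between the data sites changes, approaching the polynomial as \varepsilon decreases.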


The multivariate case is more complicated.

However, the limiting RBF interpolant is given by a low-degree
multivariate polynomial (see [Larsson and Fornberg (2005),
Lee et al. (2007), Schaback (2005), Schaback (2008)]).

For example, if the data sites are located such that they guarantee a
unique polynomial interpolant, then the limiting RBF interpolant is
given by this polynomial.

If polynomial interpolation is not unique, then the RBF limit is still a
polynomial whose form depends on the choice of basic function.


Theorem (Driscoll, Fornberg, Larsson, Schaback, Yoon [2002-08])
Assume the strictly positive definite radial kernel \varphi has an expansion

    \varphi(r) = \sum_{j=0}^{\infty} a_j r^{2j}

into even powers of r (i.e., \varphi is infinitely smooth), and that the data X
are unisolvent with respect to any set of N linearly independent
polynomials of degree at most m. Then

    \lim_{\varepsilon \to 0} Pf(x) = p_{m,f}(x),   x \in R^s,

where p_{m,f} is determined as follows:
If interpolation with polynomials of degree at most m is unique,
then p_{m,f} is that unique polynomial interpolant.
If interpolation with polynomials of degree at most m is not unique,
then p_{m,f} is a polynomial interpolant whose form depends on the
choice of RBF.

Remark
These statements require the RBFs to satisfy a condition on
certain coefficient matrices A_{p,J}. This condition was left unproven
in [Larsson and Fornberg (2005)] and verified in
[Lee et al. (2007)].

In [Larsson and Fornberg (2005)] the authors also provide an
explanation for the error behavior for small values of the shape
parameter \varepsilon, and for the existence of an optimal (positive) value of
\varepsilon giving rise to a global minimum of the error function. Recent
theoretical work in [Luh (2010a), Luh (2010b)] also goes in this
direction.

For the special case of scaled Gaussians, [Schaback (2005)]
shows that as \varepsilon \to 0 the RBF interpolant converges to the de Boor
and Ron "least" polynomial interpolant (see [de Boor (1992),
de Boor and Ron (1990), de Boor and Ron (1992)] and also
[de Boor (2006)]).

Remark
In [Fornberg and Wright (2004)] the authors describe a so-called
Contour-Padé algorithm that makes it possible (for data sets of
relatively modest size) to compute the RBF interpolant for all
values of the shape parameter \varepsilon including the limiting case \varepsilon \to 0.

We present some numerical results obtained with Grady Wright's
MATLAB toolbox in Chapter 17.

Other recent work obtaining RBF interpolants close to the
polynomial limit, i.e., for small \varepsilon, is
[Fasshauer and McCourt (2010), Fornberg et al. (2009)].

We will later exploit the connection between RBF and polynomial
interpolants to design numerical solvers for partial differential
equations.



Polynomial Interpolation as the Limit of RBF Interpolation: RBFs with Finite Smoothness

To our knowledge, the flat limit \varepsilon \to 0 of RBFs with finite smoothness
was not studied until the recent paper [Song et al. (2009)], in which
interpolation on R^d was investigated.

Before we explain the results obtained in [Song et al. (2009)], we
look at a few finitely smooth radial kernels as full-space Green's
functions.

The connection between (conditionally) positive definite kernels
and Green's functions is currently being studied by Qi Ye (see
[Fasshauer and Ye (2010a), Fasshauer and Ye (2010b),
Ye (2010)]).


Example (Radial kernels with finite smoothness)
The univariate C^0 Matérn kernel K(x,z) = e^{-\varepsilon|x-z|} is the
full-space Green's function for the differential operator

    L = -\frac{d^2}{dx^2} + \varepsilon^2 I.

On the other hand, it is well known that univariate C^0 piecewise
linear splines may be expressed in terms of kernels of the form
K(x,z) = |x-z|. The corresponding differential operator is

    L = -\frac{d^2}{dx^2}.

Note that the differential operator for the Matérn kernel
converges to that of the piecewise linear splines as \varepsilon \to 0.
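This Green's function claim can be sanity-checked on the Fourier side: the operator acts there as multiplication by \omega^2 + \varepsilon^2, so the transform of the normalized kernel e^{-\varepsilon|x|}/(2\varepsilon) should be 1/(\omega^2 + \varepsilon^2). A quick numerical check (our own addition; the grid sizes are arbitrary choices):

```python
import numpy as np

eps = 2.0
N, L = 2**14, 200.0
dx = L / N
x = (np.arange(N) - N // 2) * dx          # grid centered at 0
g = np.exp(-eps * np.abs(x)) / (2 * eps)  # normalized Green's function

# approximate the continuous Fourier transform  G(w) = int g(x) e^{-iwx} dx
ghat = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(g))).real * dx
w = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(N, d=dx))

# compare against the symbol 1/(w^2 + eps^2) of the inverse of L
err = np.max(np.abs(ghat - 1.0 / (w**2 + eps**2)))
```

The agreement is limited only by the grid truncation and aliasing, both of which are tiny here.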


Example (cont.)
The univariate C^2 tension spline kernel [Renka (1987)]
K(x,z) = e^{-\varepsilon|x-z|} + \varepsilon|x-z| is the Green's kernel of

    L = \frac{d^4}{dx^4} - \varepsilon^2 \frac{d^2}{dx^2},

while the univariate C^2 cubic spline kernel K(x,z) = |x-z|^3
corresponds to

    L = \frac{d^4}{dx^4}.

Again, the differential operator for the tension spline converges
to that of the cubic spline as \varepsilon \to 0.


Example (cont.)
In [Berlinet and Thomas-Agnan (2004)] we find a so-called
univariate Sobolev kernel of the form
K(x,z) = e^{-\varepsilon|x-z|/\sqrt{2}} \sin(\varepsilon|x-z|/\sqrt{2} + \pi/4), which is associated with

    L = \frac{d^4}{dx^4} + \varepsilon^4 I.

The operator for this kernel also converges to that of the cubic
spline kernel, but the effect of the scale parameter \varepsilon is different than
for the tension spline.

Remark
Note that this Sobolev kernel is different from the Sobolev splines
(Matérn functions) discussed earlier; the terminology in the
literature is unfortunately not consistent.


Example (cont.)
The general multivariate Matérn kernels are of the form

    K(x,y) = K_{m-s/2}(\varepsilon\|x-y\|) (\varepsilon\|x-y\|)^{m-s/2},   m > s/2,

and can be obtained as Green's kernels of (see [Ye (2010)])

    L = (\varepsilon^2 I - \Delta)^m,   m > s/2.

We contrast this with the polyharmonic spline kernels

    K(x,y) = \|x-y\|^{2m-s},               s odd,
    K(x,y) = \|x-y\|^{2m-s} \log\|x-y\|,   s even,

and

    L = (-1)^m \Delta^m,   m > s/2.
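For s = 2 and m = 2 the polyharmonic kernel above is the thin-plate spline r^2 log r, which is conditionally positive definite of order 2; its interpolant is therefore augmented with linear polynomials and, as a consequence, reproduces linear functions exactly. A minimal sketch (the point sets and the test function are our own choices):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((20, 2))                                  # scattered centers
f = lambda P: 1.0 + 2.0 * P[:, 0] - 3.0 * P[:, 1]        # a linear test function

def tps(r):
    # thin-plate spline kernel r^2 log r, with the r -> 0 limit filled in
    with np.errstate(divide="ignore", invalid="ignore"):
        v = r**2 * np.log(r)
    return np.where(r > 0, v, 0.0)

# interpolation system augmented with linear polynomials (CPD order 2)
r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
P = np.hstack([np.ones((20, 1)), X])
A = np.block([[tps(r), P], [P.T, np.zeros((3, 3))]])
coef = np.linalg.solve(A, np.concatenate([f(X), np.zeros(3)]))
c, d = coef[:20], coef[20:]

# evaluate at new points: the linear function is reproduced exactly
Xe = rng.random((50, 2))
re = np.linalg.norm(Xe[:, None, :] - X[None, :, :], axis=-1)
se = tps(re) @ c + np.hstack([np.ones((50, 1)), Xe]) @ d
```

Because the data come from a linear function, the kernel coefficients c vanish and the polynomial part alone carries the interpolant.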


All examples above show that the differential operators associated
with finitely smooth RBF kernels converge to those of a
piecewise polynomial or polyharmonic spline kernel as \varepsilon \to 0.

We therefore ask if RBF interpolants based on finitely smooth
kernels converge to (polyharmonic) spline interpolants for \varepsilon \to 0,
as is the case for infinitely smooth radial kernels and polynomials.

As mentioned above, infinitely smooth radial kernels can be
expanded into an infinite series of even powers of r.

Finitely smooth radial kernels can also be expanded into an
infinite series of powers of r.
In this case there always exists some minimal odd power of r with
nonzero coefficient, indicating the (finite) smoothness of the kernel.


Example
For the univariate C^0, C^2 and C^4 Matérn kernels, respectively, we have

    \varphi(r) = e^{-\varepsilon r}
             = 1 - \varepsilon r + \frac{1}{2}(\varepsilon r)^2 - \frac{1}{6}(\varepsilon r)^3 + \cdots,

    \varphi(r) = (1 + \varepsilon r) e^{-\varepsilon r}
             = 1 - \frac{1}{2}(\varepsilon r)^2 + \frac{1}{3}(\varepsilon r)^3 - \frac{1}{8}(\varepsilon r)^4 + \cdots,

    \varphi(r) = (3 + 3\varepsilon r + (\varepsilon r)^2) e^{-\varepsilon r}
             = 3 - \frac{1}{2}(\varepsilon r)^2 + \frac{1}{8}(\varepsilon r)^4 - \frac{1}{15}(\varepsilon r)^5 + \frac{1}{48}(\varepsilon r)^6 + \cdots.
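These expansions are easy to sanity-check numerically by comparing each kernel with its truncated series for a small argument \varepsilon r (a quick check we add here, not part of the slides):

```python
import numpy as np

t = 0.01  # a small value of eps*r

# C^0 Matern: e^{-t}
err0 = abs(np.exp(-t) - (1 - t + t**2/2 - t**3/6))
# C^2 Matern: (1 + t) e^{-t}
err2 = abs((1 + t)*np.exp(-t) - (1 - t**2/2 + t**3/3 - t**4/8))
# C^4 Matern: (3 + 3t + t^2) e^{-t}
err4 = abs((3 + 3*t + t**2)*np.exp(-t) - (3 - t**2/2 + t**4/8 - t**5/15 + t**6/48))
```

Each truncation error is dominated by the first omitted power of t, so it shrinks rapidly as t decreases.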


Theorem ([Song et al. (2009)])
Suppose \varphi is radial and conditionally positive definite of order m \le n
with an expansion of the form

    \varphi(r) = a_0 + a_2 r^2 + \ldots + a_{2n} r^{2n} + a_{2n+1} r^{2n+1} + a_{2n+2} r^{2n+2} + \ldots,

where 2n+1 denotes the smallest odd power of r present in the
expansion (i.e., \varphi is finitely smooth). Also assume that the data X
contain a unisolvent set with respect to the space \pi_{2n}^s of s-variate
polynomials of degree less than 2n. Then

    \lim_{\varepsilon \to 0} Pf(x) = \sum_{j=1}^{N} c_j \|x - x_j\|^{2n+1} + \sum_{k=1}^{M} d_k p_k(x),   x \in R^s,

where \{p_k : k = 1, \ldots, M\} denotes a basis of \pi_n^s.


Remark
The previous theorem does not cover Matérn kernels with odd-order
smoothness. However, all other examples listed above are covered.
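The left panel of the figure can be reproduced in a few lines: as \varepsilon decreases, the interpolant based on the C^0 Matérn kernel approaches the piecewise linear spline through the same data. A sketch with our own choice of data:

```python
import numpy as np

xc = np.linspace(0.0, 5.0, 6)     # data sites on [0, 5], as in the figure
fc = np.sin(xc)                   # our own choice of data values
xe = np.linspace(0.0, 5.0, 201)

def matern0_interp(eps, xe):
    # interpolant based on the C^0 Matern kernel exp(-eps|x - z|)
    A = np.exp(-eps * np.abs(xc[:, None] - xc[None, :]))
    c = np.linalg.solve(A, fc)
    return np.exp(-eps * np.abs(xe[:, None] - xc[None, :])) @ c

pl = np.interp(xe, xc, fc)        # the piecewise linear spline interpolant

# distance to the piecewise linear limit shrinks with eps
gap = [np.max(np.abs(matern0_interp(e, xe) - pl)) for e in (2.0, 0.5, 0.005)]
```

Unlike the Gaussian case, the linear system here stays well conditioned for very small \varepsilon, so the limit can be approached quite closely by a direct solve.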

[Two plots omitted.]

Figure: Convergence of C^0 (left) and C^2 (right) Matérn interpolants
(shown for \varepsilon = 2, 1, 0.1) to the piecewise linear (left) and cubic (right)
spline interpolants of the given data.



Appendix: References

References I

Berlinet, A. and Thomas-Agnan, C. (2004).


Reproducing Kernel Hilbert Spaces in Probability and Statistics.
Kluwer, Dordrecht.
Buhmann, M. D. (2003).
Radial Basis Functions: Theory and Implementations.
Cambridge University Press.
Fasshauer, G. E. (2007).
Meshfree Approximation Methods with M ATLAB.
World Scientific Publishers.
Iske, A. (2004).
Multiresolution Methods in Scattered Data Modelling.
Lecture Notes in Computational Science and Engineering 37, Springer Verlag
(Berlin).


References II

Maz'ya, V. and Schmidt, G. (2007).
Approximate Approximations.
Mathematical Surveys and Monographs, vol. 141, American Mathematical Society
(Providence, RI).
Novak, E. and Woźniakowski, H. (2008).
Tractability of Multivariate Problems, Volume 1: Linear Information.
EMS Tracts in Mathematics, no. 6, European Mathematical Society.
Rasmussen, C. E. and Williams, C. (2006).
Gaussian Processes for Machine Learning.
MIT Press (online version at http://www.gaussianprocess.org/gpml/).
Wendland, H. (2005a).
Scattered Data Approximation.
Cambridge University Press (Cambridge).
Bejancu, A. (1999).
Local accuracy for radial basis function interpolation on finite uniform grids.
J. Approx. Theory 99, pp. 242–257.


References III

de Boor, C. (1992).
On the error in multivariate polynomial interpolation.
Applied Numerical Mathematics 10, pp. 297–305.
de Boor, C. (2006).
On interpolation by radial polynomials.
Adv. in Comput. Math. 24, pp. 143–153.
de Boor, C. and Ron, A. (1990).
On multivariate polynomial interpolation.
Constr. Approx. 6, pp. 287–302.
de Boor, C. and Ron, A. (1992).
The least solution for the polynomial interpolation problem.
Math. Z. 210, pp. 347–378.
Brownlee, R. and Light, W. (2004).
Approximation orders for interpolation by surface splines to rough functions.
IMA J. Numer. Anal. 24, pp. 179–192.


References IV

Buhmann, M. D. (1989).
Multivariate interpolation using radial basis functions.
Ph.D. Dissertation, University of Cambridge.
Buhmann, M. D. and Dyn, N. (1991).
Error estimates for multiquadric interpolation.
in Curves and Surfaces, P.-J. Laurent, A. Le Méhauté, and L. L. Schumaker
(eds.), Academic Press (New York), pp. 51–58.
Carlson, R. E. and Foley, T. A. (1991).
The parameter R^2 in multiquadric interpolation.
Comput. Math. Appl. 21, pp. 29–42.
Driscoll, T. A. and Fornberg, B. (2002).
Interpolation in the limit of increasingly flat radial basis functions.
Comput. Math. Appl. 43, pp. 413–422.


References V

Duchon, J. (1978).
Sur l'erreur d'interpolation des fonctions de plusieurs variables par les
D^m-splines.
Rev. Française Automat. Informat. Rech. Opér., Anal. Numér. 12, pp. 325–334.
Fasshauer, G. E. (2010).
Green's functions: taking another look at kernel approximation, radial basis
functions and splines.
Submitted.
Fasshauer, G. E., Hickernell, F. J. and Woźniakowski, H. (2010).
Rate of convergence and tractability of the radial function approximation problem.
Submitted.
Fasshauer, G. E. and McCourt, M. J. (2010).
Stable evaluation of Gaussian RBF interpolants.
In preparation.


References VI

Fasshauer, G. E. and Ye, Q. (2010).
Reproducing kernels of generalized Sobolev spaces via a Green function
approach with distributional operators.
Submitted.
Fasshauer, G. E. and Ye, Q. (2010).
Reproducing kernels of Sobolev spaces in bounded domains via a Green's
kernel approach.
In preparation.
Fasshauer, G. E. and Zhang, J. G. (2007).
On choosing optimal shape parameters for RBF approximation.
Numer. Algorithms 45, pp. 345–368.
Foley, T. A. (1994).
Near optimal parameter selection for multiquadric interpolation.
J. Appl. Sc. Comp. 1, pp. 54–69.


References VII

Fornberg, B. and Flyer, N. (2005).
Accuracy of radial basis function interpolation and derivative approximations on
1-D infinite grids.
Adv. Comput. Math. 23 (1-2), pp. 5–20.
Fornberg, B., Larsson, E. and Flyer, N. (2009).
Stable computations with Gaussian radial basis functions in 2-D.
Technical Report 2009-020, Uppsala University, Department of Information
Technology.
Fornberg, B. and Wright, G. (2004).
Stable computation of multiquadric interpolants for all values of the shape
parameter.
Comput. Math. Appl. 47, pp. 497–523.
Hagan, R. E. and Kansa, E. J. (1994).
Studies of the R parameter in the multiquadric function applied to ground water
pumping.
J. Appl. Sci. Comput. 1, pp. 266–281.


References VIII

Johnson, M. J. (2004).
An error analysis for radial basis function interpolation.
Numer. Math. 98 (4), pp. 675–694.
Kansa, E. J. and Carlson, R. E. (1992).
Improved accuracy of multiquadric interpolation using variable shape parameters.
Comput. Math. Appl. 24, pp. 99–120.
Larsson, E. and Fornberg, B. (2005).
Theoretical and computational aspects of multivariate interpolation with
increasingly flat radial basis functions.
Comput. Math. Appl. 49, pp. 103–130.
Lee, Y. J., Yoon, G. J. and Yoon, J. (2007).
Convergence of increasingly flat radial basis interpolants to polynomial
interpolants.
SIAM J. Math. Anal. 39, pp. 537–553.


References IX

Light, W. A. (1996).
Variational error bounds for radial basis functions.
in Numerical Analysis 1995, D. F. Griffiths and G. A. Watson (eds.), Longman
(Harlow), pp. 94–106.
Light, W. A. and Wayne, H. (1995).
Error estimates for approximation by radial basis functions.
in Approximation Theory, Wavelets and Applications, S. P. Singh (ed.), Kluwer
(Dordrecht), pp. 215–246.
Light, W. A. and Wayne, H. (1998).
On power functions and error estimates for radial basis function interpolation.
J. Approx. Theory 92, pp. 245–266.
Luh, L.-T. (2009).
An improved error bound for multiquadric and inverse multiquadric interpolations.
Int. J. Numer. Meth. Applic. 1/2, pp. 101–120.


References X

Madych, W. R. (1991).
Error estimates for interpolation by generalized splines.
in Curves and Surfaces, P.-J. Laurent, A. Le Méhauté, and L. L. Schumaker
(eds.), Academic Press (New York), pp. 297–306.
Madych, W. R. (1992).
Miscellaneous error bounds for multiquadric and related interpolators.
Comput. Math. Appl. 24, pp. 121–138.
Madych, W. R. and Nelson, S. A. (1988).
Multivariate interpolation and conditionally positive definite functions.
Approx. Theory Appl. 4, pp. 77–89.
Madych, W. R. and Nelson, S. A. (1992).
Bounds on multivariate polynomials and exponential error estimates for
multiquadric interpolation.
J. Approx. Theory 70, pp. 94–114.


References XI

Maiorov, V. (2005).
On lower bounds in radial basis approximation.
Adv. in Comp. Math. 22, pp. 103–113.
Maz'ya, V. and Schmidt, G. (1996).
On approximate approximations using Gaussian kernels.
IMA J. Numer. Anal. 16, pp. 13–29.
Narcowich, F. J. and Ward, J. D. (2004).
Scattered-data interpolation on R^n: error estimates for radial basis and
band-limited functions.
SIAM J. Math. Anal. 36 (1), pp. 284–300.
Narcowich, F. J., Ward, J. D. and Wendland, H. (2003).
Refined error estimates for radial basis function interpolation.
Constr. Approx. 19 (4), pp. 541–564.


References XII

Narcowich, F. J., Ward, J. D. and Wendland, H. (2005).
Sobolev bounds on functions with scattered zeros, with applications to radial
basis function surface fitting.
Math. Comp. 74, pp. 743–763.
Renka, R. J. (1987).
Interpolatory tension splines with automatic selection of tension factors.
SIAM J. Sci. Stat. Comput. 8, pp. 393–415.
Rieger, C. and Zwicknagl, B. (2010).
Sampling inequalities for infinitely smooth functions, with applications to
interpolation and machine learning.
Adv. in Comput. Math. 32 (1), pp. 103–129.
Rippa, S. (1999).
An algorithm for selecting a good value for the parameter c in radial basis
function interpolation.
Adv. in Comput. Math. 11, pp. 193–210.


References XIII

Schaback, R. (1995a).
Error estimates and condition numbers for radial basis function interpolation.
Adv. in Comput. Math. 3, pp. 251–264.
Schaback, R. (1995b).
Multivariate interpolation and approximation by translates of a basis function.
in Approximation Theory VIII, Vol. 1: Approximation and Interpolation, C. Chui
and L. Schumaker (eds.), World Scientific Publishing (Singapore), pp. 491–514.
Schaback, R. (1996).
Approximation by radial basis functions with finitely many centers.
Constr. Approx. 12, pp. 331–340.
Schaback, R. (1999).
Improved error bounds for scattered data interpolation by radial basis functions.
Math. Comp. 68 (225), pp. 201–216.
Schaback, R. (2005).
Multivariate interpolation by polynomials and radial basis functions.
Constr. Approx. 21, pp. 293–317.


References XIV

Schaback, R. (2008).
Limit problems for interpolation by analytic radial basis functions.
J. Comp. Appl. Math. 212 (2), pp. 127–149.
Song, G., Riddle, J., Fasshauer, G. E. and Hickernell, F. J. (2009).
Multivariate interpolation with increasingly flat radial basis functions of finite
smoothness.
Submitted.
Tarwater, A. E. (1985).
A parameter study of Hardy's multiquadric method for scattered data
interpolation.
Lawrence Livermore National Laboratory, TR UCRL-563670.
Wendland, H. (1997).
Sobolev-type error estimates for interpolation by radial basis functions.
in Surface Fitting and Multiresolution Methods, A. Le Méhauté, C. Rabut, and
L. L. Schumaker (eds.), Vanderbilt University Press (Nashville, TN), pp. 337–344.


References XV

Wendland, H. (1998).
Error estimates for interpolation by compactly supported radial basis functions of
minimal degree.
J. Approx. Theory 93, pp. 258–272.
Wendland, H. (2001).
Gaussian interpolation revisited.
in Trends in Approximation Theory, K. Kopotun, T. Lyche, and M. Neamtu (eds.),
Vanderbilt University Press, pp. 417–426.
Wertz, J., Kansa, E. J. and Ling, L. (2006).
The role of the multiquadric shape parameters in solving elliptic partial differential
equations.
Comput. Math. Appl. 51 (8), pp. 1335–1348.
Wu, Z. and Schaback, R. (1993).
Local error estimates for radial basis function interpolation of scattered data.
IMA J. Numer. Anal. 13, pp. 13–27.


References XVI

Ye, Q. (2010).
Reproducing kernels of generalized Sobolev spaces via a Green function
approach with differential operators.
Submitted.
Yoon, J. (2001).
Interpolation by radial basis functions on Sobolev space.
J. Approx. Theory 112, pp. 1–15.
Yoon, J. (2003).
Lp-error estimates for shifted surface spline interpolation on Sobolev space.
Math. Comp. 72, pp. 1349–1367.

Luh, L.-T. (2010a).
The mystery of the shape parameter: Parts I–IV.
http://arxiv.org/abs/1001.5087, http://arxiv.org/abs/1002.2082,
http://arxiv.org/abs/1004.0759, http://arxiv.org/abs/1004.0761.


References XVII

Luh, L.-T. (2010b).
The shape parameter in the Gaussian function.
http://arxiv.org/abs/1006.2318.
