Trigonometric Interpolation
Is there anything other than polynomials and Taylor approximation in the world?

Idea (J. Fourier, 1822): approximate a function not by ordinary polynomials but by trigonometric polynomials, i.e. partial sums of a Fourier series:
T_{2m}(t) := \sum_{j=-m}^{m} \gamma_j \, e^{2\pi i j t}, \qquad \gamma_j \in \mathbb{C}, \; t \in \mathbb{R}.
Remark 6.0.1. T_{2m} : \mathbb{R} \to \mathbb{C} is periodic with period 1. Moreover, if \gamma_{-j} = \bar{\gamma}_j for all j = 0, \ldots, m, then T_{2m}(t) takes only real values and may be written as

T_{2m}(t) = \frac{a_0}{2} + \sum_{j=1}^{m} \bigl( a_j \cos(2\pi j t) + b_j \sin(2\pi j t) \bigr)

with a_0 = 2\gamma_0 and a_j = 2 \operatorname{Re} \gamma_j, \; b_j = -2 \operatorname{Im} \gamma_j for all j = 1, \ldots, m.
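The equivalence of the complex and the real representation can be checked numerically. The following sketch (not from the original notes; coefficient values chosen arbitrarily) builds a T_{2m} with conjugate-symmetric coefficients and compares both forms:

```python
import numpy as np

# Trigonometric polynomial with gamma_{-j} = conj(gamma_j): compare the
# complex form with the real cos/sin form a_j = 2 Re gamma_j, b_j = -2 Im gamma_j.
rng = np.random.default_rng(0)
m = 3
gamma = rng.normal(size=m + 1) + 1j * rng.normal(size=m + 1)  # gamma_0..gamma_m
gamma[0] = gamma[0].real                                      # gamma_0 must be real

def T(t):  # complex form, with gamma_{-j} := conj(gamma_j)
    s = gamma[0] * np.ones_like(t, dtype=complex)
    for j in range(1, m + 1):
        s += gamma[j] * np.exp(2j*np.pi*j*t) + np.conj(gamma[j]) * np.exp(-2j*np.pi*j*t)
    return s

a = 2 * gamma.real    # a_0 = 2 gamma_0, a_j = 2 Re gamma_j
b = -2 * gamma.imag   # b_j = -2 Im gamma_j

def T_real(t):  # real cos/sin form
    s = a[0] / 2 * np.ones_like(t)
    for j in range(1, m + 1):
        s += a[j] * np.cos(2*np.pi*j*t) + b[j] * np.sin(2*np.pi*j*t)
    return s

t = np.linspace(0, 1, 17)
assert np.allclose(T(t).imag, 0, atol=1e-12)  # takes only real values
assert np.allclose(T(t).real, T_real(t))      # both representations agree
```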
Remark 6.0.2. The functions w_j(t) = e^{2\pi i j t} are orthogonal with respect to the L^2(]0,1[) scalar product.

Let us take for granted (or known from lectures in Analysis, Mathematical Methods of Physics, etc.):

Theorem 6.0.1 (L^2-convergence of the Fourier series). Every square integrable function

f \in L^2(]0,1[) := \{ f : ]0,1[ \to \mathbb{C} : \|f\|_{L^2(]0,1[)} < \infty \}

is the L^2(]0,1[)-limit of its Fourier series

f(t) = \sum_{k=-\infty}^{\infty} \hat f(k) \, e^{2\pi i k t},

with Fourier coefficients defined by

\hat f(k) = \int_0^1 f(t) \, e^{-2\pi i k t} \, dt, \qquad k \in \mathbb{Z}.
Remark 6.0.3. In view of this theorem, we may think of one function in two ways: once in the time (or space) domain t \mapsto f(t), and once in the frequency domain k \mapsto \hat f(k).
Moreover, Parseval's identity holds:

\sum_{k=-\infty}^{\infty} |\hat f(k)|^2 = \|f\|^2_{L^2(]0,1[)}.   (6.0.1)
If f is real-valued, the Fourier series may be rewritten as

f(t) = \frac{a_0}{2} + \sum_{j=1}^{\infty} \bigl( a_j \cos(2\pi j t) + b_j \sin(2\pi j t) \bigr) \quad \text{in } L^2(]0,1[),

with

a_j = 2 \int_0^1 f(t) \cos(2\pi j t) \, dt, \quad j \geq 0,
b_j = 2 \int_0^1 f(t) \sin(2\pi j t) \, dt, \quad j \geq 1.
Remark 6.0.5. For f with n-th derivative f^{(n)} \in L^2(]0,1[):

\|f^{(n)}\|^2_{L^2(]0,1[)} = (2\pi)^{2n} \sum_{k=-\infty}^{\infty} k^{2n} |\hat f(k)|^2.   (6.0.2)

The smoothness of a function is directly reflected in the rapid decay of its Fourier coefficients.
Example 6.0.6. The Fourier series associated with the characteristic function of an interval [a, b] \subset [0,1] is

f(t) = (b - a) + \sum_{|k| \geq 1} e^{-\pi i k (a+b)} \, \frac{\sin(\pi k (b-a))}{\pi k} \, e^{2\pi i k t}, \qquad t \in [0, 1].

Note the slow decay (\sim 1/k) of the Fourier coefficients, and hence expect slow convergence of the series. Moreover, observe in the pictures below the Gibbs phenomenon: the ripples move closer to the discontinuities and do not die out with larger n. Explanation: we have L^2-convergence but no uniform convergence.
[Figure: partial sums of the Fourier series of a characteristic function on [0, 1] for n = 10 and n = 70, showing the Gibbs phenomenon near the jumps.]
Remark 6.0.7. Usually one cannot compute \hat f(k) analytically, or one has to rely only on discrete values at the nodes x_\ell = \ell/N, \ell = 0, 1, \ldots, N (with f(x_0) = f(x_N) by periodicity); the trapezoidal rule then gives

\hat f(k) \approx \frac{1}{N} \sum_{\ell=0}^{N-1} f(x_\ell) \, e^{-2\pi i k x_\ell} =: \hat f_N(k).   (6.0.3)
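The values \hat f_N(k), k = 0, \ldots, N-1, together form exactly one discrete Fourier transform, so numpy's fft evaluates all of them at once. A small sketch (test function chosen by the editor): for f(t) = e^{2\pi i \cdot 3 t} the only nonzero Fourier coefficient is \hat f(3) = 1, and (6.0.3) reproduces it exactly.

```python
import numpy as np

# Trapezoidal-rule Fourier coefficients (6.0.3) computed via one FFT.
N = 16
x = np.arange(N) / N                  # nodes x_l = l/N
f = lambda t: np.exp(2j*np.pi*3*t)    # hat f(3) = 1, all other coefficients 0
fN = np.fft.fft(f(x)) / N             # fN[k] = hat f_N(k), k = 0..N-1
assert np.allclose(fN[3], 1.0)
assert np.allclose(np.delete(fN, 3), 0.0, atol=1e-12)
```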
6.1 Discrete Fourier transform (DFT)
Let n \in \mathbb{N} be fixed and denote the n-th root of unity

\omega_n := \exp(-2\pi i / n) = \cos(2\pi/n) - i \sin(2\pi/n),

hence

\omega_n^{k+n} = \omega_n^k \;\; \forall k \in \mathbb{Z}, \qquad \omega_n^n = 1, \qquad \omega_n^{n/2} = -1 \;\; (n \text{ even}),   (6.1.1)

\sum_{k=0}^{n-1} \omega_n^{kj} = \begin{cases} n, & \text{if } j \equiv 0 \pmod n, \\ 0, & \text{otherwise.} \end{cases}   (6.1.2)

A change of basis in \mathbb{C}^n: from the standard basis of \mathbb{C}^n to the Fourier basis, encoded by the Fourier matrix

F_n = \begin{pmatrix}
\omega_n^0 & \omega_n^0 & \cdots & \omega_n^0 \\
\omega_n^0 & \omega_n^1 & \cdots & \omega_n^{n-1} \\
\omega_n^0 & \omega_n^2 & \cdots & \omega_n^{2(n-1)} \\
\vdots & \vdots & & \vdots \\
\omega_n^0 & \omega_n^{n-1} & \cdots & \omega_n^{(n-1)^2}
\end{pmatrix}
= \bigl( \omega_n^{ij} \bigr)_{i,j=0}^{n-1} \in \mathbb{C}^{n,n}.   (6.1.3)
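The identities (6.1.1) and (6.1.2) are easy to confirm numerically; a short sketch (n chosen arbitrarily):

```python
import numpy as np

# Numerical check of (6.1.1) and (6.1.2) for a small n.
n = 8
w = np.exp(-2j*np.pi/n)              # omega_n
assert np.isclose(w**n, 1.0)         # omega_n^n = 1
assert np.isclose(w**(n//2), -1.0)   # omega_n^{n/2} = -1
assert np.isclose(w**(3 + n), w**3)  # n-periodicity of the powers
for j in range(n):                   # geometric-sum identity (6.1.2)
    s = sum(w**(k*j) for k in range(n))
    assert np.isclose(s, n if j % n == 0 else 0.0, atol=1e-10)
```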
[Figure: real part ("Realteil") and imaginary part ("Imaginaerteil") of DFT coefficients plotted against the coefficient index ("Koeffizientenindex").]
A vector y \in \mathbb{C}^n given in the standard basis is mapped to its coefficient vector c \in \mathbb{C}^n in the Fourier basis by the discrete Fourier transform (DFT):

F_n(y) := c, \qquad c_k = \sum_{j=0}^{n-1} y_j \, \omega_n^{kj}, \quad k = 0, \ldots, n-1.   (6.1.4)
Lemma 6.1.2. The scaled Fourier matrix \frac{1}{\sqrt{n}} F_n is unitary:

F_n^{-1} = \frac{1}{n} F_n^H = \frac{1}{n} \bar F_n.

Remark 6.1.1. \bigl( \frac{1}{n} F_n^2 \bigr)^2 = I and \frac{1}{n^2} F_n^4 = I; hence the eigenvalues of \frac{1}{\sqrt{n}} F_n lie in the set \{1, -1, i, -i\}.
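Both statements can be verified numerically for a small n; a sketch (not part of the original notes):

```python
import numpy as np

# Check Lemma 6.1.2 and Remark 6.1.1 for n = 6.
n = 6
F = np.exp(-2j*np.pi*np.outer(np.arange(n), np.arange(n))/n)  # Fourier matrix (6.1.3)
U = F / np.sqrt(n)
assert np.allclose(U @ U.conj().T, np.eye(n))       # F_n/sqrt(n) is unitary
assert np.allclose(np.linalg.inv(F), F.conj()/n)    # F_n^{-1} = conj(F_n)/n
assert np.allclose(np.linalg.matrix_power(U, 4), np.eye(n))  # U^4 = I
ev = np.linalg.eigvals(U)
assert np.allclose(ev**4, 1.0)                      # eigenvalues are 4th roots of unity
```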
numpy functions: c = fft(y) and y = ifft(c).
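numpy's fft uses exactly the convention (6.1.4) (negative exponent, no scaling), which a two-line sketch confirms:

```python
import numpy as np

# numpy.fft.fft realizes multiplication with the Fourier matrix (6.1.4).
rng = np.random.default_rng(1)
n = 5
y = rng.normal(size=n) + 1j * rng.normal(size=n)
F = np.exp(-2j*np.pi*np.outer(np.arange(n), np.arange(n))/n)
c = np.fft.fft(y)
assert np.allclose(c, F @ y)            # DFT as in (6.1.4)
assert np.allclose(np.fft.ifft(c), y)   # ifft inverts fft
```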
Example 6.1.2 (Frequency analysis with DFT). Some vectors of the Fourier basis (n = 16):
[Fig. 54: real and imaginary parts of selected Fourier basis vectors plotted against the vector component k; "low-frequency" vectors oscillate slowly, "high-frequency" vectors oscillate quickly.]
from numpy import sin, pi, linspace, random, fft
from pylab import plot, bar, show

t = linspace(0, 63, 64)
x = sin(2*pi*t/64) + sin(7*2*pi*t/64)
y = x + random.randn(len(t))      # distortion (additive noise)
c = fft.fft(y)
p = abs(c)**2/64                  # power spectrum
plot(t, y, '-+'); show()
bar(t[:32], p[:32]); show()
[Fig. 55: the noisy signal y over t = 0, ..., 63. Fig. 56: the power spectrum |c_k|^2 against the coefficient index k, with pronounced peaks at the frequencies k = 1 and k = 7 contained in the signal.]
tic-toc timing: compare fft, a loop-based implementation, and direct matrix multiplication.
Code 6.2.2: timing of different implementations of DFT

from numpy import zeros, random, exp, pi, meshgrid, r_, dot, fft
import timeit

def naiveFT():
    global y, n
    # naive, loop-based summation (Horner-like evaluation in powers of omega)
    c = 0.*1j*y
    omega = exp(-2*pi*1j/n)
    c[0] = y.sum(); s = omega
    for jj in xrange(1, n):
        c[jj] = y[n-1]
        for kk in xrange(n-2, -1, -1):
            c[jj] = c[jj]*s + y[kk]
        s *= omega

def matrixFT():
    global y, n
    # matrix based
    I, J = meshgrid(r_[:n], r_[:n])
    F = exp(-2*pi*1j*I*J/n)
    c = dot(F, y)

def fftFT():
    global y, n
    c = fft.fft(y)

nrexp = 5
N = 2**11  # how large the vector will be
res = zeros((N, 4))
for n in xrange(1, N+1):
    y = random.rand(n)
    t = timeit.Timer('naiveFT()', 'from __main__ import naiveFT')
    tn = t.timeit(number=nrexp)
    t = timeit.Timer('matrixFT()', 'from __main__ import matrixFT')
    tm = t.timeit(number=nrexp)
    t = timeit.Timer('fftFT()', 'from __main__ import fftFT')
    tf = t.timeit(number=nrexp)
    res[n-1] = [n, tn, tm, tf]

from pylab import semilogy, savefig, show
semilogy(res[:, 0], res[:, 1], 'b')
semilogy(res[:, 0], res[:, 2], 'k')
semilogy(res[:, 0], res[:, 3], 'r')
savefig('ffttime.eps')
show()
[Figure: measured run times [s] versus vector length n (semi-logarithmic scale) for the naive loop-based DFT, the matrix-based DFT, and fft().]
Incredible! The fft() function clearly beats the O(n^2) asymptotic complexity of the other implementations. Note the logarithmic scale!

The secret of fft(): the Fast Fourier Transform algorithm [15] (discovered by C.F. Gauss in 1805, rediscovered by Cooley & Tukey in 1965; one of the top ten algorithms of the century).
For n = 2m:

c_k = \sum_{j=0}^{n-1} y_j \, \omega_n^{jk}
    = \sum_{j=0}^{m-1} y_{2j} \, e^{-2\pi i \, (2j)k/n} + e^{-2\pi i k/n} \sum_{j=0}^{m-1} y_{2j+1} \, e^{-2\pi i \, (2j)k/n}.   (6.2.1)

Note the m-periodicity in k of both partial sums, since e^{-2\pi i (2j)k/n} = \omega_m^{jk}:

c_k = (F_m \, y_{even})_{k \bmod m} + \omega_n^k \, (F_m \, y_{odd})_{k \bmod m},

where y_{even} = (y_0, y_2, \ldots, y_{n-2})^T and y_{odd} = (y_1, y_3, \ldots, y_{n-1})^T.
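One step of this divide-and-conquer identity is easy to verify with numpy (sketch by the editor):

```python
import numpy as np

# One level of (6.2.1): for n = 2m,
# c_k = (F_m y_even)_{k mod m} + omega_n^k (F_m y_odd)_{k mod m}.
rng = np.random.default_rng(2)
n = 8; m = n // 2
y = rng.normal(size=n)
c_even = np.fft.fft(y[::2])      # F_m y_even
c_odd = np.fft.fft(y[1::2])      # F_m y_odd
k = np.arange(n)
c = c_even[k % m] + np.exp(-2j*np.pi*k/n) * c_odd[k % m]
assert np.allclose(c, np.fft.fft(y))
```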
Code 6.2.3: Recursive FFT

from numpy import linspace, hstack, pi, exp

def fftrec(y):
    n = len(y)
    if n == 1:
        return y
    else:
        c0 = fftrec(y[::2])      # DFT of the even-indexed entries
        c1 = fftrec(y[1::2])     # DFT of the odd-indexed entries
        r = (-2.j*pi/n)*linspace(0, n-1, n)
        return hstack((c0, c0)) + exp(r)*hstack((c1, c1))

FFT algorithm: one DFT of length 2^L is reduced to two DFTs of length 2^{L-1}, then four of length 2^{L-2}, and so on, down to 2^L DFTs of length 1. Each level of the recursion requires O(2^L) elementary operations, and there are L levels. Asymptotic complexity of the FFT algorithm for n = 2^L: O(L 2^L) = O(n \log_2 n).
For n = 2m, the even/odd splitting (6.2.1) amounts to a matrix factorization: with the permutation P_{OE} that sorts the even-indexed components first,

F_n P_{OE}^T = \begin{pmatrix} I_m & D_m \\ I_m & -D_m \end{pmatrix}
\begin{pmatrix} F_m & 0 \\ 0 & F_m \end{pmatrix},
\qquad D_m := \operatorname{diag}(\omega_n^0, \omega_n^1, \ldots, \omega_n^{m-1}).

Example n = 10: writing only the exponents of \omega := \omega_{10},

F_{10} P_{OE}^T =
 0 0 0 0 0 | 0 0 0 0 0
 0 2 4 6 8 | 1 3 5 7 9
 0 4 8 2 6 | 2 6 0 4 8
 0 6 2 8 4 | 3 9 5 1 7
 0 8 6 4 2 | 4 2 0 8 6
 0 0 0 0 0 | 5 5 5 5 5
 0 2 4 6 8 | 6 8 0 2 4
 0 4 8 2 6 | 7 1 5 9 3
 0 6 2 8 4 | 8 4 0 6 2
 0 8 6 4 2 | 9 7 5 3 1

The left half repeats F_5 twice; the right half consists of D_5 F_5 on top and -D_5 F_5 below (the offset 5 in the exponents encodes the sign, since \omega_{10}^5 = -1).
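The block factorization can be checked numerically for the n = 10 example above (editor's sketch):

```python
import numpy as np

# Even/odd block factorization for n = 2m:
# F_n P_OE^T = [[F_m, D F_m], [F_m, -D F_m]], D = diag(omega_n^0..omega_n^{m-1}).
n = 10; m = n // 2
w = np.exp(-2j*np.pi/n)
F = lambda k: np.exp(-2j*np.pi*np.outer(np.arange(k), np.arange(k))/k)
Fn, Fm = F(n), F(m)
P = np.zeros((n, n))                           # P_OE: even indices first, then odd
P[np.arange(n), np.r_[0:n:2, 1:n:2]] = 1
D = np.diag(w**np.arange(m))
blocks = np.block([[Fm, D @ Fm], [Fm, -D @ Fm]])
assert np.allclose(Fn @ P.T, blocks)
```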
What if n \neq 2^L?

Quote from the FFTW documentation: "To compute an n-point DFT when n is composite (that is, when n = pq), the FFTW library decomposes the problem using the Cooley-Tukey algorithm, which first computes p transforms of size q, and then computes q transforms of size p. The decomposition is applied recursively to both the p- and q-point DFTs until the problem can be solved using one of several machine-generated fixed-size 'codelets.' The codelets in turn use several algorithms in combination, including a variation of Cooley-Tukey, a prime factor algorithm, and a split-radix algorithm. The particular factorization of n is chosen heuristically."

"The execution time for fft depends on the length of the transform. It is fastest for powers of two. It is almost as fast for lengths that have only small prime factors. It is typically several times slower for lengths that are prime or which have large prime factors." Cf. Ex. 6.2.1.
Step I: for n = pq and j =: lp + m (0 \leq m < p, 0 \leq l < q):

c_k = \sum_{j=0}^{n-1} y_j \, \omega_n^{jk}
    = \sum_{m=0}^{p-1} \sum_{l=0}^{q-1} y_{lp+m} \, e^{-\frac{2\pi i}{pq}(lp+m)k}
    = \sum_{m=0}^{p-1} \omega_n^{mk} \Bigl[ \sum_{l=0}^{q-1} y_{lp+m} \, \omega_q^{l(k \bmod q)} \Bigr].   (6.2.2)

This requires the q-periodic values

z_{m,k} := \sum_{l=0}^{q-1} y_{lp+m} \, \omega_q^{lk},

i.e. p DFTs of length q.

Step II: for k =: rq + s, 0 \leq r < p, 0 \leq s < q:

c_{rq+s} = \sum_{m=0}^{p-1} e^{-\frac{2\pi i}{pq}(rq+s)m} z_{m,s}
         = \sum_{m=0}^{p-1} \bigl( \omega_n^{ms} z_{m,s} \bigr) \, \omega_p^{mr},

i.e. q DFTs of length p applied to the "twiddled" values \omega_n^{ms} z_{m,s}.
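The two-stage decomposition (6.2.2) fits in a few numpy lines; the following sketch (editor's implementation, helper name `ct_fft` hypothetical) reproduces the built-in FFT for a composite length:

```python
import numpy as np

# Two-stage Cooley-Tukey for composite n = p*q, following (6.2.2):
# stage I computes p DFTs of length q, stage II q DFTs of length p
# after multiplication with the twiddle factors omega_n^{ms}.
def ct_fft(y, p, q):
    n = p * q
    Y = y.reshape(q, p)                    # Y[l, m] = y[l*p + m]
    Z = np.fft.fft(Y, axis=0)              # Z[s, m] = z_{m,s} = sum_l y_{lp+m} omega_q^{ls}
    s = np.arange(q); m = np.arange(p)
    Z = Z * np.exp(-2j*np.pi*np.outer(s, m)/n)  # twiddle factors omega_n^{ms}
    C = np.fft.fft(Z, axis=1)              # C[s, r] = c_{rq+s}
    return C.T.reshape(n)                  # linear index k = r*q + s

rng = np.random.default_rng(3)
y = rng.normal(size=15)
assert np.allclose(ct_fft(y, 3, 5), np.fft.fft(y))
```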
Remark 6.2.6 (FFT for prime n). When n \neq 2^L, even the Cooley-Tukey algorithm of Rem. 6.2.5 will eventually lead to a DFT of a vector with prime length.

Quote from the FFTW documentation: "When n is a prime number, the FFTW library first decomposes an n-point problem into three (n-1)-point problems using Rader's algorithm [43]. It then uses the Cooley-Tukey decomposition described above to compute the (n-1)-point DFTs."
Idea: for prime p there exists a primitive root g, i.e.

\{ g^k \bmod p : k = 1, \ldots, p-1 \} = \{ 1, \ldots, p-1 \},

and the permutation P_{p,g}(k) = g^k \bmod p reorders the rows and columns of F_p so that, apart from the first row and column, the resulting matrix is circulant.

Example p = 13, g = 2, permutation: (2 4 8 3 6 12 11 9 5 10 7 1). Writing only the exponents of \omega := \omega_{13}, the permuted F_{13} reads

 0  0  0  0  0  0  0  0  0  0  0  0  0
 0  2  1  7 10  5  9 11 12  6  3  8  4
 0  4  2  1  7 10  5  9 11 12  6  3  8
 0  8  4  2  1  7 10  5  9 11 12  6  3
 0  3  8  4  2  1  7 10  5  9 11 12  6
 0  6  3  8  4  2  1  7 10  5  9 11 12
 0 12  6  3  8  4  2  1  7 10  5  9 11
 0 11 12  6  3  8  4  2  1  7 10  5  9
 0  9 11 12  6  3  8  4  2  1  7 10  5
 0  5  9 11 12  6  3  8  4  2  1  7 10
 0 10  5  9 11 12  6  3  8  4  2  1  7
 0  7 10  5  9 11 12  6  3  8  4  2  1
 0  1  7 10  5  9 11 12  6  3  8  4  2

Then apply fast (FFT based!) algorithms for multiplication with circulant matrices to the lower right (n-1) x (n-1) block.
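The circulant structure behind Rader's algorithm can be checked directly; in the following editor's sketch the rows are indexed by g^a and the columns by g^{-b}, one convenient variant of the reordering:

```python
import numpy as np

# Rader's observation for prime p: index rows by g^a and columns by g^{-b}
# (g a primitive root mod p); the nontrivial part of F_p becomes circulant,
# i.e. each row is the previous row shifted cyclically to the right.
p, g = 13, 2
ginv = pow(g, p - 2, p)                            # modular inverse of g
rows = [pow(g, a, p) for a in range(1, p)]         # (2 4 8 3 6 12 11 9 5 10 7 1)
cols = [pow(ginv, b, p) for b in range(p - 1)]
assert sorted(rows) == list(range(1, p))           # g is a primitive root
w = np.exp(-2j*np.pi/p)
C = np.array([[w**((r*c) % p) for c in cols] for r in rows])
for a in range(p - 2):                             # circulant: cyclic row shifts
    assert np.allclose(C[a+1], np.roll(C[a], 1))
```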
Theorem 6.3.1 (Trigonometric interpolant). Let N be even and z = \frac{1}{N} F_N y \in \mathbb{C}^N, with z_k read N-periodically in k. The trigonometric interpolant

p_N(x) := {\sum_{k=-N/2}^{N/2}}'' \, z_k \, e^{2\pi i k x}
        := \frac{1}{2} \bigl( z_{-N/2} \, e^{-2\pi i (N/2) x} + z_{N/2} \, e^{2\pi i (N/2) x} \bigr)
         + \sum_{|k| < N/2} z_k \, e^{2\pi i k x}

(the extreme frequencies k = \pm N/2 enter with weight 1/2) satisfies p_N(j/N) = y_j for j = 0, \ldots, N-1.
Fast interpolation by means of FFT: O(n \log n) asymptotic complexity, see Sect. 6.2, provided that the coefficient computation is reduced to a DFT. With p(t) = \sum_{j=-n}^{n} \gamma_j e^{2\pi i j t} and c := (\gamma_{-n}, \ldots, \gamma_n)^T, the interpolation conditions p(k/(2n+1)) = y_k read

\sum_{j=0}^{2n} c_j \exp\Bigl( 2\pi i \frac{jk}{2n+1} \Bigr) = z_k := \exp\Bigl( 2\pi i \frac{nk}{2n+1} \Bigr) y_k, \qquad k = 0, \ldots, 2n,

hence, by Lemma 6.1.2,

\bar F_{2n+1} c = z \quad \Longleftrightarrow \quad c = \frac{1}{2n+1} F_{2n+1} z.   (6.3.1)
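The coefficient recovery (6.3.1) amounts to a single FFT of the modulated samples; an editor's sketch with arbitrarily chosen coefficients:

```python
import numpy as np

# Recover gamma_j of p(t) = sum_{j=-n}^{n} gamma_j e^{2 pi i j t} from the
# samples y_k = p(k/(2n+1)) via one FFT, as in (6.3.1).
rng = np.random.default_rng(4)
n = 3; N = 2*n + 1
gamma = rng.normal(size=N) + 1j * rng.normal(size=N)   # gamma[j+n] = gamma_j
t = np.arange(N) / N
p = lambda x: sum(gamma[j+n] * np.exp(2j*np.pi*j*x) for j in range(-n, n+1))
y = p(t)
c = np.fft.fft(np.exp(2j*np.pi*n*t) * y) / N           # c[j] = gamma_{j-n}
assert np.allclose(c, gamma)
```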
Code 6.3.2: Computation of coefficients of trigonometric interpolation polynomial, equidistant nodes

from numpy import exp, pi, hstack, arange
from scipy import fft

def trigipequid(y):
    """Computes the expansion coefficients alpha_j, beta_j of the real
    trigonometric interpolation polynomial in equidistant points;
    y: vector of data y_0, ..., y_{2n}."""
    N = y.shape[0]
    if N % 2 != 1:
        raise ValueError("Odd number of points required!")
    n = (N - 1) // 2
    z = arange(N)
    # See (6.3.1)
    c = fft(exp(2.0j*pi*(n/(1.0*N))*z)*y)/(1.0*N)
    # alpha_j = gamma_{-j} + gamma_j and beta_j = -i (gamma_{-j} - gamma_j),
    # j = 1, ..., n; alpha_0 = gamma_0
    a = hstack([c[n], c[n-1::-1] + c[n+1:N]])
    b = -1.0j*(c[n-1::-1] - c[n+1:N])
    return (a, b)

if __name__ == "__main__":
    from numpy import linspace
    y = linspace(0, 1, 9)
    # Expected result:
    # a = 0.50000 -0.12500 -0.12500 -0.12500 -0.12500
    # b = -0.343435 -0.148969 -0.072169 -0.022041
    w = trigipequid(y)
    print(w)

Code 6.3.3: Computation of coefficients of trigonometric interpolation polynomial, general nodes

from numpy import hstack, arange, sin, cos, pi, outer
from numpy.linalg import solve
def trigpolycoeff(t, y):
    """Computes the expansion coefficients of the real trigonometric
    polynomial; t: vector of nodes t_0, ..., t_n in [0,1[;
    y: vector of data y_0, ..., y_n. Returns the vectors of expansion
    coefficients alpha_j, beta_j."""
    N = y.shape[0]
    if N % 2 != 1:
        raise ValueError("Odd number of points required!")
    n = (N - 1) // 2
    M = hstack([cos(2*pi*outer(t, arange(0, n+1))),
                sin(2*pi*outer(t, arange(1, n+1)))])
    c = solve(M, y)
    a = c[0:n+1]
    b = c[n+1:]
    return (a, b)

if __name__ == "__main__":
    from numpy import linspace
    t = linspace(0, 1, 5)
    y = linspace(0, 5, 5)
    # Expected values:
    # a = 2.5000e+00 -1.1150e-16 -2.5000e+00
    # b = -2.5000e+00 -1.0207e+16
    z = trigpolycoeff(t, y)
    print(z)

Example 6.3.4 (Runtime comparison for the computation of coefficients of trigonometric interpolation polynomials).
[Fig. 57: tic-toc timings of trigpolycoeff and trigipequid, runtime [s] versus n, log-log scale.]

from numpy import linspace, exp, cos, pi, array
from pylab import figure, show
import time

def trigipequidtiming():
    Nruns = 3
    times = []
    for n in xrange(10, 501, 10):
        print(n)
        N = 2*n + 1
        t = linspace(0, 1 - 1.0/N, N)
        y = exp(cos(2*pi*t))
        # "Infinity" for all practical purposes
        t1 = 10.0**10
        t2 = 10.0**10
        for k in xrange(Nruns):
            tic = time.time()
            a, b = trigpolycoeff(t, y)
            toc = time.time() - tic
            t1 = min(t1, toc)
            tic = time.time()
            a, b = trigipequid(y)
            toc = time.time() - tic
            t2 = min(t2, toc)
        times.append((n, t1, t2))
    times = array(times)
    fig = figure()
    ax = fig.gca()
    ax.loglog(times[:, 0], times[:, 1], "b+", label="trigpolycoeff")
    ax.loglog(times[:, 0], times[:, 2], "r", label="trigipequid")
    ax.set_xlabel(r"n")
    ax.set_ylabel(r"runtime [s]")
    ax.legend()
    fig.savefig("../PICTURES/trigipequidtiming.eps")

if __name__ == "__main__":
    trigipequidtiming()
    show()
Same observation as in Ex. 6.2.1: massive gain in efficiency through relying on FFT.
Task: efficient evaluation of the trigonometric interpolation polynomial, with coefficient vector c := (\gamma_0, \ldots, \gamma_{2n})^T in the representation q(t) = e^{-2\pi i n t} \sum_{j=0}^{2n} \gamma_j e^{2\pi i j t}, at the equidistant points k/N, k = 0, \ldots, N-1, with N \geq 2n+1:

q(k/N) = e^{-2\pi i kn/N} \sum_{j=0}^{2n} \gamma_j \exp\Bigl( 2\pi i \frac{jk}{N} \Bigr) = e^{-2\pi i kn/N} \, v_k, \qquad k = 0, \ldots, N-1,

with

v = \bar F_N \tilde c,   (6.3.2)

F_N the Fourier matrix, see (6.1.3), where \tilde c \in \mathbb{C}^N is obtained by zero padding of c:

(\tilde c)_k = \begin{cases} \gamma_k, & \text{for } k = 0, \ldots, 2n, \\ 0, & \text{for } k = 2n+1, \ldots, N-1. \end{cases}
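The zero-padding trick is easy to test in isolation; the following editor's sketch evaluates a plain polynomial sum \sum_{j=0}^{2n} c_j e^{2\pi i j t} (ignoring the modulation factor e^{-2\pi i n t} used in the text) at N equidistant points with one inverse FFT:

```python
import numpy as np

# Evaluate q(t) = sum_{j=0}^{2n} c_j e^{2 pi i j t} at the points k/N,
# k = 0..N-1, via zero padding and one inverse DFT, cf. (6.3.2).
rng = np.random.default_rng(5)
n, N = 3, 16
c = rng.normal(size=2*n+1) + 1j * rng.normal(size=2*n+1)
q = lambda t: sum(c[j] * np.exp(2j*np.pi*j*t) for j in range(2*n+1))
cpad = np.concatenate([c, np.zeros(N - (2*n+1))])   # zero padding to length N
v = np.fft.ifft(cpad) * N                           # v_k = sum_j cpad_j e^{2 pi i jk/N}
assert np.allclose(v, q(np.arange(N)/N))
```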
Code 6.3.7: Fast evaluation of trigonometric polynomial at equidistant points

from numpy import hstack, zeros, conj, exp, pi, arange
from scipy import fft

def trigipequidcomp(a, b, N):
    """Efficient evaluation of a trigonometric polynomial at equidistant
    points; the vectors a and b pass the coefficients alpha_j, beta_j."""
    n = a.shape[0] - 1
    if N < 2*n - 1:
        raise ValueError("N too small")
    gamma = 0.5*hstack([a[-1:0:-1] + 1.0j*b[::-1],
                        2*a[0],
                        a[1:] - 1.0j*b[0:]])
    # Zero padding
    ch = hstack([gamma, zeros((N - (2*n+1),))])
    # Multiplication with conjugate Fourier matrix
    v = conj(fft(conj(ch)))
    # Undo rescaling
    q = exp(-2.0j*pi*n*arange(N)/(1.0*N))*v
    return q

if __name__ == "__main__":
    from numpy import arange
    a = arange(1, 6)
    b = arange(6, 10)
    N = 10
    # Expected values:
    # 15.00000  21.34656  -5.94095   8.18514  -3.31230
    #  3.00000  -1.68770  -2.71300   0.94095 -24.81869
    print(trigipequidcomp(a, b, N))
Code 6.3.8: Equidistant points: fast on-the-fly evaluation of trigonometric interpolation polynomial

from numpy import zeros, fft, sqrt, pi, real, linalg, linspace, sin

def evaliptrig(y, N):
    n = len(y)
    if n % 2 == 0:
        c = fft.ifft(y)
        a = zeros(N, dtype=complex)
        a[:n//2] = c[:n//2]
        a[N-n//2:] = c[n//2:]
        v = fft.fft(a)
        return v
    else:
        raise TypeError("odd length")

f = lambda t: 1.0/sqrt(1.0 + 0.5*sin(2*pi*t))

vlinf = []; vl2 = []; vn = []
N = 4096; n = 2
while n < 1 + 2**7:
    t = linspace(0, 1, n+1)
    y = f(t[:-1])
    v = real(evaliptrig(y, N))
    t = linspace(0, 1, N+1)
    fv = f(t)
    d = abs(v - fv[:-1]); linf = d.max()
    l2 = linalg.norm(d)/sqrt(N)
    vlinf += [linf]; vl2 += [l2]; vn += [n]
    n *= 2

from pylab import semilogy, show
semilogy(vn, vl2, '+'); show()

Compare with

Code 6.3.9: Equidistant points: fast on-the-fly evaluation of trigonometric interpolation polynomial

from numpy import zeros, fft, sqrt, pi, real, linalg, linspace, sin

def evaliptrig(y, N):
    n = len(y)
    if n % 2 == 0:
        c = fft.fft(y)*(1.0/n)
        a = zeros(N, dtype=complex)
        a[:n//2] = c[:n//2]
        a[N-n//2:] = c[n//2:]
        v = fft.ifft(a)*N
        return v
    else:
        raise TypeError("odd length")

f = lambda t: 1.0/sqrt(1.0 + 0.5*sin(2*pi*t))

vlinf = []; vl2 = []; vn = []
N = 4096; n = 2
while n < 1 + 2**7:
    t = linspace(0, 1, n+1)
    y = f(t[:-1])
    v = real(evaliptrig(y, N))
    t = linspace(0, 1, N+1)
    fv = f(t)
    d = abs(v - fv[:-1]); linf = d.max()
    l2 = linalg.norm(d)/sqrt(N)
    vlinf += [linf]; vl2 += [l2]; vn += [n]
    n *= 2

from pylab import semilogy, show
semilogy(vn, vl2, '+'); show()
[Figure: interpolation error norms ("Interpolationsfehler", L-infinity and L^2) versus n+1 = 16, 32, 64, 128 on a semi-logarithmic scale, for f(t) = 1/\sqrt{1 + \frac{1}{2}\sin(2\pi t)}.]
Note: [Figure: the function f and its trigonometric interpolant p on [0, 1] for n = 16 and n = 128.]
[Figure: interpolation error norms ("Interpolationsfehlernorm", L-infinity and L^2) versus polynomial degree ("Polynomgrad") n = 10, ..., 100 for

f(t) = \frac{1}{1 - \alpha \sin(2\pi t)} \quad \text{on } I = [0, 1],

with \alpha = 0.5, 0.9, 0.95, 0.99.]
Analysis uses the aliasing identity for the trapezoidal-rule coefficients (6.0.3):

\hat f_N(k) - \hat f(k) = \sum_{j \in \mathbb{Z}, \, j \neq 0} \hat f(k + jN),

valid whenever \sum_{k \in \mathbb{Z}} |\hat f(k)| converges absolutely. Consequences:

\hat f_N(k) is a bad approximation of \hat f(k) for k large (k \approx N), but for |k| \ll N/2 it is good enough.

(Sampling theorem) Suppose \hat f(k) = 0 for all |k| > M. Then p_N(x) = f(x) for all x \in \mathbb{R}, if N > 2M.
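The sampling statement can be illustrated numerically (editor's sketch; test function chosen with M = 3): a trigonometric polynomial of degree M is reproduced exactly from N > 2M equidistant samples.

```python
import numpy as np

# Exact reconstruction of a bandlimited function (hat f(k) = 0 for |k| > M)
# from N > 2M equidistant samples.
M, N = 3, 8                          # N > 2M
t = np.arange(N) / N
f = lambda x: 1 + np.cos(2*np.pi*2*x) + 0.5*np.sin(2*np.pi*3*x)
c = np.fft.fft(f(t)) / N             # DFT coefficients of the samples
# Reassemble f from the recovered frequencies -M..M (FFT index k mod N)
x = np.linspace(0, 1, 101)
p = sum(c[k % N] * np.exp(2j*np.pi*k*x) for k in range(-M, M + 1))
assert np.allclose(p.real, f(x))
assert np.allclose(p.imag, 0, atol=1e-12)
```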
On the grid T = \{ j/n \}_{j=0}^{n-1} we have

e^{2\pi i N t} \big|_T = e^{2\pi i (N \bmod n) t} \big|_T,

hence: the trigonometric interpolations of t \mapsto e^{2\pi i N t} and t \mapsto e^{2\pi i (N \bmod n) t} in n points give the same trigonometric interpolation polynomial! (Aliasing.)
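The grid identity itself is a one-liner to check (editor's sketch, using the document's N = 38):

```python
import numpy as np

# Aliasing: on the grid t_j = j/n the functions e^{2 pi i N t} and
# e^{2 pi i (N mod n) t} take identical values.
n, N = 10, 38
t = np.arange(n) / n
assert np.allclose(np.exp(2j*np.pi*N*t), np.exp(2j*np.pi*(N % n)*t))
```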
For N = 38: [Figure: the function f and its trigonometric interpolants p on [0, 1] for n = 4, n = 8, and n = 16.]
For f(t) = \sum_{j=-\infty}^{\infty} \hat f(j) \, e^{2\pi i j t}, the trigonometric interpolation operator T_n (with n = 2m interpolation points) yields

T_n(f)(t) = \sum_{j=-m+1}^{m} \gamma_j \, e^{2\pi i j t}, \qquad \gamma_j = \sum_{l=-\infty}^{\infty} \hat f(j + ln).

The resulting coefficient error is

e(j) = \begin{cases} \sum_{|l| \geq 1} \hat f(j + ln), & \text{if } -m+1 \leq j \leq m, \\ \hat f(j), & \text{if } j \leq -m \text{ or } j > m, \end{cases}

so that, by Parseval's identity (6.0.1),

\| f - T_n(f) \|^2_{L^2(]0,1[)} = \sum_{j=-m+1}^{m} \Bigl| \sum_{|l| \geq 1} \hat f(j + ln) \Bigr|^2 + \sum_{j \leq -m \text{ or } j > m} |\hat f(j)|^2.   (6.4.1)
The right-hand side of (6.4.1) may be estimated if we know the decay of the Fourier coefficients \hat f(j), i.e. the smoothness of f, see (6.0.2). For k \in \mathbb{N} and f with f^{(k)} \in L^2(]0,1[):

\| f - T_n f \|_{L^2(]0,1[)} \leq (1 + c_k) \, n^{-k} \, \| f^{(k)} \|_{L^2(]0,1[)}

with a constant c_k > 0 depending only on k.
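This "spectral" error behavior is easy to observe: for an analytic 1-periodic function the interpolation error collapses very quickly with n. An editor's sketch (the coefficient-recovery formula is the one from (6.3.1)):

```python
import numpy as np

# Trigonometric interpolation error for the analytic 1-periodic function
# f(t) = exp(cos(2 pi t)): the error decays faster than any power of n.
f = lambda t: np.exp(np.cos(2*np.pi*t))

def trig_interp_error(n, M=512):
    N = 2*n + 1
    t = np.arange(N) / N
    y = f(t)
    c = np.fft.fft(np.exp(2j*np.pi*n*t) * y) / N   # c[j] = gamma_{j-n}, cf. (6.3.1)
    x = np.arange(M) / M
    p = sum(c[j] * np.exp(2j*np.pi*(j - n)*x) for j in range(N))
    return np.abs(p.real - f(x)).max()

assert trig_interp_error(4) > trig_interp_error(16)  # error decreases with n
assert trig_interp_error(16) < 1e-10                 # essentially machine precision
```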
Interpolation at the Chebychev nodes, see (5.4.5):

p(t_k) = f(t_k), \qquad t_k := \cos\Bigl( \frac{(2k+1)\pi}{2(n+1)} \Bigr), \quad k = 0, \ldots, n.

Define now, for f : [-1, 1] \to \mathbb{C} and p : [-1, 1] \to \mathbb{C}, the corresponding functions

g(s) := f(\cos 2\pi s), \qquad q(s) := p(\cos 2\pi s),

which are 1-periodic and symmetric with respect to s = 0 and s = 1/2. Hence q interpolates g at the points \frac{2k+1}{4(n+1)}; with the shifted functions

\tilde g(s) := g\Bigl( s + \frac{1}{4(n+1)} \Bigr), \qquad \tilde q(s) := q\Bigl( s + \frac{1}{4(n+1)} \Bigr),

this becomes

\tilde q\Bigl( \frac{k}{2n+2} \Bigr) = \tilde g\Bigl( \frac{k}{2n+2} \Bigr), \qquad k = 0, \ldots, 2n+1.

Expanding p in Chebychev polynomials, p = \sum_{j=0}^{n} \alpha_j T_j, and using T_j(\cos 2\pi s) = \cos(2\pi j s):

\tilde q(s) = \sum_{j=0}^{n} \alpha_j T_j\Bigl( \cos 2\pi \bigl( s + \tfrac{1}{4(n+1)} \bigr) \Bigr) = \sum_{j=-n}^{n} \gamma_j \, e^{2\pi i j s}

with

\gamma_j := \begin{cases} \frac{1}{2} \exp\Bigl( \frac{2\pi i j}{4(n+1)} \Bigr) \alpha_{|j|}, & \text{if } j \neq 0, \\ \alpha_0, & \text{for } j = 0. \end{cases}

Hence the \alpha_j can be recovered from the values \tilde g(k/(2n+2)) = f(t_k) (suitably reflected) by one FFT of length 2n+2.
Code 6.5.1: Efficient Chebyshev interpolation

from numpy import exp, pi, real, hstack, arange
from scipy import ifft

def chebexp(y):
    """Efficiently compute the coefficients alpha_j in the Chebychev
    expansion p = sum_{j=0}^{n} alpha_j T_j of the polynomial p of degree n
    interpolating the values y_k = p(t_k), k = 0, ..., n, in the Chebychev
    nodes t_k. These values are passed in the vector y."""
    # degree of polynomial
    n = y.shape[0] - 1
    # create vector z by wrapping and componentwise scaling
    t = arange(0, 2*n + 2)
    z = exp(-pi*1.0j*n/(n + 1.0)*t)*hstack([y, y[::-1]])
    # solve for the gamma_j with one FFT
    c = ifft(z)
    # recover beta_j
    t = arange(-n, n + 2)
    b = real(exp(0.5j*pi/(n + 1.0)*t)*c)
    # recover alpha_j
    a = hstack([b[n], 2*b[n+1:2*n+1]])
    return a

if __name__ == "__main__":
    from numpy import array
    # Test with arbitrary values
    y = array([1, 2, 3, 3.5, 4, 6.5, 6.7, 8, 9])
    # Expected values:
    # 4.85556 -3.66200 0.23380 -0.25019 -0.15958 -0.36335 0.18889 0.16546 -0.27329
    w = chebexp(y)
    print(w)
You should know:
- the idea behind trigonometric approximation;
- what the Gibbs phenomenon is;
- the discrete Fourier transform and its use for trigonometric interpolation;
- the idea and the importance of the fast Fourier transform;
- how to use the fast Fourier transform for efficient trigonometric interpolation;
- the error behavior of trigonometric interpolation;
- the aliasing formula and the sampling theorem;
- how to use the fast Fourier transform for efficient Chebychev interpolation.