
Fractal transformations of harmonic functions

Michael F. Barnsley, Uta Freiberg


Department of Mathematics, Australian National University, Canberra, ACT, Australia
28 December 2006

ABSTRACT
The theory of fractal homeomorphisms is applied to transform a Sierpinski triangle into what we call a Kigami
triangle. The latter is such that the corresponding harmonic functions and the corresponding Laplacian ∆ take a
relatively simple form. This provides an alternative approach to recent results of Teplyaev. Using a second fractal
homeomorphism we prove that the outer boundary of the Kigami triangle possesses a continuous first derivative
at every point. This paper shows that IFS theory and the chaos game algorithm provide important tools for
analysis on fractals.

Introduction

We begin this paper with a brief review of the recently developed theory of fractal tops and fractal transformations. We emphasize that fractal transformations may be computed readily by means of the chaos game algorithm.
Then we develop a beautiful application: we show how the theory may be applied to transform the Sierpinski
triangle so that the corresponding harmonic functions and the corresponding Laplacian ∆ take a relatively simple
form. This provides an alternative approach to recent results of Teplyaev [28, 29, 30, 31]. In particular, Kigami [16, 17]
appears to have written the first papers regarding the representation of the Sierpinski triangle in harmonic coordinates,
and Meyers et al. [23] may have been the first to present the geometrical interpretation of the harmonic representation. Here we
prove, by using the fractal homeomorphism theorem a second and a third time, that the basic curves, from
which the Kigami triangle may be constructed, are continuously differentiable.

Other relevant references of which we are aware are Berger [9], Kusuoka [18, 19, 20], and also Strichartz [26],
top of page 194, where there is mention of three-by-three row-stochastic transformations in equations (5) and
their relation to the corresponding two-by-two matrices in equations (8), with mention of the eigenvectors and
eigenvalues of the latter. See also Barnsley [4, 5] concerning the impedance functions and spectra of renormalizable
electro-mechanical systems and their relation to Julia sets.

Hyperbolic IFS and Birkhoff's ergodic theorem

Definition 1. Let (X, d_X) be a complete metric space. Let {f_1, f_2, ..., f_N} be a finite sequence of strictly
contractive transformations, f_n : X → X, for n = 1, 2, ..., N. Then

F := {X; f_1, f_2, ..., f_N}

is called a hyperbolic iterated function system, or hyperbolic IFS.

A transformation f_n : X → X is strictly contractive iff there exists a number l_n ∈ [0, 1) such that d(f_n(x), f_n(y)) ≤ l_n d(x, y) for all x, y ∈ X.
Let Ω denote the set of all infinite sequences {σ_k}_{k=1}^∞ of symbols belonging to the alphabet {1, ..., N}. We write
σ = σ_1σ_2σ_3... to denote a typical element of Ω, and we write σ_k to denote the k-th element of σ. Then
(Ω, d_Ω) is a compact metric space, where the metric d_Ω is defined by d_Ω(σ, ω) = 0 when σ = ω and d_Ω(σ, ω) = 2^{-k}
when k is the least index for which σ_k ≠ ω_k. We call Ω the code space associated with the IFS F.
Let σ ∈ Ω and x ∈ X. Then, using the contractivity of F, it is straightforward to prove that

\[
\phi_F(\sigma) := \lim_{k \to \infty} f_{\sigma_1} \circ f_{\sigma_2} \circ \cdots \circ f_{\sigma_k}(x) \tag{1}
\]

exists, uniformly for x in any fixed compact subset of X, and depends continuously on σ. See for example [1], Theorem 3. Let

A_F = {φ_F(σ) : σ ∈ Ω}.

Then A_F ⊂ X is called the attractor of F. The continuous function

φ_F : Ω → A_F

is called the address function of F. We call φ_F^{-1}({x}) := {σ ∈ Ω : φ_F(σ) = x} the set of addresses of the point
x ∈ A_F.
Clearly A_F is compact, nonempty, and has the property

A_F = f_1(A_F) ∪ f_2(A_F) ∪ ... ∪ f_N(A_F).

Indeed, if we define H(X) to be the set of nonempty compact subsets of X, and we define F : H(X) → H(X) by

\[
F(S) = f_1(S) \cup f_2(S) \cup \cdots \cup f_N(S), \tag{2}
\]

for all S ∈ H(X), then A_F can be characterized as the unique fixed point of F, see Hutchinson [14], section 3.2,
and Williams [32].
IFSs may be used to represent diverse subsets of R². An IFS whose attractor is the Sierpinski triangle with
vertices at A = (0, 0), B = (1, 0), and C = (0.5, √0.75) is

F = {R²; f_1, f_2, f_3},

where f_1(x) = ½(x + A), f_2(x) = ½(x + B), and f_3(x) = ½(x + C), with x = (x, y). We may represent f_1 using
matrix notation for affine transformations, namely

\[
f_1 \begin{pmatrix} x \\ y \end{pmatrix}
= \begin{pmatrix} \tfrac{1}{2} & 0 \\ 0 & \tfrac{1}{2} \end{pmatrix}
\begin{pmatrix} x \\ y \end{pmatrix}
+ \frac{1}{2}\begin{pmatrix} A_x \\ A_y \end{pmatrix},
\]

and similarly for f_2, f_3. We will write ∆ABC to denote the triangle whose vertices are A, B, and C. Notice that
∆ABC is an equilateral triangle.
Definition 2. An IFS with probabilities is a hyperbolic IFS {X; f1 , f2 , ..., fN } together with a set of real
numbers pn > 0 for n = 1, 2, ..., N with p1 + p2 + ... + pN = 1.

It is a basic result of IFS theory that given any IFS F with probabilities there exists a unique Borel measure μ
of norm one, namely a probability measure, such that

\[
\mu(B) = \sum_{n=1}^{N} p_n \, \mu(f_n^{-1}(B))
\]

for all Borel subsets B of X. The support of this measure is the attractor A_F of the IFS. This measure may be
referred to as the measure-attractor of the IFS. The standard measure on the Sierpinski triangle corresponds to
p_1 = p_2 = p_3 = 1/3. This is a normalized ln 3/ln 2-dimensional Hausdorff measure.


Both the measure-attractor and the attractor of an IFS can be described in terms of the chaos game algorithm.
We define a random orbit of x_0 ∈ X to be {x_k}_{k=0}^∞ where

x_{k+1} = f_{σ_{k+1}}(x_k),

where σ_k ∈ {1, 2, ..., N} and σ_k = n with probability p_n, independently of all other choices. (That is, the sequence
σ_1σ_2σ_3... is i.i.d.) Then, with probability one, the sequence of measures

\[
\mu_k := \frac{1}{k} \sum_{j=1}^{k} \delta_{x_j}
\]

converges weakly to μ and, also with probability one, the intersection of the nested (decreasing) sequence of
compact sets

\[
A_k = \overline{\{x_j\}_{j=k}^{\infty}}
\]

equals A_F. The bar denotes closure.
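To make the algorithm concrete, here is a minimal Python/NumPy sketch (ours, not taken from the paper) of the chaos game for the Sierpinski IFS F = {R²; f_1, f_2, f_3} defined above, with p_1 = p_2 = p_3 = 1/3; the tail of the random orbit fills out A_F, and its empirical measure approximates the standard measure.

import numpy as np

# Chaos game for the Sierpinski IFS: f_n(x) = (x + V_n)/2 with
# V_1 = A, V_2 = B, V_3 = C, and probabilities p_1 = p_2 = p_3 = 1/3.
vertices = np.array([[0.0, 0.0],                 # A
                     [1.0, 0.0],                 # B
                     [0.5, np.sqrt(0.75)]])      # C

rng = np.random.default_rng(seed=0)
x = np.array([0.3, 0.3])                         # arbitrary starting point x_0
orbit = []
for k in range(100_000):
    n = rng.integers(3)                          # pick one of the three maps, probability 1/3 each
    x = 0.5 * (x + vertices[n])                  # x_{k+1} = f_n(x_k)
    orbit.append(x.copy())
orbit = np.array(orbit)
# Plotting 'orbit' (discarding the first few points) yields the familiar picture
# of the Sierpinski triangle; the empirical measure of the orbit converges
# weakly to the measure-attractor mu.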

The proof of these results relies on Birkhoff's ergodic theorem; see for example Forte and Mendivil [12]. The scholarly
history of the chaos game is discussed by Kaijser [15] and Stenflo [25], and appears to begin in 1935 with the work of
Onicescu and Mihok [24]. Mandelbrot [22], pp. 196-199, used a version of it to help compute pictures of certain Julia
sets; it was introduced to IFS theory and developed by one of the authors and his coworkers, see for example
Barnsley and Demko [1], Barnsley [6], Berger [10], and Elton [11], where the relevant theorems and much discussion
can be found.

Homeomorphisms between fractals

We order the elements of Ω according to

σ < ω  ⟺  σ_k > ω_k,

where k is the least index for which σ_k ≠ ω_k. This is a linear ordering, sometimes called the lexicographic
ordering.

Notice that all elements of Ω are less than or equal to the constant sequence 111... and greater than or equal to
the constant sequence NNN.... Also, any pair of distinct elements of Ω is such that one member of the pair is strictly greater than
the other. In particular, the set of addresses of a point x ∈ A_F is both closed and bounded above by 111.... It follows
that φ_F^{-1}({x}) possesses a unique largest element. We denote this element by τ_F(x).
Definition 1. Let F be a hyperbolic IFS with attractor A_F and address function φ_F : Ω → A_F. Let

τ_F(x) = max{σ ∈ Ω : φ_F(σ) = x} for all x ∈ A_F.

Then

Ω_F := {τ_F(x) : x ∈ A_F}

is called the tops code space and

τ_F : A_F → Ω_F

is called the tops function, for the IFS F.

We remark that Ω_F is a shift invariant subspace of Ω, see Barnsley [7]. Consequently the rich theory of symbolic
dynamics, see Lind and Marcus [21], can be brought to bear on the analysis of tops code spaces and can provide much information
about the underlying fractal structures.

But we will need the following. Let G denote a hyperbolic IFS that also consists of N functions. Then
φ_G ∘ τ_F : A_F → A_G is a mapping from the attractor of F into the attractor of G. We refer to φ_G ∘ τ_F as a fractal
transformation.
Definition 2. The address structure of F is defined to be the set of sets

C_F = {φ_F^{-1}({x}) ∩ Ω_F : x ∈ A_F}.

The address structure of an IFS is a certain partition of Ω_F. Let C_G denote the address structure of G. Let
us write C_F ≺ C_G to mean that for each S ∈ C_F there is T ∈ C_G such that S ⊂ T. Notice that if C_F = C_G then
Ω_F = Ω_G. Some examples of address structures are given by Barnsley [8].
Theorem 3. Let F and G be two hyperbolic IFSs such that C_F ≺ C_G. Then the fractal transformation
φ_G ∘ τ_F : A_F → A_G is continuous. If C_F = C_G then φ_G ∘ τ_F is a homeomorphism.

The proof is given by Barnsley [7].

Given two hyperbolic IFSs F and G such that C_F = C_G, we note that the attractor of the IFS F × G
defined by

F × G = {X × X; w_n = (f_n, g_n), n = 1, 2, ..., N}

contains the graph of the homeomorphism φ_G ∘ τ_F, which we denote here by Γ. Loosely speaking, the "top" of the attractor of F × G equals
Γ. Precisely, if A_F is totally disconnected or finitely ramified then A_{F×G} = Γ; in general it is straightforward to
extract Γ from A_{F×G}. But the point we want to emphasize here is that Γ can be computed by means of the
chaos game algorithm applied to the IFS F × G, see Barnsley [7].
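As an illustration of this last point, the following Python sketch (ours; variable names are assumptions) runs the chaos game on F × G, where F is the Sierpinski IFS above and G is the Kigami IFS of Equation (8) below, which has the same address structure. Since the Sierpinski triangle is finitely ramified, the orbit of pairs samples the graph Γ of the fractal homeomorphism of Section 5.2.

import numpy as np

# Chaos game on the product IFS F x G: w_n = (f_n, g_n).
V = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(0.75)]])   # A, B, C

# Linear parts and translations of g_1, g_2, g_3 from Equation (8) below.
L = [np.array([[0.4, 0.2], [0.2, 0.4]]),
     np.array([[0.6, 0.0], [-0.2, 0.2]]),
     np.array([[0.2, -0.2], [0.0, 0.6]])]
t = [np.array([0.0, 0.0]), np.array([0.4, 0.2]), np.array([0.2, 0.4])]

rng = np.random.default_rng(1)
x = V[0].copy()                   # point on A_F (the vertex A, address 111...)
xi = np.array([0.0, 0.0])         # corresponding point on A_G (fixed point of g_1)
graph_samples = []
for k in range(100_000):
    n = rng.integers(3)
    x = 0.5 * (x + V[n])          # apply f_n to the first coordinate
    xi = L[n] @ xi + t[n]         # apply g_n to the second coordinate
    graph_samples.append((x.copy(), xi.copy()))
# Each pair (x, xi) lies on the graph of the homeomorphism A_F -> A_G.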

Energy and harmonic functions on a Sierpinski triangle

Let F = {R²; f_1, f_2, f_3} denote the IFS associated with the Sierpinski triangle A_F with vertices at A, B, and
C. Let V_0 denote the set of vertices of the triangle ∆ABC. Namely, V_0 = {A, B, C}.

Let Ω_0 denote the set of finite sequences of symbols from the alphabet {1, 2, 3}, including the empty string.
That is,

\[
\Omega_0 = \bigcup_{k=1}^{\infty} \{1, 2, 3\}^k \cup \{\emptyset\}.
\]

Let σ ∈ Ω_0. We write |σ| to denote the length of σ. Then σ is the empty string if and only if |σ| = 0. We write
σ = σ_1σ_2σ_3...σ_{|σ|} to denote the components of σ when |σ| ≠ 0.

We write f_∅ to denote the identity map on A_F and, for σ ∈ Ω_0 with |σ| ≠ 0, we define

f_σ = f_{σ_1} ∘ f_{σ_2} ∘ ... ∘ f_{σ_{|σ|}}.

We define, for all m = 0, 1, 2, ...,

\[
V_m = \bigcup_{\{\sigma \in \Omega_0 : |\sigma| = m\}} f_\sigma(\{A, B, C\}).
\]

We also define V = ⋃_{m=0}^∞ V_m. That is, V is the set of all vertices of the Sierpinski triangle A_F.

We now follow a standard construction due to Kigami, see Strichartz [27]. Let u : V → R. Then we define, for
all m = 0, 1, 2, ...,

\[
E_m(u) = \sum_{\{x, y \in V_m : |x - y| = 2^{-m}\}} (u(x) - u(y))^2.
\]

Then if

\[
E(u) = \lim_{m \to \infty} \Big(\frac{5}{3}\Big)^m E_m(u)
\]

exists, we say that "u has finite energy E(u)". We also say that "u is in the domain of (the closure of) the
Laplacian", see Strichartz [27]. The value of the renormalization constant 5/3 follows from Equation (6) below.
Suppose we are given the values

u(A) = h_A, u(B) = h_B, u(C) = h_C.  (3)

Then the corresponding harmonic function h : V → R is defined to be the function u which minimizes E_m(u) for
each m, subject to the constraints in Equation (3).
We are going to construct h explicitly. It is useful, for generalizations, to understand where the various special
numbers and matrices come from. Let a denote the midpoint of the line segment BC, b the midpoint of
the line segment CA, and c the midpoint of AB. Then V_1 = {A, B, C, a, b, c}. We have

E_0(u) = |h_A − h_B|² + |h_B − h_C|² + |h_C − h_A|²

and

E_1(u) = |h_A − u(c)|² + |u(c) − h_B|² + |h_B − u(a)|² + |u(a) − h_C|² + |h_C − u(b)|² + |u(b) − h_A|² + |u(a) − u(b)|² + |u(b) − u(c)|² + |u(c) − u(a)|².

We minimize E_1(u) with respect to the values u(a), u(b), u(c). We find that at the minimum, where u(a) = h(a),
u(b) = h(b), and u(c) = h(c),

\[
\frac{\partial E_1(u)}{\partial u(a)} = (h(a) - h_B) + (h(a) - h_C) + (h(a) - h(c)) + (h(a) - h(b)) = 0,
\]

that is,

4h(a) − h(b) − h(c) = h_B + h_C,
and two other similar equations which may be obtained by cyclic permutation. It follows that

\[
\begin{pmatrix} 4 & -1 & -1 \\ -1 & 4 & -1 \\ -1 & -1 & 4 \end{pmatrix}
\begin{pmatrix} h(a) \\ h(b) \\ h(c) \end{pmatrix}
=
\begin{pmatrix} h_B + h_C \\ h_C + h_A \\ h_A + h_B \end{pmatrix}.
\]

Inverting, we find

\[
\begin{pmatrix} h(a) \\ h(b) \\ h(c) \end{pmatrix}
=
\begin{pmatrix} \tfrac{3}{10} & \tfrac{1}{10} & \tfrac{1}{10} \\ \tfrac{1}{10} & \tfrac{3}{10} & \tfrac{1}{10} \\ \tfrac{1}{10} & \tfrac{1}{10} & \tfrac{3}{10} \end{pmatrix}
\begin{pmatrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix}
\begin{pmatrix} h_A \\ h_B \\ h_C \end{pmatrix}
=
\begin{pmatrix} \tfrac{1}{5} & \tfrac{2}{5} & \tfrac{2}{5} \\ \tfrac{2}{5} & \tfrac{1}{5} & \tfrac{2}{5} \\ \tfrac{2}{5} & \tfrac{2}{5} & \tfrac{1}{5} \end{pmatrix}
\begin{pmatrix} h_A \\ h_B \\ h_C \end{pmatrix}. \tag{4}
\]

It follows that

\[
E_1(h) = \tfrac{6}{5} h_A^2 + \tfrac{6}{5} h_B^2 + \tfrac{6}{5} h_C^2 - \tfrac{6}{5} h_A h_B - \tfrac{6}{5} h_B h_C - \tfrac{6}{5} h_C h_A = \tfrac{3}{5} E_0(h).
\]
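As a quick numerical sanity check of Equation (4) and of the relation E_1(h) = (3/5) E_0(h), one can solve the minimization equations directly; here is a short Python sketch (ours, with arbitrary boundary values):

import numpy as np

hA, hB, hC = 1.0, 2.0, -0.5                       # arbitrary boundary values
M = np.array([[4.0, -1.0, -1.0],
              [-1.0, 4.0, -1.0],
              [-1.0, -1.0, 4.0]])
rhs = np.array([hB + hC, hC + hA, hA + hB])
h_a, h_b, h_c = np.linalg.solve(M, rhs)           # h(a), h(b), h(c)

# Equation (4): the solution equals the (1/5, 2/5, 2/5)-pattern matrix times (hA, hB, hC).
K = np.array([[1, 2, 2], [2, 1, 2], [2, 2, 1]]) / 5.0
assert np.allclose([h_a, h_b, h_c], K @ np.array([hA, hB, hC]))

E0 = (hA - hB)**2 + (hB - hC)**2 + (hC - hA)**2
E1 = ((hA - h_c)**2 + (h_c - hB)**2 + (hB - h_a)**2 + (h_a - hC)**2 +
      (hC - h_b)**2 + (h_b - hA)**2 + (h_a - h_b)**2 + (h_b - h_c)**2 + (h_c - h_a)**2)
assert np.isclose(E1, 0.6 * E0)                    # E1(h) = (3/5) E0(h)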

Now observe that A = f_1(A), B = f_2(B), C = f_3(C), a = f_3(B) = f_2(C), b = f_1(C) = f_3(A), and c = f_2(A) =
f_1(B). Using this, we deduce from (4) that

\[
\begin{pmatrix} h(f_i(A)) \\ h(f_i(B)) \\ h(f_i(C)) \end{pmatrix}
= A_i \begin{pmatrix} h_A \\ h_B \\ h_C \end{pmatrix},
\]

where

\[
A_1 = \begin{pmatrix} 1 & 0 & 0 \\ \tfrac{2}{5} & \tfrac{2}{5} & \tfrac{1}{5} \\ \tfrac{2}{5} & \tfrac{1}{5} & \tfrac{2}{5} \end{pmatrix}, \quad
A_2 = \begin{pmatrix} \tfrac{2}{5} & \tfrac{2}{5} & \tfrac{1}{5} \\ 0 & 1 & 0 \\ \tfrac{1}{5} & \tfrac{2}{5} & \tfrac{2}{5} \end{pmatrix}, \quad
A_3 = \begin{pmatrix} \tfrac{2}{5} & \tfrac{1}{5} & \tfrac{2}{5} \\ \tfrac{1}{5} & \tfrac{2}{5} & \tfrac{2}{5} \\ 0 & 0 & 1 \end{pmatrix}. \tag{5}
\]

By iterating the above steps, we readily deduce that

\[
E_m(h) = \tfrac{3}{5} E_{m-1}(h) = \Big(\tfrac{3}{5}\Big)^m E_0(h), \tag{6}
\]

and

\[
\begin{pmatrix} h(f_{\sigma_1\sigma_2\cdots\sigma_m}(A)) \\ h(f_{\sigma_1\sigma_2\cdots\sigma_m}(B)) \\ h(f_{\sigma_1\sigma_2\cdots\sigma_m}(C)) \end{pmatrix}
= A_{\sigma_m} \begin{pmatrix} h(f_{\sigma_1\sigma_2\cdots\sigma_{m-1}}(A)) \\ h(f_{\sigma_1\sigma_2\cdots\sigma_{m-1}}(B)) \\ h(f_{\sigma_1\sigma_2\cdots\sigma_{m-1}}(C)) \end{pmatrix}
= A_{\sigma_m} A_{\sigma_{m-1}} \cdots A_{\sigma_1} \begin{pmatrix} h_A \\ h_B \\ h_C \end{pmatrix},
\]

for all σ_1σ_2...σ_m ∈ Ω_0 with m ≥ 1. Strichartz [27], p. 17, says that it should be possible to obtain any desired
information about harmonic functions from this last expression, but that in practice it may require a lot of work.
We are going to show that it is quite easy to obtain a satisfactory description of h, both theoretical and practical,
starting from this expression.
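For example, the following few lines of Python (ours) evaluate the harmonic function at the three vertices of the level-three cell f_{σ_1σ_2σ_3}(∆ABC) by forming the product A_{σ_3}A_{σ_2}A_{σ_1}(h_A, h_B, h_C)^⊤:

import numpy as np

A1 = np.array([[5, 0, 0], [2, 2, 1], [2, 1, 2]]) / 5.0   # Equation (5)
A2 = np.array([[2, 2, 1], [0, 5, 0], [1, 2, 2]]) / 5.0
A3 = np.array([[2, 1, 2], [1, 2, 2], [0, 0, 5]]) / 5.0
A_mats = {1: A1, 2: A2, 3: A3}

h0 = np.array([1.0, 2.0, -0.5])        # boundary values (hA, hB, hC)
sigma = (1, 3, 2)                      # the cell f_1 o f_3 o f_2 (Delta ABC)

values = h0.copy()
for s in sigma:                        # left-multiply in the order sigma_1, ..., sigma_m,
    values = A_mats[s] @ values        # giving A_{sigma_m} ... A_{sigma_1} h0
print(values)                          # (h(f_sigma(A)), h(f_sigma(B)), h(f_sigma(C)))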

5  Harmonic functions and the Kigami triangle

5.1  Values of harmonic functions via the chaos game

We depart from the standard construction. We let m tend to infinity, use the continuity of h on A_F, and
exploit the uniform convergence in (1) to deduce that A_σ := lim_{m→∞} A_{σ_m} A_{σ_{m−1}} ⋯ A_{σ_1} exists for all σ ∈ Ω and

\[
\begin{pmatrix} h(\phi_F(\sigma)) \\ h(\phi_F(\sigma)) \\ h(\phi_F(\sigma)) \end{pmatrix}
= A_\sigma \begin{pmatrix} h_A \\ h_B \\ h_C \end{pmatrix}
\quad \text{for all } \sigma \in \Omega.
\]

We rewrite this as

\[
\mathbf{h}(\phi_F(\sigma)) = A_\sigma h_0,
\]

where \mathbf{h}(φ_F(σ)) = (h(φ_F(σ)), h(φ_F(σ)), h(φ_F(σ)))^⊤ and h_0 = (h_A, h_B, h_C)^⊤. This provides the value of h at the
point φ_F(σ). In order to obtain global information let v^⊤ = (v_1, v_2, v_3) ∈ R³ be a test vector and consider

\[
v \cdot \mathbf{h}(\phi_F(\sigma)) = v \cdot A_\sigma h_0 = A_\sigma^{\top} v \cdot h_0.
\]

It follows that

\[
A_i^{\top} A_\sigma^{\top} v \cdot h_0 = A_{i\sigma}^{\top} v \cdot h_0 = v \cdot \mathbf{h}(\phi_F(i\sigma)) \quad \text{for } i = 1, 2, 3.
\]

Hence we can apply algorithms of the chaos game type to evaluate sequences of values {v · \mathbf{h}(x_k, y_k)}_{k=0}^∞, where
the sequence of points {(x_k, y_k)}_{k=0}^∞ is distributed ergodically on the Sierpinski triangle according, say, to the
standard measure. For simplicity suppose we start from a point (x_0, y_0) = φ_F(σ) and vector v_0 = A_σ^⊤ v. Then at
the k-th iterative step we choose i ∈ {1, 2, 3} randomly, independently of all other choices, and set

\[
(x_k, y_k) = f_i(x_{k-1}, y_{k-1}), \quad v_k = A_i^{\top} v_{k-1}, \quad v_0 \cdot \mathbf{h}(x_k, y_k) = v_k \cdot h_0, \quad \text{for } k = 1, 2, 3, \dots \tag{7}
\]

Notice that the elements of the columns of the matrices A_i^⊤ sum to one. Hence each of the transformations
represented by the A_i^⊤ maps the plane

Σ_λ := {(x_1, x_2, x_3) ∈ R³ : x_1 + x_2 + x_3 = λ}

into itself for each λ ∈ R. It follows that if v_0 ∈ Σ_1 then v_k ∈ Σ_1 for all k, and then the last equation in (7) yields

h(x_k, y_k) = v_{k,1} h_A + v_{k,2} h_B + v_{k,3} h_C

for k = 1, 2, 3, ... Thus we have a simple algorithm which allows us to compute, rapidly, values of the harmonic
function h.
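A minimal Python sketch of the algorithm in Equation (7) (ours, not from the paper). For a convenient start we take (x_0, y_0) = A, whose address is 111..., and v_0 = (1, 0, 0) ∈ Σ_1, since then v_0 · h_0 = h_A = h(x_0, y_0):

import numpy as np

V = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(0.75)]])   # A, B, C

A_mats = [np.array([[5, 0, 0], [2, 2, 1], [2, 1, 2]]) / 5.0,   # A1 of Equation (5)
          np.array([[2, 2, 1], [0, 5, 0], [1, 2, 2]]) / 5.0,   # A2
          np.array([[2, 1, 2], [1, 2, 2], [0, 0, 5]]) / 5.0]   # A3

h0 = np.array([1.0, 2.0, -0.5])       # boundary values (hA, hB, hC)

rng = np.random.default_rng(0)
x = V[0].copy()                       # (x0, y0) = A, address 111...
v = np.array([1.0, 0.0, 0.0])         # v0 in Sigma_1 with v0 . h0 = hA
samples = []
for k in range(50_000):
    i = rng.integers(3)               # choose one of the three maps at random
    x = 0.5 * (x + V[i])              # (x_k, y_k) = f_i(x_{k-1}, y_{k-1})
    v = A_mats[i].T @ v               # v_k = A_i^T v_{k-1}; stays in Sigma_1
    samples.append((x.copy(), v @ h0))  # h(x_k, y_k) = v_k . h0
# 'samples' holds points of the Sierpinski triangle together with the values
# of the harmonic function h at those points.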

5.2  The Kigami triangle

We can obtain much finer information by looking at the set of points T_{v_0} = {A_σ^⊤ v_0 : σ ∈ Ω} ⊂ R³. Clearly T_{v_0}
obeys the self-referential equation

\[
T_{v_0} = A_1^{\top}(T_{v_0}) \cup A_2^{\top}(T_{v_0}) \cup A_3^{\top}(T_{v_0}).
\]

Notice that v_0 ∈ Σ_λ implies T_{v_0} ⊂ Σ_λ. Without loss of generality we choose λ = 1, and we represent the points
of Σ_1 by means of the transformation π : Σ_1 → R², π((α, β, γ)) = (β, γ), with inverse π^{-1}(β, γ) = (1 − β − γ, β, γ). We
readily find, in matrix representation,

\[
\pi = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}
\quad \text{and} \quad
\pi^{-1}\begin{pmatrix} \beta \\ \gamma \end{pmatrix}
= \begin{pmatrix} -1 & -1 \\ 1 & 0 \\ 0 & 1 \end{pmatrix}
\begin{pmatrix} \beta \\ \gamma \end{pmatrix}
+ \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}.
\]
Then the new transformations g_i = π ∘ A_i^⊤ ∘ π^{-1} : R² → R² are defined explicitly by

\[
g_1(\xi) = \begin{pmatrix} \tfrac{2}{5} & \tfrac{1}{5} \\ \tfrac{1}{5} & \tfrac{2}{5} \end{pmatrix} \xi, \quad
g_2(\xi) = \begin{pmatrix} \tfrac{3}{5} & 0 \\ -\tfrac{1}{5} & \tfrac{1}{5} \end{pmatrix} \xi + \begin{pmatrix} \tfrac{2}{5} \\ \tfrac{1}{5} \end{pmatrix}, \quad
g_3(\xi) = \begin{pmatrix} \tfrac{1}{5} & -\tfrac{1}{5} \\ 0 & \tfrac{3}{5} \end{pmatrix} \xi + \begin{pmatrix} \tfrac{1}{5} \\ \tfrac{2}{5} \end{pmatrix}. \tag{8}
\]

Notice that each of these affine transformations is strictly contractive. Hence

G = {R²; g_1, g_2, g_3}

is a hyperbolic IFS. Let T̃_{v_0} = π(T_{v_0}). Then T̃_{v_0} = g_1(T̃_{v_0}) ∪ g_2(T̃_{v_0}) ∪ g_3(T̃_{v_0}). This set is compact and nonempty.
Hence it must be the unique attractor of the IFS G, that is,

A_G = T̃_{v_0}.

It is easy to make a sketch of A_G with the aid of the chaos game algorithm. See Figure 1. We refer to A_G
as a Kigami triangle because Kigami first noted the simplicity of the Sierpinski triangle when it is represented using
harmonic coordinates. Alexander Teplyaev [28, 29, 30, 31] appears to be the first to publish pictures of a Kigami
triangle and some of its relatives.
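For instance, a picture of A_G can be produced with a few lines of Python (ours), applying the chaos game to the maps g_i of Equation (8):

import numpy as np

# Chaos game for the IFS G = {R^2; g1, g2, g3} of Equation (8).
L = [np.array([[0.4, 0.2], [0.2, 0.4]]),       # linear part of g1
     np.array([[0.6, 0.0], [-0.2, 0.2]]),      # linear part of g2
     np.array([[0.2, -0.2], [0.0, 0.6]])]      # linear part of g3
t = [np.array([0.0, 0.0]),                     # translation of g1
     np.array([0.4, 0.2]),                     # (2/5, 1/5), translation of g2
     np.array([0.2, 0.4])]                     # (1/5, 2/5), translation of g3

rng = np.random.default_rng(2)
xi = np.array([0.0, 0.0])
points = []
for k in range(200_000):
    i = rng.integers(3)
    xi = L[i] @ xi + t[i]                      # xi_{k+1} = g_i(xi_k)
    points.append(xi.copy())
points = np.array(points)
# A scatter plot of 'points' gives a sketch of the Kigami triangle A_G
# (compare Figure 1).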

We now prove that the Sierpinski triangle A_F and the Kigami triangle A_G are homeomorphic. Let N denote
the triangle with vertices Ã = (0, 0), B̃ = (1, 0), and C̃ = (0, 1). Let ã = (2/5, 2/5), b̃ = (1/5, 2/5), and c̃ = (2/5, 1/5). Then
we note that

g_1(N) ∩ g_2(N) = {c̃}, g_2(N) ∩ g_3(N) = {ã}, g_3(N) ∩ g_1(N) = {b̃}.

It follows that

g_1(A_G) ∩ g_2(A_G) = {c̃}, g_2(A_G) ∩ g_3(A_G) = {ã}, g_3(A_G) ∩ g_1(A_G) = {b̃}.

This implies that A_F has the same code structure as A_G. Hence, by Theorem 3, the Sierpinski triangle A_F and
the Kigami triangle A_G are homeomorphic. This is a much shorter proof, for those familiar with IFS theory,
than the one given by Teplyaev [31]; in essence the proofs are the same, but the isolation of the tools used is much
clearer in the IFS framework. We will use these same tools to prove that the basic curves, from which the Kigami
triangle may be constructed, are continuously differentiable.
Let H : A_F → A_G denote the homeomorphism between the attractors. This continuous invertible transformation, with continuous inverse H^{-1} : A_G → A_F, relates each point on one attractor to the corresponding point
on the other attractor with the same set of addresses. It readily follows that we can write, consistently with our
earlier usage,

(β, γ) = H(x, y).

The harmonic function h : A_F → R becomes, on A_G, the function h̃ : A_G → R given by

h̃ = h ∘ H^{-1}, and we have h̃(β, γ) = h(x, y).

Finally note that

h̃(β, γ) = h_A(1 − β − γ) + h_B β + h_C γ = h(x, y).

In the new coordinates, the harmonic function may be described using the plane Π which passes through the
points (h_A, 0, 0), (0, h_B, 0), and (0, 0, h_C). Treat the Kigami triangle as being located on the plane α = 0, in
three-dimensional space described using rectangular coordinates (α, β, γ), so that a typical point on the Kigami
triangle is represented by (0, β, γ). If the unique corresponding point on Π, with the same β and γ coordinates,
is (α, β, γ), then h̃(β, γ) = α. See Figure 1.
To obtain a symmetrical version of the Kigami triangle we make the change of variable

\[
T = \begin{pmatrix} 1 & \tfrac{1}{2} \\ 0 & \tfrac{\sqrt{3}}{2} \end{pmatrix}, \quad
T^{-1} = \begin{pmatrix} 1 & -\tfrac{\sqrt{3}}{3} \\ 0 & \tfrac{2\sqrt{3}}{3} \end{pmatrix},
\]

which corresponds to mapping the vertices (0, 0), (1, 0) and (0, 1) to (0, 0), (1, 0) and (1/2, √3/2), respectively, which
lie on an equilateral triangle. Then the linear transformations become self-adjoint, specifically

\[
\tilde g_1(\xi) = R_{-2\pi/3} D R_{2\pi/3}(\xi), \quad
\tilde g_2(\xi) = R_{2\pi/3} D R_{-2\pi/3}(\xi) + t_2, \quad
\tilde g_3(\xi) = D(\xi) + t_3,
\]

where

\[
D = \begin{pmatrix} \tfrac{1}{5} & 0 \\ 0 & \tfrac{3}{5} \end{pmatrix}, \quad
t_2 = \begin{pmatrix} \tfrac{1}{2} \\ \tfrac{\sqrt{3}}{10} \end{pmatrix}, \quad
t_3 = \begin{pmatrix} \tfrac{2}{5} \\ \tfrac{\sqrt{3}}{5} \end{pmatrix},
\]

and R_θ denotes clockwise rotation about the origin through angle θ. See Figure 2.

Figure 1: The Sierpinski triangle and the Teplyaev triangle. They are homeomorphic because they have the same
code structure. Unlike the Sierpinski triangle, the Teplyaev triangle has a unique "direction" at every point of its
outer boundary and at every point on the internal closed curves.

Figure 2: An equilateral Kigami triangle.

We introduce the vectors

\[
w_1 = \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \quad
w_2 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \quad
w_3 = \begin{pmatrix} \tfrac{1}{2} \\ \tfrac{\sqrt{3}}{2} \end{pmatrix}, \quad
w_4 = \begin{pmatrix} \tfrac{3}{5} \\ \tfrac{\sqrt{3}}{5} \end{pmatrix}, \quad
w_5 = \begin{pmatrix} \tfrac{2}{5} \\ \tfrac{\sqrt{3}}{5} \end{pmatrix}, \quad
w_6 = \begin{pmatrix} \tfrac{1}{2} \\ \tfrac{\sqrt{3}}{10} \end{pmatrix}.
\]

Then we note that

\[
\begin{aligned}
\tilde g_1(w_1) &= w_1, & \tilde g_1(w_2) &= w_6, & \tilde g_1(w_3) &= w_5, \\
\tilde g_2(w_2) &= w_2, & \tilde g_2(w_1) &= w_6, & \tilde g_2(w_3) &= w_4, \\
\tilde g_3(w_3) &= w_3, & \tilde g_3(w_2) &= w_4, & \tilde g_3(w_1) &= w_5.
\end{aligned} \tag{9}
\]

A Kigami triangle can be thought of as a type of fractal interpolation, see Barnsley [2, 3], taking the topological form of
a Sierpinski triangle in place of a line segment, and passing through, or interpolating, the points w_1, w_2, w_3, w_4, w_5,
and w_6.
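The relations (9) are easy to confirm numerically; a brief Python check (ours, using the symmetric maps and points exactly as written above):

import numpy as np

# g1, g2, g3 here denote the symmetric maps (tilde-g) obtained after the change of variable T.
s3 = np.sqrt(3.0)
D = np.array([[0.2, 0.0], [0.0, 0.6]])

def R(theta):                       # clockwise rotation through angle theta
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s], [-s, c]])

t2 = np.array([0.5, s3 / 10.0])
t3 = np.array([0.4, s3 / 5.0])
g1 = lambda p: R(-2*np.pi/3) @ D @ R(2*np.pi/3) @ p
g2 = lambda p: R(2*np.pi/3) @ D @ R(-2*np.pi/3) @ p + t2
g3 = lambda p: D @ p + t3

w = {1: np.array([0.0, 0.0]),   2: np.array([1.0, 0.0]),
     3: np.array([0.5, s3/2]),  4: np.array([0.6, s3/5]),
     5: np.array([0.4, s3/5]),  6: np.array([0.5, s3/10])}

assert np.allclose(g1(w[1]), w[1]) and np.allclose(g1(w[2]), w[6]) and np.allclose(g1(w[3]), w[5])
assert np.allclose(g2(w[2]), w[2]) and np.allclose(g2(w[1]), w[6]) and np.allclose(g2(w[3]), w[4])
assert np.allclose(g3(w[3]), w[3]) and np.allclose(g3(w[2]), w[4]) and np.allclose(g3(w[1]), w[5])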

5.3  The nature of the Kigami triangle

By zooming in and looking closely at pictures of T̃_{v_0} we are led to the impression that there is a well-defined
tangent direction at every point on each of the curves which define the outer boundary, and also on all of the
internal loops, and that this tangent direction depends continuously on location. This is indeed the case. Thus
IFS theory provides geometrical information which makes more precise the known fact, proved using Oseledets'
theorem on products of random matrices, that there is a well-defined direction at almost every point, with respect
to the standard measure. The proof is quite beautiful.
Theorem 1. The attractor A_{G̃} of the IFS

G̃ = {R²; g̃_1, g̃_2}

is the graph of a continuously differentiable function b : [0, 1] → [0, 1].


Proof. It follows from Equations (9) that

g̃_1(w_1) = w_1, g̃_1(w_2) = w_6, and g̃_2(w_2) = w_2, g̃_2(w_1) = w_6.

Furthermore

g̃_1(∆w_1w_2w_3) = ∆w_1w_6w_5 and g̃_2(∆w_1w_2w_3) = ∆w_6w_2w_4,

and the triangles ∆w_1w_6w_5 and ∆w_6w_2w_4 are both contained in the triangle ∆w_1w_2w_3. It follows that the code
structure of the attractor of the IFS {R²; g̃_1, g̃_2} is the same as the code structure of the attractor of the IFS
{R; f_1, f_2} whose attractor is the line segment [0, 1] ⊂ R. It follows, via the fractal homeomorphism theorem, that A_{G̃} is the
graph of a continuous function b : [0, 1] → [0, 1].

Finally we outline the proof that b is differentiable and that its derivative is continuous. Consideration of the
manner in which the two functions which comprise G̃ map ellipses into ellipses leads to the conclusion that the
slope of A_{G̃} exists at every point if and only if it is defined by the attractor of the IFS

\[
H = \Big\{ \big[-\tfrac{1}{\sqrt{3}}, \tfrac{1}{\sqrt{3}}\big]; \;
w_1(x) = \frac{\sqrt{3} - 3x}{\sqrt{3}\,x - 5}, \;
w_2(x) = \frac{\sqrt{3} + 3x}{\sqrt{3}\,x + 5} \Big\}.
\]

That is, we need to show that H is indeed a hyperbolic IFS; then the slope of A_{G̃} at the point whose address is σ
is the same as φ_H(σ), provided the address structures of the two IFSs G̃ and H are the same. It is readily verified
that H is indeed a hyperbolic IFS. Moreover

w_1([−1/√3, 1/√3]) = [−1/√3, 0] and w_2([−1/√3, 1/√3]) = [0, 1/√3].

It follows that the attractor of H is the interval [−1/√3, 1/√3] and that the two IFSs G̃ and H do indeed have the
same address structures. Hence, via the fractal homeomorphism theorem, b is not only differentiable, but its graph A_{G̃}
is homeomorphic to the set of its slopes. Hence b is continuously differentiable.
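A small Python check (ours) that the two Möbius maps above are contractions on [−1/√3, 1/√3] and map it onto the two half-intervals, as claimed:

import numpy as np

s3 = np.sqrt(3.0)
w1 = lambda x: (s3 - 3*x) / (s3*x - 5)
w2 = lambda x: (s3 + 3*x) / (s3*x + 5)

xs = np.linspace(-1/s3, 1/s3, 10001)
# |w1'(x)| = 12/(s3*x - 5)^2 and |w2'(x)| = 12/(s3*x + 5)^2, both at most
# 12/16 = 3/4 < 1 on the interval, so H is hyperbolic.  Check numerically:
for w in (w1, w2):
    slopes = np.abs(np.diff(w(xs)) / np.diff(xs))
    assert slopes.max() < 1.0

assert np.isclose(w1(-1/s3), -1/s3) and np.isclose(w1(1/s3), 0.0)
assert np.isclose(w2(-1/s3), 0.0) and np.isclose(w2(1/s3), 1/s3)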

REFERENCES

[1] Barnsley, Michael F.; Demko, Stephen G. Iterated function systems and the global construction of fractals.
Proc. Roy. Soc. London Ser. A 399 (1985), no. 1817, 243-275.
[2] Barnsley, Michael F. Fractal functions and interpolation. Constr. Approx. 2 (1986), no. 4, 303-329.
[3] Barnsley, Michael F.; Harrington, Andrew N. The calculus of fractal interpolation functions. J. Approx.
Theory 57 (1989), no. 1, 14-34.
[4] Barnsley, Michael F.; Morley, T. D. Chaos in electro-mechanical systems and fractals. Proceedings of Southeastern Conference on Theoretical and Applied Mechanics XIII (1984).
[5] Barnsley, Michael F.; Morley, T. D.; Vrscay, E. R. Iterated networks and the spectra of renormalizable electro-mechanical systems. J. Stat. Phys. 40 (1985), 39-67.
[6] Barnsley, Michael F. Fractals everywhere. Academic Press, Inc., Boston, MA, 1988.
[7] Barnsley, Michael F. Superfractals. Cambridge University Press, Cambridge, New York, Melbourne, 2006.
[8] Barnsley, Michael F. Transformations between attractors of iterated function systems. Preprint, 2006.
[9] Berger, Marc A. Random affine iterated function systems: curve generation and wavelets. SIAM Review 34
(1992), 361-385.
[10] Berger, Marc A. An introduction to probability and stochastic processes. Springer Texts in Statistics. Springer-Verlag, New York, 1993.
[11] Elton, John H. An ergodic theorem for iterated maps. Ergodic Theory Dynam. Systems 7 (1987), no. 4,
481-488.
[12] Forte, Bruno; Mendivil, Franklin. A classical ergodic property for IFS: a simple proof. Ergodic Theory Dynam.
Systems 18 (1998), no. 3, 609-611.
[13] Hata, Masayoshi. On the structure of self-similar sets. Japan J. Appl. Math. 2 (1985), no. 2, 381-414.
[14] Hutchinson, John E. Fractals and self-similarity. Indiana Univ. Math. J. 30 (1981), no. 5, 713-747.
[15] Kaijser, Thomas. On a new contraction condition for random systems with complete connections. Rev.
Roumaine Math. Pures Appl. 26 (1981), no. 8, 1075-1117.
[16] Kigami, J. Harmonic calculus on p.c.f. self-similar sets. Trans. Amer. Math. Soc. 335 (1993), 721-755.
[17] Kigami, J. Harmonic metric and Dirichlet form on the Sierpinski gasket. In "Asymptotic problems in probability theory: stochastic models and diffusions on fractals" (K. D. Elworthy and N. Ikeda, eds), Pitman Research
Notes in Math. 283, Longman, 1993, 201-218.
[18] Kusuoka, S. Dirichlet forms on fractals and products of random matrices. Publ. Res. Inst. Math. Sci. 25
(1989), 659-680.
[19] Kusuoka, S. Lecture on diffusion process on nested fractals. Lecture Notes in Mathematics 1567, 39-98,
Springer-Verlag, Berlin, 1993.
[20] Kusuoka, S.; Zhou, X. Y. Dirichlet forms on fractals: Poincaré constant and resistance. Probab. Theory
Related Fields 93 (1992), 169-196.
[21] Lind, Douglas; Marcus, Brian. An introduction to symbolic dynamics and coding. Cambridge University
Press, 1995.
[22] Mandelbrot, Benoit B. The fractal geometry of nature. W. H. Freeman Publishing Company, San Francisco,
1983.
[23] Meyers, Robert; Strichartz, Robert S.; Teplyaev, Alexander. Dirichlet form on the Sierpinski gasket. Pacific
Journal of Mathematics 217 (2004), 149-174.
[24] Onicescu, O.; Mihok, G. Sur les chaînes de variables statistiques. Bull. Sci. Math. de France 59 (1935),
174-192.
[25] Stenflo, Örjan. Uniqueness of invariant measures for place-dependent random iterations of functions. Fractals
in multimedia (Minneapolis, MN, 2001), 13-32, IMA Vol. Math. Appl., Springer-Verlag, New York, 2002.
[26] Strichartz, Robert S. Some properties of Laplacians on fractals. Journal of Functional Analysis 164 (1999),
181-208.
[27] Strichartz, Robert S. Differential equations on fractals. Princeton University Press, Princeton and Oxford,
2006.
[28] Teplyaev, A. Spectral analysis on infinite Sierpiński gaskets. J. Funct. Anal. 159 (1998), 537-567.
[29] Teplyaev, A. Gradients on fractals. J. Funct. Anal. 174 (2000), 128-154.
[30] Teplyaev, A. Energy and Laplacian on the Sierpiński gasket. Fractal Geometry and Applications: A Jubilee
of Benoit Mandelbrot, Part 1. Proc. Sympos. Pure Math. 72, Amer. Math. Soc. (2004), 131-154.
[31] Teplyaev, A. Harmonic coordinates on fractals with finitely ramified cell structure. Preprint, 2006.
[32] Williams, R. F. Composition of contractions. Bol. da Soc. Brasil de Mat. 2 (1971), 55-59.
