Bachelor Thesis

© Attribution Non-Commercial ShareAlike (BY-NC-SA)

Abstract

The concept of linear relations, also known as multivalued linear operators, has proven to be a useful tool in many branches of operator theory. They arise naturally in the context of linear operators, as their formal inverse, adjoint or closure. Although the general concept has been known for over 75 years now (see for instance [7], [8], [1]), there is still only one book treating the subject, namely [3]. Therefore a general introduction to the theory of linear relations is given in the first part of the present work, focusing on algebraic properties and some generalizations of linear operator concepts. Also, the new notion of splitting a linear relation L into two linear operators S and T such that L = S⁻¹T is introduced and studied, as well as the linear relation TS⁻¹ arising from commuting the two factors S⁻¹ and T.

The second part is concerned with the application of the formerly introduced ideas to the treatment of poles of a generalized resolvent λS − T for two operators S and T. In [2], the authors examine the same problem. The vector spaces they introduce turn out to be kernels and ranges of powers of linear relations, objects similar to those used in the case of an ordinary resolvent. The theorems and proofs in [2] are translated and adapted into the modern language of linear relations.

Zusammenfassung

The concept of linear relations, also called multivalued linear operators, has proven to be a useful tool in many areas of operator theory. Linear relations are a natural generalization of (single-valued) linear operators. The relational inverses, adjoints and closures of the graphs of linear operators are often not operators themselves, but they can always be regarded as linear relations. Although the concept was introduced at least 75 years ago (see [7], [8], [1]), the literature on the subject is still sparse; the book [3] is the only widely available reference. For this reason, the first part of the present work is devoted to an introduction to the theory of linear relations, with the main focus on their algebraic properties and on the generalization of concepts from linear operators. In addition, a new way of representing linear relations is introduced: every such relation L can be written as S⁻¹T, where S and T are linear operators between suitable vector spaces. The properties of such so-called "splits" are studied, together with the naturally arising new relation TS⁻¹.

The second part deals with the application of the general concepts to the characterization of poles of a generalized resolvent λS − T via generalized ascent and descent indices, where T is a closed and S a T-bounded linear operator. H. Bart and D. C. Lay address the same question in their article [2]. The vector spaces introduced there turn out to be kernels and ranges of the linear relations S⁻¹T and TS⁻¹, and can be described in a more structured way in the terminology of linear relations introduced in the first part and examined with the tools developed there.


Contents

1 Preliminaries 2

1.1 Linear relations and related notions . . . . . . . . . . . . . . . . . . . . . 2

1.2 Ascent, descent and generalizations . . . . . . . . . . . . . . . . . . . . . 11

1.3 Splitting linear relations . . . . . . . . . . . . . . . . . . . . . . . . . . . 23

2.1 Algebraic decomposition . . . . . . . . . . . . . . . . . . . . . . . . . . . 37

2.2 Poles of the generalized resolvent . . . . . . . . . . . . . . . . . . . . . . 43


1 Preliminaries

1.1 Linear relations and related notions

We will introduce notation for linear relations, which we will use throughout the work.

Linear relations generalize the concept of a (not necessarily everywhere defined) linear

operator between two vector spaces. Some of the properties which we are about to

show do not need the linear structure, but we will not concern ourselves with fine

details.

X, Y and Z will always denote F -vector spaces, where for now, F may be an

arbitrary field. In particular, “subspace” will always mean “linear subspace”, without

any topological conditions.

A linear subspace L of the product space X × Y is called a linear relation in X to Y, or simply in X × Y. Denote by

R(X, Y ) := {L ⊆ X × Y | L linear relation}

the set of all linear relations in X to Y . In case of Y = X, we write R(X) := R(X, X).

Furthermore, we define:

• ker L := {x ∈ X | (x, 0) ∈ L} (kernel of L),

• mul L := {y ∈ Y | (0, y) ∈ L} (multivalued part of L),

• dom L := {x ∈ X | ∃y ∈ Y : (x, y) ∈ L} (domain of L),

• ran L := {y ∈ Y | ∃x ∈ X : (x, y) ∈ L} (range of L).

• For a subset M ⊆ X we set

L[M ] := {y ∈ Y | ∃ x ∈ M ∩ dom L : (x, y) ∈ L} ,

the so-called image of M under L. By “abuse of notation”, we set for a single

vector x ∈ X:

Lx := L[{x}] = {y ∈ Y | (x, y) ∈ L}.

• The inverse linear relation L−1 ⊆ Y × X is defined as

L−1 := {(y, x) ∈ Y × X : (x, y) ∈ L} .

• For subspaces U ⊆ X and V ⊆ Y we denote by

L(U,V) := L ∩ (U × V) = {(x, y) ∈ U × V | (x, y) ∈ L}

the compression of L to the spaces U and V . In case X = Y and U = V

we write L_U := L(U,U). Another special kind of compression is the restriction of L to a subspace U ⊆ X of its domain:

L|U := L(U,Y ) = {(x, y) ∈ U × Y | (x, y) ∈ L} .
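The set-theoretic definitions above can be tried out directly on a computer. The following sketch is my own illustration, not part of the thesis: it works over X = Y = GF(2)², an assumed toy setting chosen only because there a linear relation is a finite, enumerable set of pairs, so the definitions transcribe verbatim into set comprehensions. All helper names (`span`, `ker`, `mul`, `dom`, `ran`, `image`, `inverse`) are hypothetical.

```python
D = 2                       # dimension; X = Y = GF(2)^D (toy assumption)
ZERO = (0,) * D

def add(u, v):
    # componentwise addition in GF(2)^D
    return tuple((a + b) % 2 for a, b in zip(u, v))

def span(gens):
    # smallest linear subspace of X x Y containing `gens`
    # (over GF(2), closure under addition suffices)
    rel = {(ZERO, ZERO)} | set(gens)
    while True:
        new = {(add(x1, x2), add(y1, y2))
               for (x1, y1) in rel for (x2, y2) in rel} - rel
        if not new:
            return rel
        rel |= new

# the four subspaces from the definition, verbatim as set comprehensions
def ker(L): return {x for (x, y) in L if y == ZERO}
def mul(L): return {y for (x, y) in L if x == ZERO}
def dom(L): return {x for (x, y) in L}
def ran(L): return {y for (x, y) in L}

def image(L, M):            # L[M], the image of a subset M under L
    return {y for (x, y) in L if x in M}

def inverse(L):             # the inverse linear relation L^{-1}
    return {(y, x) for (x, y) in L}

E1, E2 = (1, 0), (0, 1)
# example: L sends e1 to 0 and is multivalued at 0 (it contains (0, e2))
L = span([(E1, ZERO), (ZERO, E2)])

print(sorted(ker(L)))            # kernel {0, e1}
print(sorted(mul(L)))            # multivalued part {0, e2}
print(sorted(image(L, {E1})))    # Lx = L[{e1}], a coset of mul L
```

Note how `image(L, {E1})` returns a whole coset of mul L rather than a single vector — exactly the "abuse of notation" Lx described above.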


From now on, let L ⊆ X × Y be a linear relation. One can easily verify that

kernel and domain as well as multivalued part and range are subspaces of X and Y ,

respectively. Also, if M ⊆ X is a linear subspace of X, then L[M] is a linear subspace of Y. Of course, compressions and hence restrictions are linear relations in X to Y, too.

Remark: If one wanted to be particularly exact, a linear relation L in X × Y would have to be defined as a triple (X, Y, L), as in the case of functions. But in most cases, it will be clear from the context which spaces are involved, and we will freely consider a compression L(U,V) as a linear relation in X × Y as well as in U × V.

One advantage of studying linear relations rather than operators is the symmetry of

their definition regarding domain and range. For an operator T : X → Y , the inverse

T −1 need not be an operator, but it always is a linear relation.

Lemma 1.1 For every linear relation L ⊆ X × Y:

(i) ker L = mul L⁻¹,

(ii) ran L = dom L⁻¹,

(iii) (L⁻¹)⁻¹ = L.

Proof: All three statements follow directly from the definition of L⁻¹; for (iii), note

(x, y) ∈ L ⇐⇒ (y, x) ∈ L⁻¹ ⇐⇒ (x, y) ∈ (L⁻¹)⁻¹.

By simply adding or subtracting the vectors, one verifies that the following implications hold true:

Lemma 1.2

(i) x ∈ ker L and y ∈ mul L ⇒ (x, y) ∈ L,

(ii) (x, y) ∈ L and x ∈ ker L ⇒ y ∈ mul L,

(iii) (x, y) ∈ L and y ∈ mul L ⇒ x ∈ ker L.

As already stated, linear relations generalize the concept of linear operators.² In fact, L is (the graph of) a linear operator if and only if mul L = {0}: If L is an operator, then L0 = {0}, or, in our relational notation, mul L = L0 = {0}. On the other hand, if mul L = {0}, then L is (the graph of) a well-defined function. It is linear, since L is a linear subspace.

² In the rest of this work we shall not distinguish between a linear operator and its graph.

Definition 1.2 Let K ⊆ Y × Z and L, M ⊆ X × Y be linear relations.

(i) In case M ⊆ L, M is called a subrelation of L.

(ii) The product or composition of K and L is

KL := {(x, z) ∈ X × Z | ∃ y ∈ Y : (x, y) ∈ L, (y, z) ∈ K}.

For L ⊆ X × X and m ∈ N³ we define powers by L⁰ := I_X and L^{m+1} := L L^m, where I_X (or "I" for short) is the identity map on X. For m ∈ Z, m < 0, we set L^m := (L⁻¹)^{−m}.

(iii) For a scalar α ∈ F we set

αL := {(x, αy) ∈ X × Y | (x, y) ∈ L}.

We write −L := (−1)L.

(iv) The operator-like sum of two linear relations L and M is defined as

L + M := {(x, y + z) ∈ X × Y | (x, y) ∈ L, (x, z) ∈ M}.

If we want to consider the sum of two linear relations regarded as linear subspaces of X × Y, we will explicitly use "+̂", so

L +̂ M := {u + v | u ∈ L, v ∈ M}.

The usual sum of linear subspaces U, V ⊆ X which are no linear relations will be denoted by U + V. If the sum is direct, that is U ∩ V = {0}, then we express this by writing U ⊕ V (respectively L ⊕̂ M for relations).

(v) Let U, V ⊆ X be subspaces. If L = L_U ⊕̂ L_V, we call L completely reduced by the pair (U, V).

Proposition 1.4 Let K, L, M be linear relations as in Definition 1.2 and α ∈ F. Then KL, αL and L + M are linear relations. The composition "◦" is associative; also, (R(X, Y), +) and (R(X), ·) form monoids with identity element X × {0} and I, respectively. (R(X, Y), +) is even commutative.

³ In this work, 0 ∈ N.
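The monoid structure can be checked mechanically in the toy GF(2)² model. The sketch below (my own, not from the thesis; the helper names are hypothetical) implements composition and the operator-like sum as set comprehensions and verifies the identity elements named above on sample relations.

```python
from itertools import product

D = 2
ZERO = (0,) * D
ALL = list(product((0, 1), repeat=D))      # all vectors of GF(2)^D

def add(u, v):
    return tuple((a + b) % 2 for a, b in zip(u, v))

def span(gens):
    # linear span inside X x Y (closure under addition suffices over GF(2))
    rel = {(ZERO, ZERO)} | set(gens)
    while True:
        new = {(add(x1, x2), add(y1, y2))
               for (x1, y1) in rel for (x2, y2) in rel} - rel
        if not new:
            return rel
        rel |= new

def compose(K, L):
    # KL = {(x, z) | exists y with (x, y) in L and (y, z) in K}
    return {(x, z) for (x, y) in L for (y2, z) in K if y == y2}

def op_sum(L, M):
    # operator-like sum: only defined where both relations are defined
    return {(x, add(y, z)) for (x, y) in L for (u, z) in M if u == x}

I = {(x, x) for x in ALL}                  # identity element of (R(X), .)
ZREL = {(x, ZERO) for x in ALL}            # identity element X x {0} of (R(X,Y), +)

E1, E2 = (1, 0), (0, 1)
L = span([(E1, ZERO), (ZERO, E2)])
M = span([(E1, E1), (E2, E2)])

print(compose(L, I) == L == compose(I, L))   # I is a two-sided identity
print(op_sum(L, ZREL) == L)                  # X x {0} is the identity of +
print(op_sum(L, M) == op_sum(M, L))          # + is commutative
```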

Images of subsets under linear relations behave exactly as in the case of operators.

Lemma 1.3 Let K ⊆ Y × Z, M ⊆ X × Z, N ⊆ X × Y be further linear relations, and let U ⊆ V ⊆ X, W ⊆ Y be subsets. Then:

(i) L[U] = ⋃_{x∈U} Lx.

(ii) If L is an operator, taking preimages is the same as taking images under the inverse relation: the preimage of W under L equals L⁻¹[W].

(iii) M[ker L] = ker LM⁻¹.

(iv) K[L[U]] = (KL)[U]; in particular K[ran L] = ran KL.

(v) L[U] ⊆ L[V].

(vi) For x ∈ dom L, Lx ⊆ W implies x ∈ L⁻¹[W]; thus: if L[U] ⊆ W, then U ⊆ L⁻¹[W].

(vii) (L + N)[U] ⊆ L[U] + N[U].⁴

Proof:

(i) If x ∈ U \ dom L, then Lx = ∅, so we can write

L[U] = ⋃_{x ∈ U ∩ dom L} Lx = ⋃_{x ∈ U} Lx.

(ii) The preimage of W under the operator L is

{x ∈ X | ∃ y ∈ W : (x, y) ∈ L} = {x ∈ X | ∃ y ∈ W ∩ dom L⁻¹ : (y, x) ∈ L⁻¹} = L⁻¹[W].

(iii) M[ker L] = {z ∈ Z | ∃ x ∈ ker L ∩ dom M : (x, z) ∈ M}

= {z ∈ Z | ∃ x ∈ dom L ∩ dom M : (x, 0) ∈ L, (z, x) ∈ M⁻¹}

= {z ∈ Z | (z, 0) ∈ LM⁻¹} = ker LM⁻¹.

⁴ Note that the plus sign has a different meaning on each side.


(iv) K[L[U]] = {z ∈ Z | ∃ y ∈ L[U] ∩ dom K : (y, z) ∈ K}

= {z ∈ Z | ∃ x ∈ U ∩ dom L, y ∈ dom K : (x, y) ∈ L, (y, z) ∈ K}

= {z ∈ Z | ∃ x ∈ U ∩ dom KL : (x, z) ∈ KL} = (KL)[U].

(v) This is clear from the definition.

(vi) Suppose x ∈ dom L and Lx ⊆ W , then there exists some y ∈ W ∩ ran L with

(x, y) ∈ L, or (y, x) ∈ L−1 , which means x ∈ L−1 [W ]. Now the second part

follows from (i).

(vii) Let y ∈ (L + N)[U]; then there exists an x ∈ U ∩ dom(L + N) = U ∩ dom L ∩ dom N such that (x, y) ∈ L + N. That means there further exist y1, y2 ∈ Y with (x, y1) ∈ L and (x, y2) ∈ N as well as y = y1 + y2. This shows y ∈ L[U] + N[U].
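The identities of Lemma 1.3 are easy to confirm by brute force in the GF(2)² toy model. The sketch below is my own illustration (hypothetical helper names, not the thesis's code); it checks (iii), (iv) and the inclusion (vii) on concrete relations.

```python
D = 2
ZERO = (0,) * D

def add(u, v): return tuple((a + b) % 2 for a, b in zip(u, v))

def span(gens):
    rel = {(ZERO, ZERO)} | set(gens)
    while True:
        new = {(add(x1, x2), add(y1, y2))
               for (x1, y1) in rel for (x2, y2) in rel} - rel
        if not new:
            return rel
        rel |= new

def ker(L): return {x for (x, y) in L if y == ZERO}
def image(L, M): return {y for (x, y) in L if x in M}
def inverse(L): return {(y, x) for (x, y) in L}
def compose(K, L): return {(x, z) for (x, y) in L for (y2, z) in K if y == y2}
def op_sum(L, M): return {(x, add(y, z)) for (x, y) in L for (u, z) in M if u == x}
def sub_sum(A, B): return {add(a, b) for a in A for b in B}   # U + V for subsets

E1, E2 = (1, 0), (0, 1)
L = span([(E1, ZERO), (ZERO, E2)])   # relation with non-trivial kernel
K = span([(E1, E2), (E2, E2)])       # an operator on X
N = span([(E1, E1), (E2, E2)])       # the identity operator as a relation
U = {ZERO, E1}

# (iii): M[ker L] = ker L M^{-1}   (here with M := K)
print(image(K, ker(L)) == ker(compose(L, inverse(K))))
# (iv): K[L[U]] = (KL)[U]
print(image(K, image(L, U)) == image(compose(K, L), U))
# (vii): (L + N)[U] is contained in L[U] + N[U]
print(image(op_sum(L, N), U) <= sub_sum(image(L, U), image(N, U)))
```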

The addition and multiplication of linear relations are distributive under certain conditions.

Lemma 1.5 Let K, J ⊆ X × Y and L, M ⊆ Y × Z be linear relations. Then

(L + M)K ⊆ LK + MK,

with equality if mul K ⊆ ker M or mul K ⊆ ker L, and

LK + LJ ⊆ L(K + J),

with equality if mul L ⊆ mul(LK + LJ) and ran K + ran J ⊆ dom L.

Proof:

• Let (x, z) ∈ (L+M )K, so there is some y ∈ Y with (x, y) ∈ K and (y, z) ∈ L+M .

There are vectors z ′ , z ′′ ∈ Z with (y, z ′ ) ∈ L, (y, z ′′ ) ∈ M and z = z ′ + z ′′ . This

implies (x, z ′ ) ∈ LK and (x, z ′′ ) ∈ M K, so

(x, z) = (x, z ′ + z ′′ ) ∈ LK + M K.

Now, suppose mul K ⊆ ker M (the other case follows from the commutativity of the operator-like sum). Let (x, z) = (x, z′ + z′′) ∈ LK + MK, such that (x, z′) ∈ LK and (x, z′′) ∈ MK. Then there are some y′, y′′ ∈ Y with (x, y′), (x, y′′) ∈ K, (y′, z′) ∈ L and (y′′, z′′) ∈ M. We have y′ − y′′ ∈ mul K ⊆ ker M. Hence,

(y′, z′′) = (y′′, z′′) + (y′ − y′′, 0) ∈ M,

so (y′, z′ + z′′) ∈ L + M, which together with (x, y′) ∈ K yields (x, z) ∈ (L + M)K.


• To show the second statement, let (x, z) ∈ LK + LJ, that means (x, z ′ ) ∈ LK

and (x, z ′′ ) ∈ LJ for some z ′ , z ′′ ∈ Z with z = z ′ + z ′′ . Consequently, there are

some y ′ , y ′′ ∈ Y with (x, y ′ ) ∈ K, (y ′ , z ′ ) ∈ L, (x, y ′′ ) ∈ J and (y ′′ , z ′′ ) ∈ L. Now,

(x, y ′ + y ′′ ) ∈ K + J and (y ′ + y ′′ , z ′ + z ′′ ) = (y ′ + y ′′ , z) ∈ L. But this already

shows (x, z) ∈ L(K + J).

Now, suppose mul L ⊆ mul(LK + LJ) and ran K + ran J ⊆ dom L in order to

show that equality holds. Let (x, z) ∈ L(K + J), that is, there is some y ∈ Y

such that (x, y) ∈ K + J and (y, z) ∈ L. By the definition of the operator-

like sum, we find vectors y ′ , y ′′ ∈ Y with y = y ′ + y ′′ such that (x, y ′ ) ∈ K

and (x, y ′′ ) ∈ J. Choose z ′ , z ′′ ∈ Z with (y ′ , z ′ ), (y ′′ , z ′′ ) ∈ L. This is possible,

because y ′ ∈ ran K ⊆ dom L and y ′′ ∈ ran J ⊆ dom L. This implies (y ′ , z ′ ) +

(y ′′ , z ′′ ) = (y, z ′ + z ′′ ) ∈ L and (y, z) − (y, z ′ + z ′′ ) = (0, z − (z ′ + z ′′ )) ∈ L. Thus,

z = z′ + z′′ + u with u ∈ mul L ⊆ mul(LK + LJ). But then (x, z′) ∈ LK and

(x, z ′′ ) ∈ LJ, so (x, z ′ + z ′′ ) ∈ LK + LJ. Since (0, u) ∈ LK + LJ, we obtain

(x, z ′ + z ′′ ) + (0, u) = (x, z) ∈ LK + LJ. This shows the equality.

The linear relations in X to Y do not form a vector space in general. As we have noted in Proposition 1.4, R(X, Y) equipped with the addition from Definition 1.2, (iv), is a commutative monoid with identity element 0 := X × {0_F}. Of course, the everywhere defined linear operators from X to Y form a submonoid of (R(X, Y), +), which is even an abelian group. And more precisely:

Proposition 1.7 A linear relation L ∈ R(X, Y) is (additively) invertible in (R(X, Y), +) if and only if it is an everywhere defined linear operator.

Proof: We have

L − L = 0 · L ⇐⇒ L − L = dom L × {0},

which is only true if mul L = {0}: If y1, y2 ∈ mul L with y1 − y2 ≠ 0, then (0, y1), (0, y2) ∈ L, so (0, y1 − y2) ∈ L − L, thus {0} ⊊ span{y1 − y2} ⊆ ran(L − L).

For the second statement, suppose L possesses an additive inverse K. Firstly, note

that dom 0 = X and dom(L + K) = dom L ∩ dom K for all L, K ∈ R(X, Y ). Hence

we must have dom L = dom 0 = X.

Furthermore, for (x, 0) ∈ L + K to hold, it is necessary that (x, −z) ∈ K for some z ∈ Y with (x, z) ∈ L. If there were some y ∈ Y with y ≠ z and (x, y) ∈ L, we would have (x, y − z) ∈ L + K, which contradicts L + K = 0. So mul L = {0}.

Except for trivial cases, (R(X, Y), +) is not an abelian group. We cannot even expect (R(X, Y), +, ·) to form an F-semimodule⁵, where · denotes the scalar multiplication from Definition 1.2, (iii). This is because otherwise we would have (α + β)L = αL + βL for all α, β ∈ F and L ∈ R(X, Y), which by Proposition 1.7 fails for α = 1, β = −1 whenever L is not an everywhere defined operator. But we always have the inclusion (α + β)L ⊆ αL + βL (Lemma 1.6). Now let G ⊆ F be a sub-semifield such that α + β = 0 implies α = β = 0 for all α, β ∈ G⁶. Then (R(X, Y), +, ·) is a commutative G-semimodule with zero.

Proof: The fact that α(L + M) = αL + αM, as well as α(βL) = (αβ)L, hold true for all α, β ∈ F and L, M ∈ R(X, Y) is easy to see. Now, let α, β ∈ G. From Lemma 1.6, we know (α + β)L ⊆ αL + βL. If α + β = 0, then α = β = 0, and the relation 0 · L = dom L × {0} is additively idempotent, so

(α + β)L = 0 · L = 0 · L + 0 · L = αL + βL.

If α + β ≠ 0, we consider (x, αy1 + βy2) ∈ αL + βL, where (x, y1), (x, y2) ∈ L. Then

(α/(α+β) · x, α/(α+β) · y1), (β/(α+β) · x, β/(α+β) · y2) ∈ L,

so

(α/(α+β) · x, αy1), (β/(α+β) · x, βy2) ∈ (α + β)L,

which eventually implies (x, αy1 + βy2) ∈ (α + β)L, that is, αL + βL ⊆ (α + β)L.

Example 1.1 By what we have just shown, we see that the condition of Lemma 1.6 is not necessary for equality: for all α, β ∈ F with α + β ≠ 0 and all K ∈ R(X, Y) we have (α + β)K = αK + βK.

⁵ Let S be a non-empty set equipped with two binary operations +, · : S × S → S. Following [4] (p. 427–428), (S, +, ·) is called a semiring if (S, +) and (S, ·) are semigroups such that a(b + c) = ab + ac and (a + b)c = ac + bc for all a, b, c ∈ S. It is called a (nontrivial) semifield if additionally (S*, ·) is a group, where S* := S \ {o} if o is the zero of (S, +), and S* := S otherwise.

Let (S, +) be a semigroup, (R, +, ·) a semiring and · : R × S → S a mapping, called scalar multiplication, denoted by (α, s) ↦ α · s = αs. Then S is called a (left) R-semimodule if α(a + b) = αa + αb, (α + β)a = αa + βa and α(βa) = (αβ)a for all α, β ∈ R and a, b ∈ S. It is called unitary if (R, +, ·) has a (multiplicative) identity ε and εa = a holds for all a ∈ S. We differ from [4] (p. 444) in that we do not assume that 0 · a = 0 for all a ∈ S, where 0 is the zero in R.

⁶ Semifields satisfying that property are called zero-sum free, see [4] page 428.


On the other hand, we have already seen in Proposition 1.7 that 0 · K = (1 + (−1))K = 1 · K + (−1)K = K − K can only hold if K is a linear operator, so a strict inclusion can occur in Lemma 1.6.

It is clear that kernel, domain and range of linear relations generalize the kernel, domain and range of linear operators. If the operator L is also injective, then the inverse linear relation and inverse operator coincide. However, the product of a linear relation and its inverse is in general not the identity operator.

Proposition 1.9 For every linear relation L ⊆ X × Y:

• LL⁻¹ = I_{ran L} +̂ ({0} × mul L). In particular, if L is a linear operator, we have LL⁻¹ = I_{ran L}.

• L⁻¹L = I_{dom L} +̂ (ker L × {0}),

where I_{ran L} := {(y, y) ∈ Y × Y | y ∈ ran L} is the identity operator on ran L and I_{dom L} is defined analogously.

Proof: Let (y, y ′ ) ∈ LL−1 , that is, there exists an element x ∈ X with (y, x) ∈ L−1 and

(x, y ′ ) ∈ L. So, (x, y), (x, y ′ ) ∈ L, which also implies y, y ′ ∈ ran L and y ′ − y ∈ mul L.

Now the decomposition (y, y′) = (y, y) + (0, y′ − y) shows

(y, y′) ∈ I_{ran L} +̂ ({0} × mul L).

Conversely, let (y, y + z) ∈ I_{ran L} +̂ ({0} × mul L) with z ∈ mul L. There is an element x ∈ dom L with (x, y) ∈ L and thus (x, y + z) ∈ L as well. Since (y, x) ∈ L⁻¹ and (x, y + z) ∈ L, (y, y + z) ∈ LL⁻¹ holds true.

The second equation follows from passing to inverses: applying the first equation to L⁻¹, we know

L⁻¹L = L⁻¹(L⁻¹)⁻¹ = I_{ran L⁻¹} +̂ ({0} × mul L⁻¹) = I_{dom L} +̂ ({0} × ker L).

Inverting on both sides yields, since passing to inverses commutes with taking sums of subspaces,

L⁻¹L = (L⁻¹L)⁻¹ = (I_{dom L} +̂ ({0} × ker L))⁻¹ = I_{dom L} +̂ (ker L × {0}).
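Both identities of the proposition can be verified concretely in the GF(2)² toy model. The following sketch is my own illustration (hypothetical helper names); note that `hat_sum` implements the subspace sum +̂, not the operator-like sum.

```python
D = 2
ZERO = (0,) * D

def add(u, v): return tuple((a + b) % 2 for a, b in zip(u, v))

def span(gens):
    rel = {(ZERO, ZERO)} | set(gens)
    while True:
        new = {(add(x1, x2), add(y1, y2))
               for (x1, y1) in rel for (x2, y2) in rel} - rel
        if not new:
            return rel
        rel |= new

def ker(L): return {x for (x, y) in L if y == ZERO}
def mul(L): return {y for (x, y) in L if x == ZERO}
def dom(L): return {x for (x, y) in L}
def ran(L): return {y for (x, y) in L}
def inverse(L): return {(y, x) for (x, y) in L}
def compose(K, L): return {(x, z) for (x, y) in L for (y2, z) in K if y == y2}

def hat_sum(A, B):
    # A ^+ B: the sum of two relations taken as subspaces of X x Y
    return {(add(x1, x2), add(y1, y2)) for (x1, y1) in A for (x2, y2) in B}

E1, E2 = (1, 0), (0, 1)
L = span([(E1, ZERO), (ZERO, E2)])        # multivalued, with kernel

I_ran = {(y, y) for y in ran(L)}          # identity on ran L
I_dom = {(x, x) for x in dom(L)}          # identity on dom L

lhs1 = compose(L, inverse(L))             # L L^{-1}
rhs1 = hat_sum(I_ran, {(ZERO, z) for z in mul(L)})
print(lhs1 == rhs1)

lhs2 = compose(inverse(L), L)             # L^{-1} L
rhs2 = hat_sum(I_dom, {(x, ZERO) for x in ker(L)})
print(lhs2 == rhs2)
```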

Though we have seen that the relational inverse L−1 and the operator-like negative

−L are no inverses in an algebraic sense, they are obviously involutive, and they even

commute.

Lemma 1.10 We always have (L⁻¹)⁻¹ = L and −(−L) = L, as well as (−L)⁻¹ = −L⁻¹.


We also have a group-like relation between taking products and taking inverses.

Lemma 1.11 For linear relations K ⊆ Y × Z and L ⊆ X × Y we have (KL)⁻¹ = L⁻¹K⁻¹; in particular, for L ⊆ X × X,

L⁻ⁿ := (L⁻¹)ⁿ = (Lⁿ)⁻¹ ∀ n ∈ N.

Proof: Let (z, x) ∈ (KL)−1 , so we have (x, z) ∈ KL, which means there exists some

y ∈ Y such that (x, y) ∈ L and (y, z) ∈ K. But that implies (y, x) ∈ L−1 and

(z, y) ∈ K −1 , and thus (z, x) ∈ L−1 K −1 . The converse inclusion follows analogously.

Since Lx is a coset of mul L, we can associate a linear operator with a linear relation L by factoring Y by the multivalued part. More precisely, let Q_L : Y → Y / mul L denote the canonical quotient map, regarded as a linear relation in Y × (Y / mul L), and consider the linear relation L_op := Q_L L ⊆ X × (Y / mul L), which we call the linear operator associated with L.

Proposition 1.12 The linear relation L_op is a well-defined linear operator with dom L_op = dom L, ker L_op = ker L and ran L_op = Q_L[ran L] = ran L / mul L. Set-theoretically we have

L_op x = Lx ∀ x ∈ dom L_op.

L_op and mul L determine L ⊆ X × Y uniquely, that is, if M ⊆ X × Y is another linear relation, then L_op = M_op and mul L = mul M already imply L = M.

Proof:

• We use Lemma 1.3: Let [y] ∈ mul Q_L L, which means there is some z ∈ Y with (0, z) ∈ L and (z, [y]) ∈ Q_L. The first statement means z ∈ mul L, so (z, [0]) ∈ Q_L by the very definition of Q_L; since Q_L is an operator, the second statement then gives [y] = [0].


• Now for the mnemonic equality L_op x = Lx, let y ∈ Lx for some x ∈ X, which means (x, y) ∈ L. Then we claim Lx = y + mul L: z ∈ Lx implies (x, z) ∈ L, so

L ∋ (x − x, y − z) = (0, y − z),

that is, y − z ∈ mul L and hence z ∈ y + mul L. Conversely, if z ∈ y + mul L, then we find some w ∈ mul L (so (0, w) ∈ L) with z = y + w. This implies (0, z − y) ∈ L, and hence

L ∋ (0 + x, z − y + y) = (x, z),

or equivalently z ∈ Lx.

• dom L_op = dom L and ran L_op = Q_L[ran L] are easily verified equalities.

• Concerning the kernel: If Lop x = mul L, then there exists a y ∈ mul L such

that (x, y) ∈ L. Lemma 1.2 (iii) implies x ∈ ker L, so ker Lop ⊆ ker L. For the

converse inclusion, let x ∈ ker L. Now, Lemma 1.2 (ii) implies Lop x ⊆ mul L.

Since Lop is well-defined, we have shown that x ∈ ker Lop .

• To show that L = M if L_op = M_op and mul L = mul M, let (x, y) ∈ L; then

y + mul M = y + mul L = Lx = L_op x = M_op x = M x,

so (x, y) ∈ M, and by symmetry L = M. In particular, if (e_i)_{i∈I} is a basis of dom L with (e_i, y_i) ∈ L for all i ∈ I, then L is unambiguously defined by the pairs (e_i, y_i), i ∈ I, and its multivalued part.

Furthermore, given L_op and mul L, we can reconstruct the relation L as L = Q_L⁻¹ L_op: Let (x, y) ∈ L, then

L_op x = y + mul L = Q_L y,

so (x, y) ∈ Q_L⁻¹ L_op. Now, let (x, y) ∈ Q_L⁻¹ L_op, then x ∈ dom L and y ∈ ran L with

Lx = L_op x = Q_L y = y + mul L,

so y ∈ Lx, that is, (x, y) ∈ L.

1.2 Ascent, descent and generalizations

In this section, we want to examine some characteristic properties of linear relations L ⊆ X × X. In Definition 1.2, we introduced powers of such relations. It is often convenient to characterize them as follows.


A tuple (x1, . . . , xn) ∈ X^n, n ≥ 1, is said to be an L-chain (of length n) if (xk, xk+1) ∈ L for all 1 ≤ k < n. The set of all L-chains is denoted by

Ch(L) := ⋃_{n ∈ N, n ≥ 1} {(x1, . . . , xn) ∈ X^n | (xk, xk+1) ∈ L for 1 ≤ k < n}.

For n ∈ N we have (x, y) ∈ L^n if and only if there exists an L-chain (x1, . . . , xn+1) with x = x1 and xn+1 = y. Indeed, for n = 0 the relation L⁰ = I means x = y, and there is some – admittedly degenerate – L-chain (x1) with x = x1 = y. The case n ≥ 1 follows easily from the definition of L^n.

Consider now an L-chain (x0, . . . , xn) (n ≥ 2) with leading and trailing zero, so x0 = 0 = xn, and with xk ≠ 0 for at least one 0 < k < n. In that case, if (x, y) ∈ L^{k+(n−k)} = L^n, there exists some z ∈ X such that (x, z) ∈ L^k and (z, y) ∈ L^{n−k}. Since also (0, xk) ∈ L^k and (xk, 0) ∈ L^{n−k}, we have

(x, z + xk) ∈ L^k, (z + xk, y) ∈ L^{n−k},

so z is not unique. L-chains with leading and trailing zero are called singular chains of L. If there exists a 0 < k < n such that xk ≠ 0, the singular chain is said to be non-trivial, otherwise it will be called trivial.

Non-trivial singular chains admit the following characterization.

Lemma 1.15 There exists a non-trivial singular chain (x0, . . . , xn) of L if and only if there exists a 1 ≤ k < n with ker L^{n−k} ∩ mul L^k ≠ {0}.


Proof: Let (x0, . . . , xn) be a non-trivial singular chain with xk ≠ 0 for some 1 ≤ k < n. Then we have (0, xk) ∈ L^k and (xk, 0) ∈ L^{n−k}, so 0 ≠ xk ∈ ker L^{n−k} ∩ mul L^k. If on the other hand ker L^{n−k} ∩ mul L^k ≠ {0} for some 1 ≤ k < n, choose 0 ≠ xk ∈ ker L^{n−k} ∩ mul L^k. Then there exist L-chains (x0, . . . , xk) and (xk, . . . , xn) of length k + 1 and n − k + 1, respectively, with x0 = 0 = xn, so (x0, . . . , xk, . . . , xn) is a non-trivial singular chain.
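The criterion of the lemma can be searched for mechanically. The sketch below (my own illustration in the GF(2)² toy model, with hypothetical helper names) finds a witness x with 0 ≠ x ∈ ker L^{n−k} ∩ mul L^k for n = 2, k = 1 and then exhibits the corresponding non-trivial singular chain (0, x, 0).

```python
from itertools import product

D = 2
ZERO = (0,) * D

def add(u, v): return tuple((a + b) % 2 for a, b in zip(u, v))

def span(gens):
    rel = {(ZERO, ZERO)} | set(gens)
    while True:
        new = {(add(x1, x2), add(y1, y2))
               for (x1, y1) in rel for (x2, y2) in rel} - rel
        if not new:
            return rel
        rel |= new

def ker(L): return {x for (x, y) in L if y == ZERO}
def mul(L): return {y for (x, y) in L if x == ZERO}
def compose(K, L): return {(x, z) for (x, y) in L for (y2, z) in K if y == y2}

def power(L, n):
    P = {(x, x) for x in product((0, 1), repeat=D)}   # L^0 = I
    for _ in range(n):
        P = compose(L, P)
    return P

E1 = (1, 0)
L = span([(ZERO, E1), (E1, ZERO)])      # relation with ker L = mul L = span{e1}

# criterion of the lemma with n = 2, k = 1:
witnesses = (ker(power(L, 1)) & mul(power(L, 1))) - {ZERO}
print(witnesses)

x = next(iter(witnesses))
# ... and indeed (0, x, 0) is a non-trivial singular chain:
print((ZERO, x) in L and (x, ZERO) in L)
```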

Accordingly, the singular chain manifold of L satisfies

Rc(L) = (⋃_{m∈N} ker L^m) ∩ (⋃_{m∈N} mul L^m),

and for all k, m ∈ N we have the monotonicity properties (Lemma 1.16)

ker L^m ⊆ ker L^{m+1}, mul L^k ⊆ mul L^{k+1}, ran L^{m+1} ⊆ ran L^m, dom L^{m+1} ⊆ dom L^m.

Now we can examine the relationships between range, kernel etc. of powers of L. We generalize Lemma 2.1 from [2].

Lemma 1.17 For all k, m ∈ N,

ker L^{m+k} / ker L^m ≅ (ker L^k ∩ ran L^m) / (ker L^k ∩ mul L^m).

Proof: For k = 0 the left-hand side is trivial, as is the right-hand side because of ker L⁰ = ker I = {0}. For m = 0 the left-hand side simplifies to ker L^k for the same reason; for the right-hand side we observe ran L⁰ = ran I = X and mul L⁰ = mul I = {0}, which means on the right we are also left with ker L^k. In the rest of the proof we are concerned with k, m > 0.

It suffices to find an onto linear map

A : ker L^{m+k} → (ker L^k ∩ ran L^m) / (ker L^k ∩ mul L^m),


whose kernel coincides with ker L^m. We define it as follows: Given x ∈ ker L^{m+k}, we find an L-chain (x, x1, . . . , xm+k−1, 0), using which we set

Ax := xm + (ker L^k ∩ mul L^m) = [xm].

• A is indeed a map with the stated codomain: (x, . . . , xm) and (xm, . . . , xm+k−1, 0) are L-chains, that is, xm ∈ ran L^m and xm ∈ ker L^k, so xm ∈ ker L^k ∩ ran L^m. If we are given another L-chain (x, x′1, . . . , x′m+k−1, 0), then

(0, x1 − x′1, . . . , xm − x′m, . . . , xm+k−1 − x′m+k−1, 0)

is also an L-chain, likewise (0, x1 − x′1, . . . , xm − x′m) and (xm − x′m, . . . , xm+k−1 − x′m+k−1, 0). That means we have xm − x′m ∈ mul L^m and xm − x′m ∈ ker L^k, so xm − x′m ∈ ker L^k ∩ mul L^m and hence [xm] = [x′m].

• The linearity of A is evident.

• Concerning surjectivity: Let [y] ∈ (ker L^k ∩ ran L^m) / (ker L^k ∩ mul L^m), which means there are L-chains

(y, x1, . . . , xk−1, 0), (x′0, x′1, . . . , x′m−1, y).

Then

(x′0, x′1, . . . , x′m−1, y, x1, . . . , xk−1, 0)

is an L-chain of length m + k + 1 with y at the m-th place. We conclude x′0 ∈ ker L^{m+k} and Ax′0 = [y].

• Concerning the kernel of A: Let x ∈ ker L^m; then there exists an L-chain

(x, x1, . . . , xm−1, 0, . . . , 0)

with k + 1 trailing zeros, so x ∈ ker L^{m+k} and Ax = [xm] = [0]. Hence ker L^m ⊆ ker A.

Now, let x ∈ ker L^{m+k} together with an L-chain (x, x1, . . . , xm+k−1, 0) and Ax = [0], which means we have xm ∈ ker L^k ∩ mul L^m. Hence there exist L-chains

(xm, y1, . . . , yk−1, 0), (0, y′1, . . . , y′m−1, xm).

It follows

(0, y′1, . . . , y′m−1, xm, y1, . . . , yk−1, 0) ∈ Ch(L).

Subtracting this chain from the one belonging to x we get

(x, x1 − y′1, . . . , xm−1 − y′m−1, xm − xm, xm+1 − y1, . . . , xm+k−1 − yk−1, 0),

where xm − xm = 0. In particular,

(x, x1 − y′1, . . . , xm−1 − y′m−1, 0) ∈ Ch(L),

so x ∈ ker L^m. This shows the inclusion ker A ⊆ ker L^m.

In case of the absence of non-trivial singular chains, the formula can be simplified

to Lemma 4.4 of [6].


Corollary 1.18 Suppose L has no non-trivial singular chains, and let k, m ∈ N. Then:

ker L^{m+k} / ker L^m ≅ ker L^k ∩ ran L^m.

Proof: The absence of non-trivial singular chains implies ker L^k ∩ mul L^m = {0}. Now apply the preceding lemma.
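Over a finite space a quotient's cardinality is the quotient of cardinalities, so Lemma 1.17 becomes the numeric identity |ker L^{m+k}| · |ker L^k ∩ mul L^m| = |ker L^m| · |ker L^k ∩ ran L^m|, which can be brute-force checked. The sketch below is my own illustration in the GF(2)² toy model (hypothetical helper names), run over two sample relations and small k, m.

```python
from itertools import product

D = 2
ZERO = (0,) * D
ALL = list(product((0, 1), repeat=D))

def add(u, v): return tuple((a + b) % 2 for a, b in zip(u, v))

def span(gens):
    rel = {(ZERO, ZERO)} | set(gens)
    while True:
        new = {(add(x1, x2), add(y1, y2))
               for (x1, y1) in rel for (x2, y2) in rel} - rel
        if not new:
            return rel
        rel |= new

def ker(L): return {x for (x, y) in L if y == ZERO}
def mul(L): return {y for (x, y) in L if x == ZERO}
def ran(L): return {y for (x, y) in L}
def compose(K, L): return {(x, z) for (x, y) in L for (y2, z) in K if y == y2}

def power(L, n):
    P = {(x, x) for x in ALL}       # L^0 = I
    for _ in range(n):
        P = compose(L, P)
    return P

E1, E2 = (1, 0), (0, 1)
samples = [
    span([(ZERO, E1), (E1, ZERO)]),   # relation with singular chains
    span([(E1, E1), (E2, ZERO)]),     # an operator: e1 -> e1, e2 -> 0
]
for L in samples:
    for k in range(3):
        for m in range(3):
            lhs = len(ker(power(L, m + k))) * len(ker(power(L, k)) & mul(power(L, m)))
            rhs = len(ker(power(L, m))) * len(ker(power(L, k)) & ran(power(L, m)))
            assert lhs == rhs, (k, m)
print("Lemma 1.17 cardinality check passed")
```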

The first part of the following lemma can be found in [6], numbered 4.1, the second one in [2] as Lemma 3.1.

Lemma 1.19 For all k, m ∈ N,

ran L^m / ran L^{m+k} ≅ dom L^m / ((ker L^m + ran L^k) ∩ dom L^m) ≅ (dom L^m + ran L^k) / (ker L^m + ran L^k).

Proof: We show the first isomorphism. Firstly, we take care of the trivial cases again: If k = 0, then the left-hand side is trivial, and since ran L⁰ = X, so is the right-hand side. For m = 0 we get X / ran L^k on the left, as well as on the right, using dom L⁰ = X:

X / ((ker L⁰ + ran L^k) ∩ X) = X / ran L^k.

It suffices to find an onto linear map

A : dom L^m → ran L^m / ran L^{m+k}

with ker A = (ker L^m + ran L^k) ∩ dom L^m. We define it as follows: Given x ∈ dom L^m, we find an L-chain (x, x1, . . . , xm), using which we set

Ax := xm + ran L^{m+k} = [xm].

• A is well-defined once more: Given another L-chain (x, x′1, . . . , x′m), then


(0, x1 − x′1, . . . , xm − x′m) is an L-chain as well; prepending k further zeros, we get

(0, . . . , 0, x1 − x′1, . . . , xm − x′m) ∈ Ch(L)

with k + 1 leading zeros, so xm − x′m ∈ ran L^{m+k} and hence [xm] = [x′m].

• A is clearly linear.

• Concerning surjectivity: Let [y] ∈ ran L^m / ran L^{m+k}, which means there is an L-chain (x, x1, . . . , xm−1, y); then x ∈ dom L^m and Ax = [y].

• Concerning the kernel of A: Firstly let x ∈ ker A, so because of Ax = [0] we have an L-chain

(x, x1, . . . , xm−1, xm)

with xm ∈ ran L^{m+k}. This means on the other hand that there exists an L-chain (z0, . . . , zk, . . . , zm+k) with zm+k = xm. Then zk ∈ ran L^k, and subtracting the chains (x, x1, . . . , xm) and (zk, zk+1, . . . , zm+k) yields

(x − zk, x1 − zk+1, . . . , xm − zm+k) = (x − zk, x1 − zk+1, . . . , 0) ∈ Ch(L),

so x − zk ∈ ker L^m and x = (x − zk) + zk ∈ (ker L^m + ran L^k) ∩ dom L^m.

Conversely let x ∈ (ker L^m + ran L^k) ∩ dom L^m, which means there are some u ∈ ker L^m, v ∈ ran L^k with associated L-chains

(u, u1, . . . , um−1, 0), (v0, . . . , vk−1, v), (x = u + v, x1, . . . , xm).

But then, subtracting the chain of u from that of x gives the L-chain (v, x1 − u1, . . . , xm−1 − um−1, xm), and prepending (v0, . . . , vk−1) shows xm ∈ ran L^{m+k}, hence Ax = [xm] = [0].

The second isomorphism is an easy consequence of the general result

U / (U ∩ V) ≅ (U + V) / V    (1.1)

for subspaces U, V of X.


For the case of linear operators in X, the notions of ascent and descent play an

important role, for instance regarding the poles of the resolvent, which we seek to

generalize, following [2]. We can recover their definitions for linear relations, but we

will also introduce stronger versions.

For a linear relation L ⊆ X × X we define:

• α(L) := inf{m ∈ N | ker L^m = ker L^{m+1}} – ascent of L,

• α̂(L) := inf{m ∈ N | ker L ∩ ran L^m ⊆ mul L^m},

• α̃(L) := inf{m ∈ N | ker L ∩ ran L^m = {0}} – generalized ascent of L,

• δ(L) := inf{m ∈ N | ran L^m = ran L^{m+1}} – descent of L,

• δ̂(L) := inf{m ∈ N | ker L^m + ran L ⊇ dom L^m},

• δ̃(L) := inf{m ∈ N | ker L^m + ran L = X} – generalized descent of L.

Of course, inf ∅ := ∞, as usual.
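All six numbers can be computed by brute force in the GF(2)² toy model, which also previews Example 1.2 below. The sketch is my own illustration (hypothetical helper names); the search is cut off at a bound, with `None` standing in for ∞.

```python
from itertools import product

D = 2
ZERO = (0,) * D
ALL = list(product((0, 1), repeat=D))
X = set(ALL)

def add(u, v): return tuple((a + b) % 2 for a, b in zip(u, v))

def span(gens):
    rel = {(ZERO, ZERO)} | set(gens)
    while True:
        new = {(add(x1, x2), add(y1, y2))
               for (x1, y1) in rel for (x2, y2) in rel} - rel
        if not new:
            return rel
        rel |= new

def ker(L): return {x for (x, y) in L if y == ZERO}
def mul(L): return {y for (x, y) in L if x == ZERO}
def dom(L): return {x for (x, y) in L}
def ran(L): return {y for (x, y) in L}
def compose(K, L): return {(x, z) for (x, y) in L for (y2, z) in K if y == y2}
def sub_sum(A, B): return {add(a, b) for a in A for b in B}

def power(L, n):
    P = {(x, x) for x in ALL}
    for _ in range(n):
        P = compose(L, P)
    return P

def first(cond, bound=6):
    # inf{m | cond(m)}; None plays the role of infinity (search cut off)
    return next((m for m in range(bound) if cond(m)), None)

def alpha(L):       return first(lambda m: ker(power(L, m)) == ker(power(L, m + 1)))
def alpha_hat(L):   return first(lambda m: ker(L) & ran(power(L, m)) <= mul(power(L, m)))
def alpha_tilde(L): return first(lambda m: ker(L) & ran(power(L, m)) == {ZERO})
def delta(L):       return first(lambda m: ran(power(L, m)) == ran(power(L, m + 1)))
def delta_hat(L):   return first(lambda m: sub_sum(ker(power(L, m)), ran(L)) >= dom(power(L, m)))
def delta_tilde(L): return first(lambda m: sub_sum(ker(power(L, m)), ran(L)) == X)

E1 = (1, 0)
L = span([(ZERO, E1), (E1, ZERO)])    # Example-1.2-style relation over GF(2)^2
print(alpha(L), alpha_hat(L), alpha_tilde(L))   # 1 1 None
print(delta(L), delta_hat(L), delta_tilde(L))   # 1 1 None
```

The output matches Propositions 1.22 and 1.23 below: α = α̂ and δ = δ̂, while the generalized numbers may be infinite.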

Each of the defining conditions, once satisfied for some m, persists for all larger exponents: for all k, m ∈ N,

(i) ker L^m = ker L^{m+1} ⇒ ker L^m = ker L^{m+k},

(ii) ran L^m = ran L^{m+1} ⇒ ran L^m = ran L^{m+k},

(iii) ker L ∩ ran L^m ⊆ mul L^m ⇒ ker L ∩ ran L^{m+k} ⊆ mul L^{m+k},

(iv) ker L^m + ran L ⊇ dom L^m ⇒ ker L^{m+k} + ran L ⊇ dom L^{m+k},

(v) ker L ∩ ran L^m = {0} ⇒ ker L ∩ ran L^{m+k} = {0}, and

(vi) ker L^m + ran L = X ⇒ ker L^{m+k} + ran L = X.

The first two points are Lemma 3.4 and Lemma 3.5 in [6]. We repeat their proofs here for the sake of completeness.

Proof: Let m ∈ N.

(i) Let ker L^m = ker L^{m+1}. The inclusion ker L^m ⊆ ker L^{m+k} follows inductively from Lemma 1.16 for all k ∈ N. We show the other inclusion by another induction: Assume ker L^m ⊇ ker L^{m+k} was already shown for some k ∈ N. Now, given x ∈ ker L^{m+k+1}, we find (x, 0) ∈ L^{m+k+1} = L^{m+k} L, which means (x, y) ∈ L and (y, 0) ∈ L^{m+k} for some y ∈ X, so y ∈ ker L^{m+k} ⊆ ker L^m. But then (x, 0) ∈ L^{m+1}, so x ∈ ker L^{m+1} = ker L^m.

(ii) Let ran L^m = ran L^{m+1}. Again, ran L^m ⊇ ran L^{m+k} follows from Lemma 1.16 for all k ∈ N. Concerning the other inclusion: Assuming ran L^m ⊆ ran L^{m+k} is true for some k ∈ N, we argue: given y ∈ ran L^m ⊆ ran L^{m+k}, we find some x, xm ∈ X with

(x, xm) ∈ L^m, (xm, y) ∈ L^k.

Now xm ∈ ran L^m = ran L^{m+1} implies (z, xm) ∈ L^{m+1} for some z ∈ X, so together with (xm, y) ∈ L^k we conclude (z, y) ∈ L^{m+k+1}, which means y ∈ ran L^{m+k+1}.

(iii) Let ker L ∩ ran L^m ⊆ mul L^m. Since ran L^{m+k} ⊆ ran L^m and mul L^m ⊆ mul L^{m+k}, we conclude ker L ∩ ran L^{m+k} ⊆ ker L ∩ ran L^m ⊆ mul L^m ⊆ mul L^{m+k}.

(iv) Let ker L^m + ran L ⊇ dom L^m. Since ker L^{m+k} ⊇ ker L^m and dom L^m ⊇ dom L^{m+k}, we conclude dom L^{m+k} ⊆ dom L^m ⊆ ker L^m + ran L ⊆ ker L^{m+k} + ran L. Statements (v) and (vi) follow analogously.

The following implications relate the new numbers to the ascent and descent.

Proposition 1.21 For all m ∈ N:

(i) ker L ∩ ran L^m = {0} ⇒ ker L ∩ ran L^m ⊆ mul L^m, so α̂(L) ≤ α̃(L), and

(ii) ker L^m + ran L = X ⇒ ker L^m + ran L ⊇ dom L^m, so δ̂(L) ≤ δ̃(L).

The converse implications (and consequently the converse inequalities) do not hold

in general.

Example 1.2 Consider for instance X = R² and L := span{(0, e1), (e1, 0)}; then

ker L = mul L = dom L = ran L = span{e1}.

So ker L ∩ ran L = span{e1} = mul L ≠ {0}, which means α̂(L) = 1. But L^m = L holds for all m ∈ N*, so we always have ker L ∩ ran L^m = span{e1} ≠ {0}, which implies α̃(L) = ∞.

Furthermore, we have ker L + ran L = span{e1} = dom L ≠ X, so δ̂(L) = 1. Since L^m = L holds for all m ∈ N*, we also have δ̃(L) = ∞.

It turns out that the numbers α̂(L) and δ̂(L) coincide with the ascent α(L) and descent δ(L), respectively.


Proposition 1.22 We always have α(L) = α̂(L).

Proof: It suffices to show: ker L ∩ ran L^m ⊆ mul L^m ⇐⇒ ker L^m = ker L^{m+1}. But this is immediate from Lemma 1.17, setting k = 1 there:

ker L^{m+1} / ker L^m ≅ (ker L ∩ ran L^m) / (ker L ∩ mul L^m).

Proposition 1.23 We always have δ(L) = δ̂(L).

Proof: We have to show: ran L^m = ran L^{m+1} ⇐⇒ ker L^m + ran L ⊇ dom L^m. Again, this is immediate from Lemma 1.19, setting k = 1:

ran L^m / ran L^{m+1} ≅ dom L^m / ((ker L^m + ran L) ∩ dom L^m).

Proposition 1.24 The ascent and generalized ascent have the property that the defining (characterizing) inclusions hold true even for kernels of any powers of L:

(i) ker L ∩ ran L^m = {0} ⇒ ker L^k ∩ ran L^m = {0} for all k ∈ N,

(ii) ker L ∩ ran L^m ⊆ mul L^m ⇒ ker L^k ∩ ran L^m ⊆ mul L^m for all k ∈ N.

Proof:

1. The statement is obviously true for k ∈ {0, 1}. Suppose it has been shown for some k ∈ N. Let x ∈ ker L^{k+1} ∩ ran L^m; then we have (x0, x) ∈ L^m for some x0 ∈ X and (x, x1) ∈ L, (x1, 0) ∈ L^k for some x1 ∈ X, so actually x1 ∈ ran L^{m+1} ⊆ ran L^m and x1 ∈ ker L^k, which in view of the induction hypothesis ker L^k ∩ ran L^m = {0} implies x1 = 0. But then x ∈ ker L ∩ ran L^m = {0}, so x = 0.

2. The inclusion ker L ∩ ran L^m ⊆ mul L^m implies α̂(L) ≤ m and hence ker L^m = ker L^{m+k} by Proposition 1.22. So, using Lemma 1.17, we see

{0} = ker L^{m+k} / ker L^m ≅ (ker L^k ∩ ran L^m) / (ker L^k ∩ mul L^m).

Hence, ker L^k ∩ ran L^m ⊆ ker L^k ∩ mul L^m ⊆ mul L^m.


Proposition 1.25 The characterizing inclusion of the descent remains true for the ranges of powers of L:

ker L^m + ran L ⊇ dom L^m ⇒ ker L^m + ran L^k ⊇ dom L^m for all k ∈ N.

Proof: Let k ∈ N. The inclusion ker L^m + ran L ⊇ dom L^m implies ran L^m = ran L^{m+k}, by Proposition 1.23. Using Lemma 1.19, we obtain:

{0} = ran L^m / ran L^{m+k} ≅ dom L^m / ((ker L^m + ran L^k) ∩ dom L^m) ⇒ ker L^m + ran L^k ⊇ dom L^m.

The following lemma condenses Lemma 3.3 and Lemma 5.5 from [6].

Lemma 1.26 For p ∈ N, the following statements are equivalent:

(i) α̃(L) = p < ∞,

(ii) Rc(L) = {0} and α(L) = p < ∞, and

(iii) either p = 0 or ker L ∩ ran L^{p−1} ≠ {0}, together with ker L^k ∩ ran L^p = {0} for all k ∈ N.

Proof:

(i) ⇒ (ii) Let α̃(L) = p < ∞, so ker L ∩ ran L^m = {0} for all m ≥ p. By Proposition 1.24, 1., we have ker L^k ∩ ran L^m = {0} for all k ∈ N. Since ker L^k ∩ mul L^m ⊆ ker L^k ∩ ran L^m = {0}, there are no non-trivial singular chains by Lemma 1.15. Concerning α(L): Proposition 1.21 tells us α(L) ≤ p, so we only have to prove the converse inequality. But that one follows again from Lemma 1.17, or its Corollary 1.18: Suppose we had α(L) = q < p; then

ker L^{q+1} / ker L^q ≅ ker L ∩ ran L^q,

and the left-hand side is trivial, so ker L ∩ ran L^q = {0}, which would imply α̃(L) ≤ q < p, a contradiction!

(ii) ⇒ (iii) Let Rc(L) = {0} and α(L) = p. If p = 0, then {0} = ker L = ker L^k for all k ∈ N, so ker L^k ∩ ran L^p = {0}. Now, let p > 0. Using Corollary 1.18 again, we see with ker L^{p−1} ⊊ ker L^p:

ker L^p / ker L^{p−1} ≅ ker L ∩ ran L^{p−1} ≠ {0}

and, since Rc(L) = {0} implies ker L^k ∩ mul L^p = {0},

ker L^k ∩ ran L^p ≅ (ker L^k ∩ ran L^p) / (ker L^k ∩ mul L^p) ≅ ker L^{p+k} / ker L^p = {0} ∀ k ∈ N,

so ker L^k ∩ ran L^p = {0}.

(iii) ⇒ (i) The condition ker L^k ∩ ran L^p = {0} for all k ∈ N gives in particular ker L ∩ ran L^p = {0}, so α̃(L) ≤ p; and either p = 0, or ker L ∩ ran L^{p−1} ≠ {0} together with ran L^{p−1} ⊇ ran L^m for m ≤ p − 1 shows α̃(L) ≥ p. Hence α̃(L) = p < ∞.


We have proved: If the generalized ascent is finite, it coincides with the traditional ascent and there are no non-trivial singular chains.

Example 1.3 In general, the equality of ascent and generalized ascent does not imply the absence of singular chains. Consider X = ℓ²(C) and the linear relation L ⊆ X × X which arises from the left-shift by adding the multivalued part mul L = {(λ, 0, 0, . . .) | λ ∈ C}. Now, mul L ∩ ker L ≠ {0}, but α̃(L) = ∞ = α(L).

To see that the absence of singular chains alone does not imply that the generalized ascent is finite, consider any operator with infinite ascent, for example the left-shift in ℓ²(C). Then, the generalized ascent has to be infinite as well. But operators do not have singular chains.

The authors of [6] have established various connections between α(L) and δ(L), as

well as our generalized numbers, without explicitly introducing them. We express their

(more sophisticated) Theorem 5.7 with our notions.

Theorem 1.1 If the generalized ascent and generalized descent of L are finite, they are equal and coincide with (the usual) ascent and descent of L. In short form:

α̃(L), δ̃(L) < ∞ ⟹ α̃(L) = α(L) = δ(L) = δ̃(L).

Proof: Let

    p := α̃(L) = α(L),  q := δ(L) ≤ δ̃(L) =: r,

where the equality is due to Lemma 1.26 and the inequality due to Proposition 1.21. Assume p > q; then ran L^p = ran L^q, in particular α̃(L) ≤ q < p = α̃(L), a contradiction. Hence

    α̃(L) = α(L) = p ≤ q = δ(L) ≤ δ̃(L) = r.    (1.2)

Since α(L) = p ≤ r, we have ker L^r = ker L^p, hence ker L^p + ran L = ker L^r + ran L, so actually r = δ̃(L) ≤ p. This shows: (1.2) holds with equality everywhere.

The following lemma is (in a similar form) part of Theorem 8.3 in [6]. Again, we

present its proof for the sake of completeness.


by (ran Lp , ker Lp ).

Proof: Since ran L + ker Lp ⊇ ran Lp ⊕ ker Lp ⊇ dom L ⊇ dom Lp , we have δ(L) ≤ p.

Abbreviate R := ran Lp and N := ker Lp . The intersection LN ∩ LR is trivial, because

N ∩ R = {0}. Also, by definition of compression, LN ⊕ LR ⊆ L is obvious. The

converse inclusion remains to be shown.

Let (x, y) ∈ L, then in particular x ∈ dom L, so x = x1 + x2 for some x1 ∈ R and

x2 ∈ N . Now, there is a y2 ∈ X with (x2 , y2 ) ∈ L and (y2 , 0) ∈ Lp−1 . We see that

(x1 , y −y2 ) = (x, y)−(x2 , y2 ) ∈ L. But x1 ∈ ran Lp , so y −y2 ∈ ran Lp+1 = ran Lp = R.

Hence, (x1 , y − y2 ) ∈ LR .

Moreover, (x2 , y2 ) ∈ LN , since x2 ∈ ker Lp = N and also y2 ∈ ker Lp−1 ⊆ ker Lp =

N . Hence, (x, y) = (x1 , y − y2 ) + (x2 , y2 ) ∈ LR ⊕ LN .

    ⇐⇒ ran(L⁻¹)^m = ran(L⁻¹)^{m+1} ⇐⇒ ker(L⁻¹)^m + ran L⁻¹ ⊇ dom(L⁻¹)^m ⇐⇒ mul L^m + dom L ⊇ ran L^m.

Remark: We have shown that for the co-descent δc(L) := inf{m ∈ N : dom L^m = dom L^{m+1}}, the corresponding δ̂c(L) should be defined as

    δ̂c(L) := inf{m ∈ N : mul L^m + dom L ⊇ ran L^m}.

Theorem 1.2 Let α̃(L) = p < ∞ and δ̃(L) = q < ∞ (by Theorem 1.1, p = q) and ran L ⊆ dom L + mul L. Set N := ker L^p and R := ran L^p. Then L is completely reduced by (N, R), LN is a nilpotent operator with nilpotence p and LR is surjective onto R.

Proof: We first show that L is completely reduced by the pair (N, R). According to

Lemma 1.27, we have to show dom L ⊆ N ⊕ R.


Since α̃(L) = p, the intersection ker L ∩ ran L^p = {0} is trivial. Proposition 1.24 implies ker L^p ∩ ran L^p = {0}, so the sum is direct. The sum ker L^p + ran L equals X, because δ̃(L) = p. Now, this implies ker L^p + ran L^p ⊇ dom L^p by Proposition 1.25. The inclusion ran L ⊆ dom L + mul L shows dom L = dom L^2 and thus dom L = dom L^p. So we also have ker L^p + ran L^p ⊇ dom L.

We verify mul LN = {0}: if (0, x) ∈ LN, then x ∈ mul L ∩ N = mul L ∩ ker L^p. But this intersection is trivial, because the finiteness of the generalized ascent implies the absence of non-trivial singular chains. In particular, x = 0. Thus, LN is an operator. If (x, y) ∈ LN^p, then y ∈ N ∩ ran LN^p ⊆ ker L^p ∩ ran L^p = {0}. Proposition 1.26 implies that LN is of nilpotence p.

Let y ∈ R = ran L^p = ran L^{p+1}, where the last equality is due to the fact that δ(L) ≤ δ̃(L). But this already means that there is an element x ∈ R = ran L^p with (x, y) ∈ L, so (x, y) ∈ LR holds true as well.

This section is motivated by linear relations of type S −1 T for linear operators S, T as

they are used in [2]. As we have seen in Lemma 1.13, we can write every linear relation

L in such a way: L = Q−1 L Lop .

First, we want to characterize relations of type S⁻¹T, where S and T are operators.

Lemma 1.29 Let T : X ⊇ dom T → Z and S : Y ⊇ dom S → Z be linear operators. Then the product L := S⁻¹T is a linear relation in X × Y, which can be characterized by

    L = {(x, y) ∈ X × Y | x ∈ dom T ∧ y ∈ dom S ∧ Tx = Sy}.

Proof: We have

    S⁻¹T = {(x, y) ∈ X × Y | ∃ z ∈ Z : (x, z) ∈ T, (z, y) ∈ S⁻¹}
         = {(x, y) ∈ X × Y | ∃ z ∈ Z : (x, z) ∈ T, (y, z) ∈ S}
         = {(x, y) ∈ X × Y | x ∈ dom T ∧ y ∈ dom S ∧ ∃ z ∈ Z : Tx = z, Sy = z}
         = {(x, y) ∈ X × Y | x ∈ dom T ∧ y ∈ dom S ∧ Tx = Sy}.
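For matrices this characterization is directly computable: L = S⁻¹T consists of the pairs (x, y) with Tx = Sy, i.e. the null space of the block matrix [T  −S]. A minimal sketch (the concrete matrices are illustrative only, not from the text):

```python
import numpy as np

def null_space(A, tol=1e-10):
    """Orthonormal basis of ker A via SVD."""
    _, s, vh = np.linalg.svd(A)
    rank = int((s > tol).sum())
    return vh[rank:].T

def relation_from_split(T, S):
    """Basis of L = S^{-1}T = {(x, y) : Tx = Sy}: the null space of [T | -S]."""
    return null_space(np.hstack([T, -S]))

# Illustrative operators: T : R^2 -> R, S : R^1 -> R
T = np.array([[1.0, 0.0]])   # T(x1, x2) = x1
S = np.array([[2.0]])        # S(y) = 2y
basis = relation_from_split(T, S)   # columns are stacked pairs (x1, x2, y)
print(basis.shape)  # (3, 2): a 2-dimensional relation in R^2 x R^1
```

Every basis column (x1, x2, y) satisfies x1 = 2y; the columns with y = 0 correspond to ker L = ker T = span{e2}, matching the lemma that follows.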

In a split L = S⁻¹T, T represents the “operator-like” behavior of L, whereas S determines its multivalued part, though we have to be careful, since for L only those x ∈ X and y ∈ Y matter which are mapped into ran S ∩ ran T by T and S respectively. The first evidence for that intuition is the following lemma.


Lemma 1.30 Let L = S⁻¹T as above. Then:

(i) dom L = T −1 [ran T ∩ ran S] ⊆ dom T ,

(ii) ran L = S −1 [ran T ∩ ran S] ⊆ dom S,

(iii) ker L = ker T and

(iv) mul L = ker S.

Proof: The first two points follow immediately from Lemma 1.29. – We have

    ker L = {x ∈ X | (x, 0) ∈ L}
          = {x ∈ X | x ∈ dom T ∧ 0 ∈ dom S ∧ Tx = S0 = 0}    (1.29)
          = {x ∈ dom T | Tx = 0} = ker T.

By replacing L with L⁻¹, we get mul L = ker L⁻¹ = ker S, since L⁻¹ = T⁻¹S.

Compare the definition of the ascent of T relative to S in [2], section 1, page 149:

    α(T : S) := inf{n ∈ N | ker T ∩ ran(S⁻¹T)^n = {0}}.

Every linear relation L ⊆ X × Y can be decomposed into a product S⁻¹T of operators S and T.

Definition 1.8 We say a vector space Z and two linear operators T : X ⊇ dom T →

Z and S : Y ⊇ dom S → Z split L with respect to the auxiliary space Z, if

L = S −1 T . The triple (Z, S, T ) will be called split of L. In case dom T = dom L

and dom S = ran L, we call (Z, S, T ) a minimal split of L with respect to Z.

Every split (Z, S, T) of L induces a minimal split (Zmin, Smin, Tmin) by setting Zmin := Z, Smin := S|ran L and Tmin := T|dom L.


Note that if (Z, S, T) is a split of L, then (Z, T, S) is a split of L⁻¹.

The triple (ran L / mul L, QL, Lop) is a minimal split of L, so L can be written as

    L = (QL)⁻¹ Lop (= (QL)⁻¹ QL L),

where

    Lop : X ⊇ dom L → ran L / mul L

is the associated linear operator and

    QL : Y ⊇ ran L → ran L / mul L

is the canonical quotient map.

Proof: This was shown in Lemma 1.13 for QL : Y → Y/mul L and Lop : dom L → Y/mul L, which we can obviously compress to QL : ran L → ran L/mul L and Lop : dom L → ran L/mul L respectively. This way we guarantee the minimality of the split and Lop and QL being surjective.

We will see that minimality of the split and surjectivity of the involved operators

can always be achieved by modifying the operators involved marginally (in algebraic

terms).

Applying the lemma to L⁻¹ yields

Corollary 1.34 (dom L / ker L, Q_{L⁻¹}, (L⁻¹)op) with Q_{L⁻¹} : X ⊇ dom L → dom L / ker L splits L⁻¹ minimally.

Corollary 1.35 (dom L / ker L, (L⁻¹)op, Q_{L⁻¹}) splits L.

But the two splits of L we constructed are actually isomorphic, in an adequate sense.


Definition 1.9 Let (Z1, S1, T1) and (Z2, S2, T2) be two splits of L. A linear map J : Z1 → Z2 such that

    S2 = JS1 and T2 = JT1

will be called a morphism (Z1, S1, T1) → (Z2, S2, T2). We will also say J induces such a morphism.

If (Z1, S1, T1) is a split of L and J : Z1 → Z2 is an injective linear map, then (Z2, S2, T2) with T2 = J ◦ T1 and S2 = J ◦ S1 is a split of L.

Proof: We have

    S2⁻¹T2 = (J ◦ S1)⁻¹(J ◦ T1) = S1⁻¹J⁻¹JT1 = S1⁻¹T1 = L,

because J⁻¹J = I_{Z1}, see Lemma 1.9.

Remark: A straightforward calculation verifies that SplitL is a category, given by the class

    Ob(SplitL) := {(Z, S, T) | Z an F-vector space, S : Y ⊇ dom S → Z, T : X ⊇ dom T → Z linear operators such that L = S⁻¹T}

of splits, together with morphisms

    MorSplitL((Z1, S1, T1), (Z2, S2, T2)) := {J : Z1 → Z2 linear map such that S2 = JS1, T2 = JT1}

for two given splits (Z1, S1, T1), (Z2, S2, T2), where the composition and the identity are given by the composition of linear maps and the identity on the vector space respectively. That justifies the term morphism defined above.

Proposition 1.37 The following diagram of vector spaces and linear operators commutes:

    Lop : dom L → ran L/mul L,        QL : ran L → ran L/mul L,
    Q_{L⁻¹} : dom L → dom L/ker L,    (L⁻¹)op : ran L → dom L/ker L,

together with the linear isomorphism J : ran L/mul L → dom L/ker L, i.e. J Lop = Q_{L⁻¹} and J QL = (L⁻¹)op. Here J is the canonical linear isomorphism ran L/mul L → dom L/ker L associated with (L⁻¹)op, and J⁻¹ is the canonical linear isomorphism L̃op associated with Lop.


Proof: Write ((L⁻¹)op)~ = J and L̃op for the canonical isomorphisms associated with (L⁻¹)op and Lop. The only thing we have to prove is J⁻¹ = L̃op: let x ∈ dom L; then

    J L̃op(x + ker L) = ((L⁻¹)op)~ Lop x
                     = ((L⁻¹)op)~ (y + mul L)
                     = (L⁻¹)op y
                     = z + ker L

for some y with (x, y) ∈ L and some z ∈ dom L such that (z, y) ∈ L. But then (x − z, 0) ∈ L, so z + ker L = x + ker L. This shows

    J L̃op = I_{dom L/ker L}  and  L̃op J = I_{ran L/mul L}.

Corollary 1.38 The map J from the preceding proposition induces an isomorphism

    (ran L/mul L, QL, Lop) → (dom L/ker L, (L⁻¹)op, Q_{L⁻¹}).

Using Lemma 1.30, we get the following important result about minimal splits (Z, S, T) with surjective T or surjective S: they are actually all isomorphic.

If (Z, S, T) is a minimal split of L such that T is surjective, then there exists an isomorphism

    (dom L/ker L, (L⁻¹)op, Q_{L⁻¹}) → (Z, S, T).

As a consequence, all minimal splits (Z, S, T) with surjective T are isomorphic.

The dual statement for splits (Z, S, T) with surjective S also holds true.

Proof: Let T̃ : dom T / ker T → Z be the linear isomorphism associated with T. By Lemma 1.30 we have ker T = ker L, and by minimality dom T = dom L, so actually T̃ : dom L / ker L → Z. We claim that T̃ induces an isomorphism (dom L/ker L, (L⁻¹)op, Q_{L⁻¹}) → (Z, S, T), that is,

    S = T̃ (L⁻¹)op and T = T̃ Q_{L⁻¹}.


The second equality is the very definition of T̃, so we only have to prove the first equality. It is more convenient to show the equivalent equation T̃⁻¹S = (L⁻¹)op: we have

    T̃⁻¹S = Q_{L⁻¹}T⁻¹S = Q_{L⁻¹}L⁻¹ = Q_{L⁻¹}(Q_{L⁻¹})⁻¹(L⁻¹)op,

where the last equality is an immediate consequence of Corollary 1.38. Now, since Q_{L⁻¹} is an operator, by Lemma 1.9 it holds true that Q_{L⁻¹}(Q_{L⁻¹})⁻¹ = I_{dom L/ker L}. But that means

    T̃⁻¹S = I_{dom L/ker L}(L⁻¹)op = (L⁻¹)op.

A minimal split (Z, S, T) of L has surjective T if and only if S is surjective.

Proof: Consider “⇒”: This is obvious for the split (ran L/mul L, QL, Lop), and for every other minimal split (Z, S, T) of L we have an isomorphism J : ran L/mul L → Z such that S = JQL, which immediately implies the surjectivity of S. – The other implication follows from duality.

Definition 1.10 A minimal split (Z, S, T ) with surjective S or T will itself hence-

forth be called surjective.

As we will see now, every split (Z, S, T) can be made minimal by restricting the domains of S and T, and it can be made surjective by replacing the auxiliary space Z by the range of S, which coincides with the range of T.

Lemma 1.40 Let (Z, S, T) be a minimal split of L; then ran S = ran T =: Z1. The maps S1 : Y ⊇ ran L → Z1 and T1 : X ⊇ dom L → Z1 given by S and T split L in Z1, and the embedding Z1 ֒→ Z is a morphism (Z1, S1, T1) → (Z, S, T).

Proof: Let z ∈ ran T , then there exists an x ∈ dom L with T x = z and (x, y) ∈ L for

some y ∈ ran L. But since L = S −1 T , the latter means Sy = T x = z, so z ∈ ran S.

The converse inclusion follows from using what we just showed on the split (Z, T, S)

of L−1 . The rest of the lemma is trivial now.

Remark: At this point it becomes clear that all results of [2] using only numbers and spaces

concerning S −1 T with operators S, T can be applied to arbitrary relations.


Now suppose L ⊆ K for a linear relation K which is split by (Z, S, T). One could surmise that now (Z1, S1, T1) splits L, where Z1 := Z, S1 := S|ran L and T1 := T|dom L. But, unfortunately, this is not true in general. For instance, Lemma 1.30 tells us that ker L = ker K ∩ dom L and analogously mul L = mul K ∩ ran L are necessary conditions, though in general we only have “⊆” between these spaces.

Example 1.4 Let X = Y = R², and consider K := span{(e1, 0), (0, e1), (0, e2)} and L := span{(e1, e2)}. Then L ⊆ K and ker L = {0}, but

    ker K ∩ dom L = span{e1} ≠ {0}.

K is split, for instance, by (R², S, T) with the zero operators S : R² → R², S = 0, and T : span{e1} → R², T = 0. But L is already an operator, so (R², I_{R²}, L) splits L. Obviously, it is impossible to construct a split of L isomorphic to that formed by (restrictions of) S and T.

But if we have ker L = ker K ∩ dom L or mul L = mul K ∩ ran L, then we have both equalities, and (Z1, S1, T1), as defined above, splits L.
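The subspace computations in Example 1.4 can be checked numerically. The helper `rel_kernel` below is an ad-hoc construction (not from the text): it computes the kernel of a relation given by spanning column vectors of stacked pairs (x, y) ∈ R² × R².

```python
import numpy as np

def null_space(A, tol=1e-10):
    """Orthonormal basis of ker A via SVD."""
    _, s, vh = np.linalg.svd(A)
    rank = int((s > tol).sum())
    return vh[rank:].T

def rel_kernel(B):
    """ker of the relation spanned by the columns of B (pairs (x, y) stacked)."""
    C = null_space(B[2:])   # coefficients c whose combination has y-part 0
    return B[:2] @ C        # the corresponding x-parts span the kernel

# K = span{(e1,0), (0,e1), (0,e2)}, L = span{(e1,e2)} as in Example 1.4
K = np.array([[1, 0, 0],
              [0, 0, 0],
              [0, 1, 0],
              [0, 0, 1]], dtype=float)
L = np.array([[1], [0], [0], [1]], dtype=float)

ker_L = rel_kernel(L)   # no columns: ker L = {0}
ker_K = rel_kernel(K)   # one column, a multiple of e1
print(ker_L.shape[1], ker_K.shape[1])  # 0 1
```

Since ker K = span{e1} = dom L here, this confirms ker K ∩ dom L = span{e1} ≠ {0} = ker L.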

Lemma Let L ⊆ K be linear relations and let K be split by (Z, S, T). If additionally

    ker L = ker K ∩ dom L    (1.3)

or

    mul L = mul K ∩ ran L,    (1.4)

then both equalities hold and (Z1, S1, T1) splits L, where Z1 := Z, S1 := S|ran L and T1 := T|dom L.

Proof: We have

dom T |dom L = dom T ∩ dom L = dom L,

since dom T ⊇ dom K ⊇ dom L, and dom S|ran L = ran L analogously. Furthermore, for x ∈ dom L and y ∈ ran L:

    (x, y) ∈ K ⇐⇒ x ∈ dom L ∧ y ∈ ran L ∧ Tx = Sy
             ⇐⇒ x ∈ dom T|dom L ∧ y ∈ dom S|ran L ∧ T|dom L(x) = S|ran L(y)
             ⇐⇒ (x, y) ∈ S1⁻¹T1.

Now, suppose (1.3) holds true. We will first show that equality (1.4) also holds. Let

y ∈ mul K ∩ ran L, so (0, y) ∈ K and there is some x ∈ dom L such that (x, y) ∈ L.


Then (x, 0) = (x, y) − (0, y) ∈ K, so x ∈ ker K ∩ dom L = ker L by (1.3), hence (x, 0) ∈ L; with (x, y) ∈ L this implies (0, y) ∈ L, so y ∈ mul L. The converse inclusion is trivial.

– If conversely (1.4) holds true, we have ker L⁻¹ = ker K⁻¹ ∩ dom L⁻¹, which by what we just showed implies mul L⁻¹ = mul K⁻¹ ∩ ran L⁻¹, or equivalently (1.3).

Again, assume (1.3), in order to show S1−1 T1 ⊆ L. Using the equivalences we have

proved in the beginning, we examine (x, y) ∈ (dom L) × (ran L) with (x, y) ∈ K. We

must have some x′ ∈ dom L such that (x′ , y) ∈ L. But then (x − x′ , 0) ∈ K, which

means x − x′ ∈ ker K ∩ dom L = ker L. So actually (x − x′ , 0) ∈ L, which implies

(x, y) = (x − x′ , 0) + (x′ , y) ∈ L.

In [2], the authors also make use of T S −1 for operators S, T . In the following para-

graphs, we want to concern ourselves with this “commutation” of the split components.

This is, of course, only possible for the case X = Y , so in the following we will always

consider a linear relation L ⊆ X × X.

Given a split (Z, S, T) of L with X = Y, set

    Lc := TS⁻¹ = {(Sx, Tx) | x ∈ dom S ∩ dom T},

a linear relation; it is determined by the split rather than by L alone, but the split will always be clear from the context, so we just call it “L commutated”.

Right from the definition, we see that Lc depends only on the behavior of S and T on dom S ∩ dom T.

Again, it is easy to see, but useful, that commutating a relation commutes with

inverting.


Proof: Lc⁻¹ = (TS⁻¹)⁻¹ = ST⁻¹ = (L⁻¹)c, since L⁻¹ is split by (Z, T, S).

The usual linear subspaces ker, ran, dom associated with the powers of S −1 T and

T S −1 are the basis of all considerations in [2], cf. section 1, page 149 ibid. We will

partly recover their notions in the more general context of linear relations here.

Following [2], later we will only consider relations L and splits (Z, S, T ) of those

with dom T ⊆ dom S. Dual to Lemma 1.30 we have

(i) dom Lc = S[dom T ], and dom Lc = ran S if and only if dom S ⊆ dom T + ker S,

(ii) ran Lc = T [dom S], and ran Lc = ran T if and only if dom T ⊆ dom S + ker T ,

(iii) ker Lc = S[ker L] = S[ker T ] and

(iv) mul Lc = T[mul L] = T[ker S].

Proof: We have

dom Lc = {y ∈ Y | ∃ z ∈ Y : (y, z) ∈ Lc }

= {y ∈ Y | ∃ z ∈ Y ∃ x ∈ dom S ∩ dom T : Sx = y ∧ T x = z}

= {y ∈ Y | ∃ x ∈ dom S ∩ dom T : Sx = y} = S[dom T ].

Passing to the inverse, we get the following by what we have just shown:

So, let x ∈ dom T, then Tx = Tv for some v ∈ dom S, which means x − v ∈ ker T and

thus

x = v + (x − v) ∈ dom S + ker T.

Again, exchanging the roles of S and T via inverting Lc yields


Concerning ker Lc:

    ker Lc = {y ∈ Y | ∃ x ∈ dom S ∩ dom T : Sx = y, Tx = 0 = S0}
           = S[{x ∈ dom S ∩ dom T | (x, 0) ∈ L}] = S[ker L] = S[ker T].

Also:

    mul Lc = ker(Lc)⁻¹ = ker(L⁻¹)c = T[ker L⁻¹] = T[mul L] = T[ker S].

This justifies setting

    δ̃(Lc) := inf{n ∈ N | ran Lc + ker Lc^n = Y},

compare the definition of the descent of T relative to S in [2], section 1, page 149:

    δ(T : S) := inf{n ∈ N | ran T + ker(TS⁻¹)^n = Y}.

Remark: Without the additional assumption dom T ⊆ dom S, we would have to consider the relation (TS⁻¹)|ran T, if we wanted to reconstruct exactly the same spaces R′_m from [2]. Indeed, we have

    R′_1 = TX = ran T,  R′_m = ran((TS⁻¹)|ran T)^{m−1}  ∀ m ≥ 2.

But we see that the case m = 1 seems to be an exception, and the relation which we have to consider for m ≥ 2 is rather cumbersome. A closer examination can improve the situation: It is easy to see that ((TS⁻¹)|ran T)^m = ((TS⁻¹)^m)|ran T for all m ≥ 1. Concerning the first flaw, we observe

    ran((TS⁻¹)^0)|ran T = ran I_{ran T} = ran T.

So we have

    R′_m = ran((TS⁻¹)^{m−1})|ran T  ∀ m ≥ 1,

which is compact enough, but we would no longer deal with ranges of powers of a relation. On the other hand, note that dom T ⊆ dom S implies

    ker L^p + ran L ⊆ dom L + ran L ⊆ dom T + dom S ⊆ dom S.

So if dom S ≠ X, δ̃(L) < ∞ is impossible.

A priori, the kernels and ranges of powers of Lc = TS⁻¹ have little to do with those of S⁻¹T. But powers of Lc are closely related to those of L: if x1, . . . , xn+1 is an L-chain, i.e. (xk, xk+1) ∈ L for k = 1, . . . , n, then Txk = Sxk+1 for all k, and therefore

    (Tx1, Sxn+1) ∈ Lc^{n−1}.


Corollary 1.46 For n ≥ 1 we have

(i) ker L^n = T⁻¹[ker Lc^{n−1}] and

(ii) ker Lc^n = S[ker L^n].

If additionally dom T ⊆ dom S, we have for n ≥ 1:

(iii) ran L^n = S⁻¹[ran Lc^n] and

(iv) ran Lc^n = T[ran L^{n−1}].

Proof: We have

    x ∈ ker L^n ⇐⇒ (x, 0) ∈ L^n
                ⇐⇒ (Tx, S0) = (Tx, 0) ∈ Lc^{n−1}
                ⇐⇒ x ∈ T⁻¹[ker Lc^{n−1}];

also:

    x ∈ S[ker L^n]
    ⇐⇒ ∃ z ∈ dom S ∩ ker L^n : Sz = x
    ⇐⇒ ∃ z ∈ dom S ∩ dom T : (z, 0) ∈ L^n ∧ Sz = x
    ⇐⇒ ∃ z ∈ dom S ∩ dom T ∃ (y1, yn) ∈ Lc^{n−1} : Tz = y1, yn = S0 = 0, Sz = x
    ⇐⇒ ∃ z ∈ dom S ∩ dom T ∃ (y1, 0) ∈ Lc^{n−1} : Tz = y1, Sz = x
    ⇐⇒ ∃ y1 ∈ Y : (x, y1) ∈ Lc, (y1, 0) ∈ Lc^{n−1}
    ⇐⇒ (x, 0) ∈ Lc^n
    ⇐⇒ x ∈ ker Lc^n.

Concerning (iii):

    x ∈ ran L^n ⇐⇒ ∃ x1 ∈ dom T : (x1, x) ∈ L^n
                ⇐⇒ ∃ x1 ∈ dom T : (Tx1, Sx) ∈ Lc^{n−1},

where the last step is possible, because x ∈ ran L^n ⊆ ran L ⊆ dom S. Since we assumed dom T ⊆ dom S, we can also apply S to x1, so that the last line is equivalent to

    ∃ x1 ∈ dom S ∩ dom T : (Sx1, Sx) ∈ Lc^n
    ⇐⇒ Sx ∈ ran Lc^n ⇐⇒ x ∈ S⁻¹[ran Lc^n].


    y ∈ ran Lc^n
    ⇐⇒ ∃ z ∈ Y : (z, y) ∈ Lc^n
    ⇐⇒ ∃ z ∈ Y, x1 ∈ dom S, xn ∈ dom T : (x1, xn) ∈ L^{n−1}, z = Sx1, Txn = y
    ⇐⇒ ∃ x1 ∈ dom S, xn ∈ dom T : (x1, xn) ∈ L^{n−1}, Txn = y.

(x1, xn) ∈ L^{n−1} already implies x1 ∈ dom T, and on the other hand, because of the additional condition dom T ⊆ dom S, the last line in the chain of equivalences is in fact equivalent to

    y ∈ T[dom T ∩ ran L^{n−1}] = T[ran L^{n−1}].

Remark: Note that (iv) is in general not true for n = 0. We only have

The following lemma condenses Lemma 1.3 and Lemma 1.4 from [2]. We will need it later on to prove a perturbation result.

Lemma For λ ≠ 0 and m ≥ 1 we have

(i) ker Lc^m = (λS − T)T⁻¹[ker Lc^{m−1}] = (λS − T)[ker L^m],

(ii) ker L^m = T⁻¹(λS − T)[ker L^{m−1}],

(iii) ran L^m ∩ dom T = (λS − T)⁻¹T[ran L^{m−1}] = (λS − T)⁻¹[ran Lc^m] and

(iv) ran Lc^m = T(λS − T)⁻¹[ran Lc^{m−1}].

Proof:

(i) Using Corollary 1.46 as well as Lemmas 1.6, 1.9 and 1.5, we get

    (λS − T)[ker L^m] = (λS − T)T⁻¹[ker Lc^{m−1}]           (1.46)
                      ⊆ (λST⁻¹ − I_{ran T})[ker Lc^{m−1}]    (1.6 + 1.9)
                      ⊆ Lc⁻¹[ker Lc^{m−1}] + ker Lc^{m−1}    (1.5)
                      = ker Lc^m + ker Lc^{m−1} = ker Lc^m.

Conversely, let y ∈ ker Lc^m = S[ker L^m], so there exists some x0 ∈ ker L^m such that Sx0 = y. x0 ∈ ker L^m means we find an L-chain x0, x1, . . . , xn−1, xn = 0, i.e. (xk, xk+1) ∈ L and hence Txk = Sxk+1 for all k. Set

    u := Σ_{k=0}^{n−1} λ^{−(k+1)} xk.

Then (note Txn−1 = Sxn = 0)

    Tu = Σ_{k=0}^{n−2} λ^{−(k+1)} Txk
       = Σ_{k=0}^{n−2} λ^{−(k+1)} Sxk+1
       = (λS) Σ_{k=1}^{n−1} λ^{−(k+1)} xk
       = (λS)u − Sx0 = (λS)u − y,

so y = (λS − T)u ∈ (λS − T)[ker L^m], since every xk, and hence u, lies in ker L^m. Together with ker L^m = T⁻¹[ker Lc^{m−1}] (Corollary 1.46), both equalities in (i) follow.

(ii) By the first point, we have

    ker L^m = T⁻¹[ker Lc^{m−1}] = T⁻¹[(λS − T)[ker L^{m−1}]].

(iii) For x ∈ ran L^m ∩ dom T we have

    (λS − T)x ∈ S[ran L^m] + T[ran L^m] ⊆ ran Lc^m + ran Lc^{m+1} ⊆ ran Lc^m = T[ran L^{m−1}],

so by Lemma 1.5, we have ran L^m ∩ dom T ⊆ (λS − T)⁻¹T[ran L^{m−1}]. – We show the converse inclusion by induction. For m = 1 let x ∈ (λS − T)⁻¹T[ran L⁰], say (λS − T)x = Tu with u ∈ dom T. Then Sx = T(λ⁻¹(x + u)), which means (λ⁻¹(x + u), x) ∈ L. So we have x ∈ ran L ∩ dom T. The induction step is very similar: Let x ∈ (λS − T)⁻¹T[ran L^m], say (λS − T)x = Tu with u ∈ ran L^m ∩ dom T. By the induction hypothesis x ∈ ran L^m ∩ dom T, and again Sx = T(λ⁻¹(x + u)), i.e. (λ⁻¹(x + u), x) ∈ L. Since λ⁻¹(x + u) ∈ ran L^m ∩ dom T, we have x ∈ ran L^{m+1} ∩ dom T. The second equality in (iii) follows from Corollary 1.46.

(iv) For m = 1 we have

    T(λS − T)⁻¹[ran Lc⁰] = T(λS − T)⁻¹[Y] = T[dom S ∩ dom T] = ran T = ran Lc¹,

where the last step follows from Lemma 1.44. For all other natural numbers, we use (iii):

    ran Lc^{m+1} = T[ran L^m ∩ dom T]
                 = T[(λS − T)⁻¹[ran Lc^m]]
                 = T(λS − T)⁻¹[ran Lc^m]  ∀ m ≥ 1.


2 POLES OF THE GENERALIZED RESOLVENT

2.1 Algebraic decomposition

As in the treatment of this subject in the single operator case, we set out to decompose the domain of our relation L into a direct sum of a kernel part and a range part, in our case depending on a given split (Y, S, T) of L, in such a way that the restrictions of S and T yield a bijective and a nilpotent part of L.

We have to further investigate the properties of commutated splits. So in the fol-

lowing, let L ⊆ X × X be a linear relation, (Y, S, T ) a split of L and let Lc be L

commutated with respect to (Y, S, T ), so Lc = T S −1 .

The following lemma can also be found on page 149 of [2]. It establishes a relationship between α̃(L) and α̃(Lc), and between δ̃(L) and δ̃(Lc), respectively.

From now on we will only consider splits (Z, S, T ) of L with dom T ⊆ dom S.

Lemma 2.1 For all k, m ∈ N we have

(i) ker Lc^k ∩ ran Lc^m = S[ker L^k ∩ ran L^m] and

(ii) ker L^k + ran L^m = S⁻¹[ker Lc^k + ran Lc^m].

Proof:

(i) For k = 0 we have

    ker Lc⁰ ∩ ran Lc^m = {0} = S[{0}] = S[ker L⁰ ∩ ran L^m].

For m = 0 we have

    ker Lc^k ∩ ran Lc⁰ = ker Lc^k = S[ker L^k] = S[ker L^k ∩ ran L⁰].

Now let k, m ≥ 1; then, for y ∈ Y,

    y ∈ ker Lc^k ∩ ran Lc^m = S[ker L^k] ∩ T[ran L^{m−1}]
    ⇐⇒ ∃ (x1, 0) ∈ L^k, (x′1, x′m) ∈ L^{m−1} : Sx1 = y = T(x′m)
    ⇐⇒ ∃ (x1, 0) ∈ L^k, (x′1, x1) ∈ L^m : Sx1 = y
    ⇐⇒ y ∈ S[ker L^k ∩ ran L^m].


(ii) Let z ∈ ker L^k + ran L^m, say x ∈ ker L^k, y ∈ ran L^m such that z = x + y. Using Corollary 1.46 we have y ∈ S⁻¹[ran Lc^m]. But also Sx ∈ S[ker L^k] = ker Lc^k, so

    Sz = Sx + Sy ∈ ker Lc^k + ran Lc^m,

as desired. – If on the other hand z ∈ S⁻¹[ker Lc^k + ran Lc^m] ⊆ dom S, we find x ∈ ker Lc^k = S[ker L^k], y ∈ ran Lc^m = T[ran L^{m−1}] with Sz = x + y. We thus find v ∈ dom S ∩ ker L^k and w ∈ dom T ∩ ran L^{m−1} such that Sv = x and Tw = y. We conclude

    Sz = Sv + Tw,

or equivalently S(z − v) = Tw, so (w, z − v) ∈ L, which means z − v ∈ ran L^m. So

    z = v + (z − v) ∈ ker L^k + ran L^m.

Corollary 2.2 α̃(Lc) ≤ α̃(L).

Proof: Let p := α̃(L). The case p = ∞ is trivial, so let p < ∞; then, by Lemma 2.1,

    ker Lc ∩ ran Lc^p = S[ker L ∩ ran L^p] = S[{0}] = {0},

so α̃(Lc) ≤ p.

The situation is more complex for the generalized descent. For the traditional de-

scent we had Proposition 1.25, which stated that the definition of the descent remains

true for ran Lk instead of ran L. We can establish a similar result for the generalized

descent of Lc under the condition dom L ⊆ ran L. In [2], this is Proposition 3.3.

Proposition 2.3 Let dom L ⊆ ran L. If δ̃(Lc) = p < ∞, then for all k ≥ 1:

(i) ker L^p + ran L^k = dom S and

(ii) ker Lc^p + ran Lc^k = Y.

Proof: Concerning (i): By (ii) and Lemma 2.1 we have ker L^p + ran L^k = S⁻¹[ker Lc^p + ran Lc^k] = S⁻¹[Y]. That means

    ker L^p + ran L^k ⊇ dom S.


On the other hand ker Lp + ran Lk ⊆ dom L + ran L ⊆ dom T + dom S = dom S.

Concerning (ii): “⊆” is trivial. Since δ̃(Lc) = p, we have the equality for k = 1.

Now, suppose it was shown for some k ≥ 1 and let z ∈ Y. Then we have some x ∈ ker Lc^p, y ∈ ran Lc^k such that z = x + y. Corollary 1.46 gives y ∈ T[ran L^{k−1}], so there is some w ∈ dom T ∩ ran L^{k−1} with y = Tw. Now, also w ∈ dom S, so from the induction assumption and Lemma 2.1 we get

    w = u + v

for some u ∈ ker L^p and v ∈ ran L^k. Hence

    y = Tw ∈ T[ker L^p] + T[ran L^k] = T[T⁻¹[ker Lc^{p−1}]] + ran Lc^{k+1} ⊆ ker Lc^p + ran Lc^{k+1},    (1.46)

as desired.

Corollary 2.4 If δ̃(Lc) < ∞ and dom L ⊆ ran L, then Y = S[dom L^m] + ran Lc^k for all k, m ∈ N.

Proof: Let p := δ̃(Lc); by Proposition 2.3 we have Y = ker Lc^p + ran Lc^k for all k ∈ N. By Proposition 1.16, we have ker L^p ⊆ dom L^m, and

    Y = ker Lc^p + ran Lc^k ⊆ S[dom L^m] + ran Lc^k ⊆ Y.

Before we can prove connections between δ and δ̃ for L and Lc similar to Lemma 1.26, we need an isomorphism-type result similar to 1.19, which makes use of Lc. The following lemma is the second part of Lemma 3.2 in [2]. We give the proof omitted there.

Lemma 2.5 Under the assumptions of Corollary 2.4,

    ran L^m / ran L^{m+k} ≅ (dom L^m + ran L^k)/(ker L^m + ran L^k) ≅ Y/(ker Lc^m + ran Lc^k)  ∀ k, m ≥ 1.


Proof: Consider the linear map

    A : dom L^m + ran L^k → Y/(ker Lc^m + ran Lc^k),  x ↦ [Sx].

It suffices to prove that A is well-defined, surjective and ker A = ker L^m + ran L^k. The proof relies heavily on Corollary 1.46, which we will use without further notice.

• A is well-defined: Firstly, dom L^m + ran L^k ⊆ dom T + dom S ⊆ dom S, so Sx and hence the class [Sx] make sense for every x ∈ dom L^m + ran L^k.

• A is surjective: By Corollary 2.4 we have Y = S[dom L^m] + ran Lc^k, so every class in Y/(ker Lc^m + ran Lc^k) is of the form [Sx] with x ∈ dom L^m.

• ker A = ker L^m + ran L^k: If x ∈ ker L^m and y ∈ ran L^k = S⁻¹[ran Lc^k], we have Sx ∈ ker Lc^m and Sy ∈ ran Lc^k, so S(x + y) ∈ ker Lc^m + ran Lc^k, which means [S(x + y)] = 0. – Now, let x ∈ dom L^m, y ∈ ran L^k = S⁻¹[ran Lc^k] be such that

    S(x + y) ∈ ker Lc^m + ran Lc^k = S[ker L^m] + ran Lc^k,

so we find x1 ∈ ker L^m and y1 ∈ ran Lc^k such that

    S(x + y) = Sx1 + y1.

Hence, there exists some y2 ∈ ran L^{k−1} with S(x − x1) = Ty2, which in turn means (y2, x − x1) ∈ L. We get x − x1 ∈ ran L^k, so there is a y3 ∈ ran L^k such that x = x1 + y3. Eventually, we get

    x + y = x1 + (y3 + y) ∈ ker L^m + ran L^k.

Proposition 2.6 If δ̃(Lc) = 0, then ran Lc^k = Y and ran L^k = dom S for all k ≥ 1; in particular δ(Lc) = 0, and

    δ(L) = 0 if dom S = X,  δ(L) = 1 if dom S ≠ X.

If 0 < δ̃(Lc) = p < ∞, then δ(L) = δ(Lc) = p.

Proof: Firstly, suppose δ̃(Lc) = 0, i.e. ran Lc = Y; then ran Lc^k = Y and thus ran L^k = S⁻¹[ran Lc^k] = dom S for all k ≥ 1. In particular δ(Lc) = 0, and δ(L) is as claimed, since ran L⁰ = X.

Now we examine the situation in case of 0 < δ̃(Lc) = p < ∞, which means ker Lc^p + ran Lc = Y and ker Lc^{p−1} + ran Lc ≠ Y.


• From Proposition 1.21 we already know δ(Lc) ≤ δ̃(Lc) = p.

• Concerning δ(L) ≤ p: By Proposition 2.3 and its Corollary 2.4 we know ker L^p + ran L = dom S = dom L^p + ran L, and Lemma 1.19 gives

    ran L^p / ran L^{p+1} ≅ (dom L^p + ran L)/(ker L^p + ran L) = {0},

so ran L^p ⊆ ran L^{p+1}, which shows δ(L) ≤ p.

• Suppose δ(L) < p; then ran L^{p−1} = ran L^p. Now, Lemma 2.5 gives

    {0} = ran L^{p−1}/ran L^p ≅ Y/(ker Lc^{p−1} + ran Lc),

so ker Lc^{p−1} + ran Lc = Y, which contradicts δ̃(Lc) = p. Hence δ(L) = p.

• Since ran L^{p−1} ≠ ran L^p and S is an operator,

    ran L^{p−1} = S⁻¹[ran Lc^{p−1}] ≠ S⁻¹[ran Lc^p] = ran L^p

forces ran Lc^{p−1} ≠ ran Lc^p, which implies δ(Lc) ≥ p, and thus δ(Lc) = p.

Now we are able to show analogues of Theorem 1.1 and Theorem 1.2 for the case where both α̃(L) and δ̃(Lc) are finite. Note again that due to our assumption dom T ⊆ dom S we must have δ̃(L) = ∞ whenever dom S ≠ X, so the conditions of the previously mentioned theorems cannot be satisfied.

Theorem If the generalized ascent of L and the generalized descent of Lc are finite, then they are equal and coincide with the ascent of L. In short form:

    α̃(L), δ̃(Lc) < ∞ =⇒ α̃(L) = α(L) = δ̃(Lc).

In case δ̃(Lc) = 0 together with dom S = X, or δ̃(Lc) > 0, the numbers also coincide with the descent of L.

Proof: The proof is very similar to the one of Theorem 1.1: Let

    p := α̃(L) = α(L) (Lemma 1.26),  q := δ(L),  δ̃(Lc) =: r.

First we examine the case of r = δ̃(Lc) = 0. Then dom S ⊆ ran L^m for all m ∈ N by Proposition 2.6, and because of ker L ⊆ dom L ⊆ dom T ⊆ dom S we have

    ker L ∩ ran L⁰ = ker L = ker L ∩ ran L^p = {0}.

Hence α̃(L) = 0 = δ̃(Lc), and in case dom S = X this also coincides with δ(L) by Proposition 2.6.

Now, let r > 0; then q = δ(L) = δ̃(Lc) = r by Proposition 2.6. Assume p > q; then ran L^p = ran L^q, in particular

    ker L ∩ ran L^q = ker L ∩ ran L^p = {0},

so α̃(L) ≤ q < p = α̃(L), an obvious contradiction, which means we have

    α̃(L) = α(L) = p ≤ q = δ(L) = δ̃(Lc) = r.    (2.5)

It remains to show that r ≤ p. By Corollary 2.2, we know α̃(Lc) ≤ α̃(L) = p < ∞. Lemma 1.26 thus gives α(Lc) = α̃(Lc) ≤ p ≤ r, so ker Lc^r = ker Lc^p, in particular

    Y = ker Lc^r + ran Lc = ker Lc^p + ran Lc,

so actually r = δ̃(Lc) ≤ p. This shows: (2.5) holds with equality everywhere.

The following two theorems are Theorem 4.2 and Theorem 4.3 respectively in [2].

Theorem If α̃(L) = δ̃(Lc) = p with 0 < p < ∞, then

    dom S = ker L^p ⊕ ran L^p and Y = ker Lc^p ⊕ ran Lc^p.    (2.6)

Conversely, if (2.6) holds for some 0 < p < ∞, then α̃(L) and δ̃(Lc) are finite and coincide.

Proof: By Proposition 2.3, we have ker L^p + ran L^p = dom S and ker Lc^p + ran Lc^p = Y. Since α̃(L) = p, the intersection ker L ∩ ran L^p = {0} is trivial. Then, using Lemma 2.1, we get ker Lc ∩ ran Lc^p = S[{0}] = {0}. We already know from Proposition 1.24 that in that case

    ker L^p ∩ ran L^p = {0} and ker Lc^p ∩ ran Lc^p = {0},

so (2.6) follows. – If now (2.6) holds for 0 < p < ∞, we have

    ker L ∩ ran L^p ⊆ ker L^p ∩ ran L^p = {0},  ker Lc^p + ran Lc ⊇ ker Lc^p + ran Lc^p = Y,

so α̃(L), δ̃(Lc) ≤ p, and by the preceding proposition, they coincide.

Theorem Let α̃(L) = δ̃(Lc) = p < ∞. Then S = S1 ⊕ S2 and T = T1 ⊕ T2, where S1, T1 : ker L^p → ker Lc^p and S2, T2 : ran L^p → ran Lc^p are defined by

    S1 := S|ker L^p, T1 := T|ker L^p, S2 := S|ran L^p and T2 := T|ran L^p.

We have

(i) T2 is bijective,

(ii) S1 is bijective and L_{ker L^p} is a nilpotent operator whose nilpotence index is p.


Proof: By Lemma 2.1 and Corollary 1.46 we have

    S[ker L^p] = ker Lc^p,  S[ran L^p] ⊆ ran Lc^p    (2.7)

and

    T[ker L^p] ⊆ ker Lc^{p−1} ⊆ ker Lc^p,  T[ran L^p] = ran Lc^{p+1} ⊆ ran Lc^p,    (2.8)

so, by the decompositions (2.6), S = S1 ⊕ S2 and T = T1 ⊕ T2. From the calculations in (2.7) and (2.8) we also see that T2 and S1 are surjective. Also:

    ker L^p ∩ ker S = ker L^p ∩ mul L ⊆ ker L^p ∩ ran L^p = {0},    (2.9)

where the inclusion is due to Proposition 1.16, since α̃(L) = p < ∞. Obviously, L^p_{ker L^p} = 0_{ker L^p}, so L_{ker L^p} is nilpotent with nilpotence index less than or equal to p. Suppose it was strictly smaller than p, so L^{p−1}_{ker L^p} = 0_{ker L^p}. That would mean ker L^{p−1} = ker L^p, contradicting α(L) = p.

We turn to the main part of this work. In the following, X and Y will denote F-Banach spaces, where F ∈ {R, C}, and S, T will denote linear operators from X to Y. First, we generalize the notions of resolvent sets and resolvents:

    ρS(T) := {λ ∈ F | λS − T : X ⊇ dom T → Y is bijective}

is called the S-resolvent set of T, and for λ ∈ ρS(T) we call Rλ(S, T) := (λS − T)⁻¹ the S-resolvent of T in λ.

We differ from [2] in that we first assume S and T to be bounded, everywhere defined

operators, and relax that assumption later on.



Lemma 2.8 The mapping λ ↦ Rλ(S, T) is analytic on the open set ρS(T).

Proposition 2.9 Let A be a linear operator in X such that A⁻¹ ∈ B(X), and let B be a linear operator with dom A ⊆ dom B and BA⁻¹ ∈ B(X) such that ‖BA⁻¹‖ < 1. Then (A − B)⁻¹ ∈ B(X) with

    (A − B)⁻¹ = A⁻¹ Σ_{k=0}^∞ (BA⁻¹)^k.    (2.10)

Proof: Set C := BA⁻¹; then

    Σ_{k=0}^∞ ‖C^k‖ ≤ Σ_{k=0}^∞ ‖C‖^k < ∞,

since ‖C‖ < 1. Thus, because X is a Banach space, Σ_{k=0}^∞ C^k converges and C^k → 0. We observe

    (I − C) Σ_{k=0}^n C^k = I − C^{n+1} = (Σ_{k=0}^n C^k)(I − C),

so (I − C) is invertible with

    (I − C)⁻¹ = Σ_{k=0}^∞ C^k.

Furthermore:

    A⁻¹ Σ_{k=0}^∞ C^k = A⁻¹(I − C)⁻¹
                      = A⁻¹(AA⁻¹ − BA⁻¹)⁻¹
                      = ((AA⁻¹ − BA⁻¹)A)⁻¹
                      = ((A − B)A⁻¹A)⁻¹    (1.6)
                      = (A − B)⁻¹.         (1.9)
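For matrices, formula (2.10) can be checked numerically; a minimal sketch (the concrete matrices are illustrative only):

```python
import numpy as np

def neumann_inverse(A, B, terms=200):
    """(A - B)^{-1} = A^{-1} * sum_k (B A^{-1})^k, valid for ||B A^{-1}|| < 1, cf. (2.10)."""
    Ainv = np.linalg.inv(A)
    C = B @ Ainv
    assert np.linalg.norm(C, 2) < 1, "series requires ||B A^{-1}|| < 1"
    acc = np.zeros_like(A)
    Ck = np.eye(A.shape[0])
    for _ in range(terms):
        acc += Ck          # accumulate sum of C^k = (I - C)^{-1}
        Ck = Ck @ C
    return Ainv @ acc      # A^{-1} (I - C)^{-1} = (A - B)^{-1}

A = np.array([[2.0, 0.0], [1.0, 3.0]])
B = np.array([[0.1, 0.2], [0.0, 0.1]])
approx = neumann_inverse(A, B)
exact = np.linalg.inv(A - B)
print(np.allclose(approx, exact))  # True
```

Since ‖BA⁻¹‖ is small here, the partial sums converge geometrically, which is exactly the mechanism used in the proof.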


Corollary 2.10 If A has an inverse A⁻¹ ∈ B(X) and B ∈ B(X) is such that ‖B‖ < ‖A⁻¹‖⁻¹, then (A − B) is invertible and (2.10) holds. In particular, if A, A⁻¹, B ∈ B(X) and ‖A − B‖ < ‖A⁻¹‖⁻¹, then B is invertible with

    B⁻¹ = A⁻¹ Σ_{k=0}^∞ ((A − B)A⁻¹)^k.    (2.11)

Proof: We have BA⁻¹ ∈ B(X) with ‖BA⁻¹‖ ≤ ‖B‖ ‖A⁻¹‖ < 1. The claim thus follows immediately from the preceding proposition.

Proof (of Lemma 2.8): By our assumption, X and Y are Banach spaces and S, T : X → Y are bounded linear operators. Thus, Rλ(S, T) = (λS − T)⁻¹ is bounded by the open mapping theorem for all λ ∈ ρS(T).

Let λ ∈ ρS(T) and µ ∈ F; then

    ‖λS − T − (µS − T)‖ = |λ − µ| ‖S‖ → 0  (µ → λ),

so we find some r > 0 such that µS − T is invertible for all µ ∈ F with |λ − µ| < r, using Corollary 2.10. So ρS(T) is indeed open and we have, by (2.11),

    (µS − T)⁻¹ = (λS − T)⁻¹ Σ_{k=0}^∞ ((λS − T − µS + T)(λS − T)⁻¹)^k
               = (λS − T)⁻¹ Σ_{k=0}^∞ (λ − µ)^k (S(λS − T)⁻¹)^k,

which is a power series in µ around λ.
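The series at the end of the proof is easy to test numerically for matrices; a hedged sketch (matrices chosen for illustration only):

```python
import numpy as np

def generalized_resolvent(S, T, lam):
    """R_lambda(S, T) = (lam*S - T)^{-1} for matrices S, T."""
    return np.linalg.inv(lam * S - T)

def resolvent_series(S, T, lam, mu, terms=100):
    """Partial sums of (mu*S - T)^{-1} = sum_k (lam - mu)^k R_lam (S R_lam)^k."""
    R = generalized_resolvent(S, T, lam)
    acc = np.zeros_like(R)
    P = R.copy()                      # P = R (S R)^k, starting with k = 0
    for k in range(terms):
        acc += (lam - mu) ** k * P
        P = P @ S @ R
    return acc

S = np.array([[1.0, 0.2], [0.0, 1.0]])
T = np.array([[0.3, 0.0], [0.1, 0.5]])
lam, mu = 2.0, 1.9
approx = resolvent_series(S, T, lam, mu)
exact = generalized_resolvent(S, T, mu)
print(np.allclose(approx, exact))  # True
```

For |λ − µ| small enough (here 0.1) the geometric factor ‖(λ − µ) S Rλ‖ is below 1, so the partial sums converge to the S-resolvent at µ.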

The general strategy of the main result's proof is similar to the one used in the single-operator case: Let α̃(L) = δ̃(Lc) = p < ∞; then we will show:

1. X can be decomposed into ker L^p ⊕ ran L^p using Theorem 2.1.

2. λS − T is bijective on a deleted neighborhood of 0.

3. ker L^p and ran L^p are closed.

4. λ ↦ (λS − T)⁻¹ restricted to ker L^p has a pole of order p at 0, and on the other hand, if we restrict it to ran L^p, it is analytic.

The converse direction uses the identity theorem for Laurent series expansions and algebraic considerations to show that α̃(L) and δ̃(Lc) are finite.

In order to show point 2 of the strategy, we have to show a perturbation result to guarantee surjectivity and injectivity using the finiteness of δ̃(Lc) and α̃(L), respectively. We will be using a standard theorem for that, which requires us to control kernel


and range of the operators which we examine. It is natural to restrict these operators so that they become injective (respectively, surjective) if α̃(L) (respectively δ̃(Lc)) is finite.

The perturbation result will also require the involved normed vector spaces to be complete, which might fail if we compress our operators. Since we are mainly interested in the stability of algebraic properties, we are free to choose norms on our spaces that guarantee their completeness. The following is called Remark 5.1 in [2].

Proposition 2.11 Let (E, ‖·‖E) and (F, ‖·‖F) be Banach spaces, A, B ∈ B(E, F). Set F0 := ran A, E0 := B⁻¹[F0]. Then A0 := A|E0, B0 := B|E0 : E0 → F0 are well-defined operators and there exist norms ‖·‖E0 on E0 and ‖·‖F0 on F0 such that

(i) ‖·‖E0 and ‖·‖F0 are stronger than ‖·‖E and ‖·‖F respectively,

(ii) (E0, ‖·‖E0) and (F0, ‖·‖F0) are complete, and

(iii) A0 and B0 are bounded linear operators from (E0, ‖·‖E0) to (F0, ‖·‖F0).

e−1

so it is closed, and its inverse A : ran A → E/ ker A is also closed. Equip F0 = ran A

with the graph norm

e−1

kykF0 := kykAe−1 = kykF + A y ∀ y ∈ F0

E/ ker A

e−1 , which is stronger than k·k and with respect to which F0 is complete.

induced by A F

Equip E0 = B −1 [F0 ] ⊆ dom B with the norm

e−1

kxkE0 := kxkE + kBxkF0 = kxkE + kBxkF +
A Bx
∀ x ∈ E0 .

E/ ker A

induced by B|E0 : (E0 , k·kE ) → (F0 , k·kF0 ), which is stronger than the graph norm

k·kB induced by B, which in turn is stronger than k·kE . Hence, B0 : (E0 , k·kE0 ) →

(F0 , k·kF0 ) is obviously bounded, and B|E0 : (E0 , k·kE ) → (F0 , k·kF0 ) is closed: let

(xn )n∈N ⊆ E0 satisfy

k·k k·kF

E

xn −→ x, Bxn −→0 y,

so, by the definition of k·kE0 ,

k·kE k·kF

xn −→0 x, B0 xn −→0 y,

(E_0, ‖·‖_{E_0}) is complete. Thus, if we can show that A_0 is closed, it is bounded by the closed graph theorem. But since ‖·‖_{E_0} and ‖·‖_{F_0} are stronger than ‖·‖_E and ‖·‖_F respectively, the same is true for the associated product norms on E × F. Hence A_0 is necessarily closed: if x_n → x in (E_0, ‖·‖_{E_0}) and A_0x_n → y in (F_0, ‖·‖_{F_0}), then also x_n → x and Ax_n → y with respect to ‖·‖_E and ‖·‖_F, so y = Ax = A_0x by the closedness of the bounded operator A.
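In summary (a restatement of (i)–(iii) in the notation of the proof above), the constructed norms compare as follows:

```latex
\|y\|_F \le \|y\|_{F_0} = \|y\|_F + \bigl\|\tilde A^{-1} y\bigr\|_{E/\ker A}
\quad (y \in F_0),
\qquad
\|x\|_E \le \|x\|_B = \|x\|_E + \|Bx\|_F \le \|x\|_{E_0}
\quad (x \in E_0).
```

In particular, the identity maps (F_0, ‖·‖_{F_0}) → (F, ‖·‖_F) and (E_0, ‖·‖_{E_0}) → (E, ‖·‖_E) are continuous.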


2 POLES OF THE GENERALIZED RESOLVENT

Remark: The proof of the proposition shows that it suffices to assume that A is closed.

Now we move on to the aforementioned compressions; this is the content of Lemma 5.2 in [2]. Remember that, for now, S and T are bounded, everywhere-defined linear operators from (X, ‖·‖) to (Y, ‖·‖).

Lemma 2.12 For every m ≥ 0 there exist norms ‖·‖_m on ran L^m ∩ dom T and |||·|||_m on ran L_c^m such that these spaces are complete and T_m and S_m are bounded with respect to those norms.

Proof: For m = 0 there is nothing to do; we set ‖·‖_0 := ‖·‖_X as norm on ran L^0 = X and |||·|||_0 := ‖·‖_Y as norm on ran L_c^0 = Y. The rest follows by induction: Suppose we have proved the claim for some m ≥ 0; then S_m and T_m are bounded operators between the Banach spaces (ran L^m ∩ dom T, ‖·‖_m) and (ran L_c^m, |||·|||_m). Also:

ran L_c^{m+1} = T[ran L^m] = ran T_m,

ran L^{m+1} ∩ dom T = S^{-1}[ran L_c^{m+1}] ∩ dom T = S_m^{-1}[ran L_c^{m+1}].

Now Proposition 2.11, applied to A = T_m and B = S_m, provides norms ‖·‖_{m+1} on ran L^{m+1} ∩ dom T and |||·|||_{m+1} on ran L_c^{m+1} with the required properties.
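For illustration (this instance is not spelled out in [2]), the first step of the induction, m = 0, reads:

```latex
\operatorname{ran} L_c^{1} = T[\operatorname{ran} L^{0}] = T[X] = \operatorname{ran} T = \operatorname{ran} T_0 ,
\qquad
\operatorname{ran} L^{1} \cap \operatorname{dom} T = S_0^{-1}[\operatorname{ran} L_c^{1}] = S^{-1}[\operatorname{ran} T] .
```

Here Proposition 2.11, applied to A = T_0 = T and B = S_0 = S, already produces the norms ‖·‖_1 and |||·|||_1.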

As already mentioned, we will need a theorem from standard perturbation theory for semi-Fredholm operators. It can, for instance, be found in [5], Satz 82.4.

Theorem 2.3 Let E and F be Banach spaces and let A ∈ B(E, F) be semi-Fredholm, that is: n(A) := dim ker A < ∞ (called the nullity of A) and ran A is closed, or d(A) := dim F/ran A < ∞ (called the defect of A; ran A is closed in that case)⁹. Then there exists an ε > 0 such that for all B ∈ B(E, F) with ‖B‖ < ε, we have: A + B is also a semi-Fredholm operator and

n(A + B) ≤ n(A),   d(A + B) ≤ d(A),

as well as

n(A + B) − d(A + B) = n(A) − d(A).

⁹ In the first case, A is called upper semi-Fredholm, and in the second case it is called lower semi-Fredholm.
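A minimal finite-dimensional illustration of the theorem (my example, not taken from [5]): on E = F = F², consider

```latex
A = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix},
\qquad
B_\mu = \begin{pmatrix} 0 & 0 \\ 0 & \mu \end{pmatrix},
\qquad 0 < |\mu| < \varepsilon .
```

Here n(A) = d(A) = 1, while A + B_μ is invertible, so n(A + B_μ) = d(A + B_μ) = 0: nullity and defect may each drop under a small perturbation, but the index n − d = 0 is preserved, in accordance with the theorem.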


All we further need is a purely algebraic result relating the kernels and ranges of the compressions from Lemma 2.12 to those of S and T. The following is a reformulation of Lemma 1.5 in [2].

Lemma 2.13 Let m ≥ 0.

(i) If λ ≠ 0, then ker(λS_m − T_m) = ker(λS − T).

(ii) If m ≥ 1 and S[dom L^m] + ran L_c^m = Y, and if λ ≠ 0, then

ran L_c^m / ran(λS_m − T_m) ≅ Y / ran(λS − T).

Proof:

(i) Let λ ≠ 0. For m = 0 the equality is trivial, so let m ≥ 1. Let x ∈ ker(λS − T), that is, Tx = λSx; this means (x, λx) ∈ L. Since λ ≠ 0, also (λ^{-1}x, x) ∈ L. A trivial induction shows (λ^{-m}x, x) ∈ L^m, so x ∈ ran L^m. Hence ker(λS − T) ⊆ ran L^m ∩ dom T, and since λS_m − T_m is the restriction of λS − T to this space, the equality follows.

(ii) Firstly, we show

ran(λS − T) ∩ ran L_c^m = ran(λS_m − T_m). (2.12)

“⊇” is obvious. Concerning “⊆”: Suppose x ∈ dom T and (λS − T)x ∈ ran L_c^m = T[ran L^{m-1}]. Then there exists some u ∈ ran L^{m-1} ∩ dom T such that λSx − Tx = Tu, so

(λ^{-1}(x + u), x) ∈ L = L^1,

in particular x ∈ ran L^1. Also: v := λ^{-1}(x + u) ∈ dom T and

(λS − T)v = S(x + u) − λ^{-1}T(x + u) = Su ∈ S[ran L^{m-1}] ⊆ ran L_c^{m-1} + ran L_c^m = ran L_c^{m-1}.

So we have

(λS − T)v ∈ ran L_c^{m-1}.

We can apply the same procedure as above to v instead of x, and thus a finite induction shows x ∈ ran L^m, so (2.12) holds and

ran L_c^m / ran(λS_m − T_m) = ran L_c^m / (ran(λS − T) ∩ ran L_c^m) ≅ (ran L_c^m + ran(λS − T)) / ran(λS − T)

by (1.1).


It remains to show Y = ran L_c^m + ran(λS − T); since Y = S[dom L^m] + ran L_c^m by assumption, it suffices to prove S[dom L^m] ⊆ ran L_c^m + ran(λS − T). To see this, let x ∈ dom L^m ∩ dom S. Then there exists an L-chain (x_0, ..., x_m) with x_0 = x.

We claim:

Sx = Σ_{i=0}^{k} λ^{-(i+1)}(λS − T)x_i + λ^{-(k+1)}Tx_k   ∀ 0 ≤ k ≤ m − 1. (2.14)

For k = 0 this reads Sx = λ^{-1}(λS − T)x + λ^{-1}Tx, which is clear. For the induction step, suppose (2.14) holds for some k ≤ m − 2. Using Tx_k = Sx_{k+1} (see (2.13)), we get

Sx = Σ_{i=0}^{k} λ^{-(i+1)}(λS − T)x_i + λ^{-(k+1)}Tx_k = Σ_{i=0}^{k} λ^{-(i+1)}(λS − T)x_i + λ^{-(k+1)}Sx_{k+1}.

Replacing x by x_{k+1} in the k = 0 case of (2.14) and substituting the result for Sx_{k+1} yields the desired induction step. Taking k = m − 1 in (2.14) finally gives Sx ∈ ran(λS − T) + T[ran L^{m-1}] = ran(λS − T) + ran L_c^m, which completes the proof of (ii).

Finally, we are ready to prove two perturbation results (Proposition 5.3, Corollary 5.4 and Proposition 5.5 in [2]) for S and T, using the better-behaved operators S_m and T_m introduced before.

Proposition 2.14 Suppose δ̃(L_c) =: p < ∞. Then there exists an ε > 0 such that for all 0 < |λ| < ε, λS − T is surjective and n(λS − T) = dim(ker T ∩ ran L^p).

Proof: First, suppose p = 0. Then ran T = ran L_c^1 = ran L_c^0 = Y, i.e. T is surjective and d(T) = 0. And since it is also bounded, T is (lower) semi-Fredholm. By Theorem 2.3, there exists an ε > 0 such that if |λ| < ε, λS − T is also semi-Fredholm, since S is bounded. Also, the theorem gives

d(λS − T) ≤ d(T) = 0

and

n(λS − T) = n(λS − T) − d(λS − T) = n(T) − d(T) = n(T).


Now, suppose p > 0. Let S_p and T_p be as in Lemma 2.12. Since δ(L_c) ≤ δ̃(L_c) = p, we have

ran T_p = T[ran L^p] = ran L_c^{p+1} = ran L_c^p,

so T_p is surjective. Again, S_p is bounded, so Theorem 2.3 guarantees the existence of some ε > 0 such that for all |λ| < ε, λS_p − T_p is also semi-Fredholm, surjective and satisfies

dim ker(λS_p − T_p) = dim ker T_p = dim(ker T ∩ ran L^p).

By Lemma 2.13 (i), we have for λ ≠ 0:

ker(λS − T) = ker(λS_p − T_p),   hence n(λS − T) = dim(ker T ∩ ran L^p).

We can also use Lemma 2.13 (ii), since Y = S[dom L^p] + ran L_c^p by Corollary 2.4. This way we get

Y / ran(λS − T) ≅ ran L_c^p / ran(λS_p − T_p) ≅ {0},

since ran(λS_p − T_p) = ran L_c^p. So λS − T is indeed surjective.

Corollary 2.15 Suppose δ̃(L_c) =: p < ∞, and let ε > 0 be as in the previous proposition.

(i) If α̃(L) < ∞, then λS − T is bijective for 0 < |λ| < ε.

(ii) If α̃(L) = ∞, then λS − T is surjective, but not injective, for 0 < |λ| < ε.

Proof:

(i) If α̃(L) < ∞, by Proposition 2.7 it equals p, so we have

ker T ∩ ran L^p = ker L ∩ ran L^p = {0},

which in view of Proposition 2.14 yields for all 0 < |λ| < ε:

n(λS − T) = dim(ker T ∩ ran L^p) = 0.

Together with the surjectivity from Proposition 2.14, λS − T is bijective.

(ii) Follows from the same arguments.


Theorem 2.4 Let S and T be bounded, everywhere-defined linear operators. Let p ≥ 1; then:

α̃(L) = p = δ̃(L_c) if and only if R•(S, T) has a pole of order p at 0.

This is Theorem 5.6 in [2], except that their assumptions on S and T are less strict.
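As a sanity check (this specialization is my own remark, not part of [2]): for S = I the theorem recovers the classical description of poles of the ordinary resolvent, since then

```latex
L = S^{-1}T = T, \qquad L_c = T S^{-1} = T, \qquad
R_\lambda(I, T) = (\lambda I - T)^{-1} .
```

In this case α̃(L) and δ̃(L_c) reduce to the corresponding quantities of the single operator T.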

Proof: Suppose first that α̃(L) = p = δ̃(L_c). By Theorem 2.1, we know

X = ker L^p ⊕ ran L^p,   Y = ker L_c^p ⊕ ran L_c^p. (2.16)

These decompositions are also topological: By Corollary 2.15, there exists some ε > 0 such that λS − T is bijective for all 0 < |λ| < ε. Fix such a λ and let

A := (λS − T)^{-1}T ∈ B(X),   B := T(λS − T)^{-1} ∈ B(Y).

A computation with the corresponding linear relations shows, for all m ≥ 0,

ker L_c^m = (B^{-1})^m[ker L_c^0] = ker B^m,   ran L_c^m = B^m[ran L_c^0] = B^m[Y] = ran B^m,

and analogously ker L^m = ker A^m and ran L^m = ran A^m. Since A and B are bounded, ker L^m and ker L_c^m are closed, and because of our decomposition (2.16), ran L^p and ran L_c^p must also be closed, which follows from a well-known result, see for instance [5], Satz 55.3.

By Theorem 2.2, we can decompose S and T with respect to

X = ker L^p ⊕ ran L^p,   Y = ker L_c^p ⊕ ran L_c^p

as S = S_1 ⊕ S_2 and T = T_1 ⊕ T_2, where S_1, T_1 : ker L^p → ker L_c^p and S_2, T_2 : ran L^p → ran L_c^p. We know that T_2 is bijective, so arguments similar to those in the proof of Lemma 2.8 show that λ ↦ (λS_2 − T_2)^{-1} is an analytic function on a neighborhood B_ε(0) of 0. We also know that S_1 is bijective and that L_{ker L^p} = S_1^{-1}T_1 is a nilpotent operator with index p. The latter means that ρ(L_{ker L^p}) = F \ {0}, where ρ denotes the usual resolvent set. Thus:

λS_1 − T_1 = S_1(λI − S_1^{-1}T_1)


is bijective for λ ≠ 0, and for those λ we have the following series expansion:

(λS_1 − T_1)^{-1} = λ^{-1}(I − λ^{-1}L_{ker L^p})^{-1}S_1^{-1}

= λ^{-1}(Σ_{k=0}^{∞} λ^{-k}L_{ker L^p}^k)S_1^{-1}

= Σ_{k=0}^{p-1} λ^{-(k+1)}L_{ker L^p}^k S_1^{-1}

= Σ_{k=-p}^{-1} λ^k L_{ker L^p}^{-(k+1)} S_1^{-1}.

That shows: λ ↦ (λS_1 − T_1)^{-1} has a pole of order p at 0, with leading coefficient L_{ker L^p}^{p-1}S_1^{-1} ≠ 0. It is easy to verify that R_λ(S, T) = (λS_1 − T_1)^{-1} ⊕ (λS_2 − T_2)^{-1} with respect to the decompositions (2.16); since the second summand is analytic near 0, R•(S, T) has a pole of order p at 0.
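For instance (an illustration, not contained in [2]), for p = 2 the expansion above reduces to

```latex
(\lambda S_1 - T_1)^{-1}
= \lambda^{-1} S_1^{-1} + \lambda^{-2} L_{\ker L^p} S_1^{-1} ,
```

and the coefficient of λ^{-2} is nonzero precisely because L_{ker L^p} has nilpotency index 2.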

Now conversely assume that R•(S, T) has a pole of order p at 0, so we have a series expansion of the form

R_λ(S, T) = Σ_{k=-p}^{∞} λ^k C_k

with coefficients C_k ∈ B(Y, X) and C_{-p} ≠ 0. Let y ∈ Y and set x_k := C_k y for all k ≥ −p. Then

y = (λS − T)R_λ(S, T)y = (λS − T) Σ_{k=-p}^{∞} λ^k x_k

= Σ_{k=-p}^{∞} λ^{k+1} Sx_k − Σ_{k=-p}^{∞} λ^k Tx_k

= Σ_{k=-p+1}^{∞} λ^k Sx_{k-1} − Σ_{k=-p}^{∞} λ^k Tx_k

= −λ^{-p} Tx_{-p} + Σ_{k=-p+1}^{∞} λ^k (Sx_{k-1} − Tx_k).

Comparing coefficients, we obtain

Tx_{-p} = 0, (2.17)

Sx_{k-1} − Tx_k = 0   ∀ −p + 1 ≤ k ≤ −1 or 1 ≤ k, (2.18)

Sx_{-1} − Tx_0 = y. (2.19)


From (2.18) we get (x_k, x_{k-1}) ∈ L for all −p + 1 ≤ k ≤ −1, and from (2.17) we get (x_{-p}, 0) ∈ L. That means (x_{-1}, ..., x_{-p}, 0) ∈ Ch(L), hence x_{-1} ∈ ker L^p and Sx_{-1} ∈ S[ker L^p] = ker L_c^p. But then, by (2.19),

y = Sx_{-1} − Tx_0 ∈ ker L_c^p + ran T = ker L_c^p + ran L_c.

This proves Y = ker L_c^p + ran L_c, so δ̃(L_c) ≤ p < ∞. By Corollary 2.15 and our assumption that R•(S, T) is defined on a punctured neighborhood of 0, we get α̃(L) < ∞. Using Proposition 2.7, we have q := α̃(L) = δ̃(L_c) ≤ p. Suppose, by contradiction, that q < p; then, by the first part of the proof, R•(S, T) would have a pole of order q < p at 0, which contradicts C_{-p} ≠ 0. So indeed, we have p = α̃(L) = δ̃(L_c).

Following the remark in [2], page 159, in the following we assume S and T to be (not necessarily everywhere-defined) linear operators in X to Y. Furthermore, T shall be closed and S shall be T-bounded. The latter means that dom T ⊆ dom S and that S is bounded with respect to the graph norm induced by T; that is, there exists some C > 0 such that

‖Sx‖ ≤ C(‖x‖ + ‖Tx‖)   ∀ x ∈ dom T.

A priori, it is not clear whether R_λ(S, T) is bounded, and also, dom T need not be complete. But we can remedy that by equipping dom T with the graph norm ‖·‖_T induced by T: set

S_0 := S|_{dom T},   T_0 := T,   regarded as operators S_0, T_0 : (dom T, ‖·‖_T) → (Y, ‖·‖).

Note that (dom T, ‖·‖_T) and (Y, ‖·‖) are Banach spaces and that S_0 and T_0 are everywhere-defined, bounded linear operators.
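As an aside (a standard observation, not specific to [2]): every bounded, everywhere-defined operator S ∈ B(X, Y) is T-bounded for any T, with C = ‖S‖, since

```latex
\|Sx\| \le \|S\|\,\|x\| \le \|S\| \bigl( \|x\| + \|Tx\| \bigr)
\qquad \forall\, x \in \operatorname{dom} T ,
```

so the present setting indeed contains the everywhere-defined bounded case treated above.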

It is clear that ρ_S(T) coincides with ρ_{S_0}(T_0), and that R_λ(S, T) has the same graph as R_λ(S_0, T_0) for all λ ∈ ρ_S(T).

Lemma 2.16 The function ρ_S(T) ∋ λ ↦ R_λ(S, T) ∈ B(Y, X) is analytic on the open set ρ_S(T). Furthermore, it has a pole of order p at λ_0 ∈ F if and only if the same holds for ρ_{S_0}(T_0) ∋ λ ↦ R_λ(S_0, T_0) ∈ B(Y, dom T).

Proof: The canonical inclusion map J : (dom T, ‖·‖_T) → (X, ‖·‖) is obviously bounded, so for λ ∈ ρ_S(T) we get R_λ(S, T) = J ∘ R_λ(S_0, T_0) ∈ B(Y, X). We immediately see that R•(S, T) is analytic, and if R•(S_0, T_0) has a pole of order p at λ_0 ∈ F, then R•(S, T)


has a pole of order p at λ_0. Conversely, let R•(S, T) have a pole of order p at λ_0, that is: there are Q_k ∈ B(Y, X) such that

R_λ(S, T) = Σ_{k=-p}^{∞} (λ_0 − λ)^k Q_k

for all λ ∈ ρ_S(T) close enough to λ_0. But we also have operators P_k ∈ B(Y, dom T) such that

R_λ(S_0, T_0) = Σ_{k=-∞}^{∞} (λ_0 − λ)^k P_k

for the λ above, since ρ_{S_0}(T_0) = ρ_S(T). Because of R_λ(S, T) = JR_λ(S_0, T_0), we have JP_k = Q_k for all k ∈ ℤ; since J is injective, this forces P_k = 0 for k < −p and P_{-p} ≠ 0 (as Q_{-p} ≠ 0), so R•(S_0, T_0) has a pole of order p at λ_0.

Finally, we only have to show that the algebraic quantities used in the main theorem are the same for S, T and for S_0, T_0.

Lemma 2.17 α̃(L) = α̃(L_0) and δ̃(L_c) = δ̃((L_0)_c).

Proof:

• It is clear that ker L_0 = ker L. Also, ran L_0^m = ran L^m ∩ dom T for all m ≥ 1: for m = 1 this is clear, so suppose ran L_0^m = ran L^m ∩ dom T for some m ≥ 1. Then ran L_0^{m+1} ⊆ ran L^{m+1} ∩ dom T is trivial; for the converse inclusion, we let y ∈ ran L^{m+1} ∩ dom T, so there exists some x ∈ ran L^m such that (x, y) ∈ L. In particular, x ∈ dom L ⊆ dom T, so (x, y) ∈ L_0 and x ∈ ran L^m ∩ dom T = ran L_0^m by the induction assumption, which yields y ∈ ran L_0^{m+1}. This gives ker L ∩ ran L^m = ker L_0 ∩ ran L^m ∩ dom T = ker L_0 ∩ ran L_0^m for all m ≥ 0, and thus α̃(L) = α̃(L_0).

• By Proposition 1.42, we know that L_c depends only on the behavior of S and T on dom S ∩ dom T = dom T. Thus (L_0)_c = L_c, and in particular δ̃((L_0)_c) = δ̃(L_c).

Theorem 2.5 Let S and T be linear operators in X to Y such that T is closed and S is T-bounded. Let p ≥ 1; then:

α̃(L) = p = δ̃(L_c) if and only if R•(S, T) has a pole of order p at 0.


Proof: Since S_0 and T_0 are bounded linear operators between Banach spaces, Theorem 2.4 shows that α̃(L_0) = p = δ̃((L_0)_c) holds if and only if R•(S_0, T_0) has a pole of order p at 0, which, by Lemma 2.16, is necessary and sufficient for R•(S, T) to have a pole of order p at 0. By the preceding lemma, α̃(L_0) = p = δ̃((L_0)_c) is equivalent to α̃(L) = p = δ̃(L_c), which proves the claim.


References

[1] Richard Arens. Operational calculus of linear relations. Pacific J. Math., 11(1):9–23, 1961.

[2] Harm Bart and David C. Lay. Poles of a generalised resolvent operator. Proceed-

ings of the Royal Irish Academy. Section A: Mathematical and Physical Sciences,

74:147–168, 1974.

[3] Ronald Cross. Multivalued Linear Operators. Marcel Dekker, Inc., 2nd edition, 1998.

[4] Michiel Hazewinkel, editor. Handbook of Algebra, volume 1. North Holland, 1996.

[5] Harro Heuser. Funktionalanalysis. Teubner, 4th edition, 2007.

[6] Adrian Sandovici, Henk de Snoo, and Henrik Winkler. Ascent, descent, nullity,

defect, and related notions for linear relations in linear spaces. Linear Algebra and

its Applications, 423(2-3):456–497, 2007.

[7] John von Neumann. Über adjungierte Funktionaloperatoren. The Annals of Mathematics, 33(2):294–310, 1932.

[8] John von Neumann. Functional Operators, Volume 2: The Geometry of Orthogonal

Spaces. Princeton University Press, 1950.
