
An Instance of the Lehman Game

Benjamin CHEN
May 28, 2010

Abstract
We present a simple instance of the more general Lehman Game, played over a system of linear equations. This allows for a simple treatment of the game that avoids reference to matroid theory. This note is self-contained and should be accessible to all audiences.

1 Rules of the Game


We consider a system of linear equations

Ax = b

where A is an m × n matrix, x is a column vector with n entries and b is a column vector with m entries, m, n > 1:
\[
A = \begin{pmatrix} a_{1,1} & \cdots & a_{1,n} \\ \vdots & \ddots & \vdots \\ a_{m,1} & \cdots & a_{m,n} \end{pmatrix}
\]
\[
x = \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}; \qquad b = \begin{pmatrix} b_1 \\ \vdots \\ b_m \end{pmatrix}
\]
This is the matrix form of the following system of linear equations:
\[
\begin{aligned}
\sum_{k=1}^{n} a_{1,k} x_k &= b_1 \\
&\ \vdots \\
\sum_{k=1}^{n} a_{m,k} x_k &= b_m .
\end{aligned}
\]

We can assume all matrix entries to be in R, although the results obtained can
be generalised to a system of linear equations over any field. Cut and Short
take turns to play, with Cut going first.

On his turn, Cut chooses a column of A to delete. The corresponding entry of x is set to 0; for instance, if the j-th column of A is marked for deletion, the game continues with a new system of linear equations
\[
A' x' = b
\]
where A' is the matrix A deprived of the column j and x' is the vector x deprived of the entry x_j (equivalently, x_j is fixed at 0); the right-hand side b is unchanged.

On his turn, Short chooses a column of A to secure. We then carry out a Gaussian elimination procedure using the selected column as a pivot. For instance, suppose the j-th column of A has been selected. We then consider the augmented matrix:
\[
\left(
\begin{array}{ccccc|c}
a_{1,1} & \cdots & a_{1,j} & \cdots & a_{1,n} & b_1 \\
\vdots & & \vdots & & \vdots & \vdots \\
a_{m,1} & \cdots & a_{m,j} & \cdots & a_{m,n} & b_m
\end{array}
\right)
\]
and we add multiples of the first row to all subsequent rows (assuming a_{1,j} ≠ 0; otherwise we first interchange rows so that the pivot entry is nonzero) so as to achieve
\[
\left(
\begin{array}{ccccc|c}
a_{1,1} & \cdots & a_{1,j} & \cdots & a_{1,n} & b_1 \\
a'_{2,1} & \cdots & 0 & \cdots & a'_{2,n} & b'_2 \\
\vdots & & \vdots & & \vdots & \vdots \\
a'_{m,1} & \cdots & 0 & \cdots & a'_{m,n} & b'_m
\end{array}
\right).
\]

The game then continues with the new system of linear equations
\[
A' x' = b'
\]
where
\[
A' = \begin{pmatrix}
a'_{2,1} & \cdots & a'_{2,j-1} & a'_{2,j+1} & \cdots & a'_{2,n} \\
\vdots & & \vdots & \vdots & & \vdots \\
a'_{m,1} & \cdots & a'_{m,j-1} & a'_{m,j+1} & \cdots & a'_{m,n}
\end{pmatrix}
\]
and
\[
x' = \begin{pmatrix} x_1 \\ \vdots \\ x_{j-1} \\ x_{j+1} \\ \vdots \\ x_n \end{pmatrix};
\qquad
b' = \begin{pmatrix} b'_2 \\ \vdots \\ b'_m \end{pmatrix}.
\]
We call A' the contraction of A by the column j. If there exists a solution to A'x' = b', then we also have a solution to Ax = b: we obtain the value of x_j by plugging the corresponding values of x_1, ..., x_{j-1}, x_{j+1}, ..., x_n into
\[
\sum_{k=1}^{n} a_{1,k} x_k = b_1 .
\]
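As a small worked example (ours, purely for illustration), take m = 2, n = 3 and
\[
A = \begin{pmatrix} 1 & 0 & 2 \\ 0 & 1 & 3 \end{pmatrix}, \qquad b = \begin{pmatrix} 5 \\ 7 \end{pmatrix}.
\]
If Cut deletes the third column, the game continues with the first two columns and x_3 fixed at 0. If instead Short secures the first column, the entries of that column below the first row are already 0, so no elimination is needed; the contraction by column 1 is the single equation x_2 + 3x_3 = 7. Any solution of the contraction, say x_2 = 7 and x_3 = 0, then yields x_1 = 5 − 2 · 0 = 5 from the first equation x_1 + 2x_3 = 5.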

Each column may only be tagged once by either player. We consider a full turn to be complete when both players have each tagged a column. Cut wins if he deletes enough columns so that there is no solution to Ax = b using only the secured columns. Short wins if he obtains a solution to Ax = b using only the columns he secures.
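Computationally, the two moves and the winning condition admit a direct description. The following sketch (our own illustration, not part of the note; it assumes numpy, floating-point arithmetic and a nonzero pivot entry in the secured column) implements Cut's deletion, Short's contraction, and the end-of-game test that b lies in the span of the secured columns of the original system.

    import numpy as np

    def delete_column(A, j):
        # Cut's move: column j (0-indexed) is removed; the variable x_j is fixed at 0.
        return np.delete(A, j, axis=1)

    def contract(A, b, j):
        # Short's move: pivot on column j, eliminate it from the other rows,
        # then drop the pivot row and the pivot column (the contraction of the text).
        A = A.astype(float)
        b = b.astype(float)
        p = np.flatnonzero(A[:, j])[0]   # a row with a nonzero pivot entry (assumed to exist)
        if p != 0:                        # bring the pivot row to the top
            A[[0, p]] = A[[p, 0]]
            b[[0, p]] = b[[p, 0]]
        for i in range(1, A.shape[0]):    # zero out column j below the pivot row
            factor = A[i, j] / A[0, j]
            A[i] -= factor * A[0]
            b[i] -= factor * b[0]
        return np.delete(A[1:], j, axis=1), b[1:]

    def short_wins(A, b, secured):
        # Short wins iff b lies in the span of the columns of the original A that he
        # secured, i.e. appending b to those columns does not increase the rank.
        S = A[:, sorted(secured)]
        return np.linalg.matrix_rank(np.column_stack([S, b])) == np.linalg.matrix_rank(S)

For the 2 × 3 example above, short_wins(A, b, [0, 1]) returns True, while short_wins(A, b, [1]) returns False (column indices are 0-based).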

2 Who wins?
Definition 1: A set of columns, R, is said to span another set of columns, T, if every column t ∈ T can be written as a linear combination of elements of R, i.e. for every t ∈ T there exist scalars α_1, ..., α_u and elements r_1, ..., r_u ∈ R such that
\[
t = \sum_{n=1}^{u} \alpha_n r_n .
\]

Definition 2: A set of columns is said to be linearly independent if for any subset {v_1, ..., v_u} of it, there do not exist scalars α_1, ..., α_u, not all zero, such that
\[
\sum_{k=1}^{u} \alpha_k v_k = 0 .
\]
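For example, in R^2 the set R = {(1, 0)^T, (0, 1)^T} spans the set T = {(2, 0)^T, (1, 1)^T}, since (2, 0)^T = 2 · (1, 0)^T and (1, 1)^T = (1, 0)^T + (0, 1)^T; both R and T are linearly independent.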

Theorem 1: Short always wins if there are two disjoint linearly independent
subsets R and T of the columns of A, both of which span b and such that R and
T span each other (“R and T are cospanning”).
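For instance (an illustration of ours), with
\[
A = \begin{pmatrix} 1 & 0 & 2 & 0 \\ 0 & 1 & 0 & 3 \end{pmatrix}, \qquad b = \begin{pmatrix} 1 \\ 1 \end{pmatrix},
\]
the subsets R = {(1, 0)^T, (0, 1)^T} (columns 1 and 2) and T = {(2, 0)^T, (0, 3)^T} (columns 3 and 4) are disjoint, linearly independent, cospanning, and both span b, so Short wins.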

Proof: Suppose that we have two such subsets R and T. We can assume without loss of generality that Cut deletes an element r ∈ R. Then Short secures an element t ∈ T such that R \ r and T \ t span each other. How do we know that such a t exists? By the fact that R and T are cospanning, we know that there is at least one subset T' = {t_1, ..., t_u} ⊆ T with corresponding scalars α_1, ..., α_u, α_i ∈ R, α_i ≠ 0 ∀i, such that
\[
r = \sum_{i=1}^{u} \alpha_i t_i .
\]
We claim that T' is a singleton. Suppose T' were not a singleton, so that
\[
r = \sum_{i=1}^{u} \alpha_i t_i
\]
with u > 1. Since R spans T, each t_i can be written as a linear combination of elements of R,
\[
t_i = \sum_{j=1}^{v} \beta_{i,j} r_{i,j} ,
\]
from which we obtain
\[
r = \sum_{i=1}^{u} \alpha_i \left( \sum_{j=1}^{v} \beta_{i,j} r_{i,j} \right),
\]
which implies that r can be written as a linear combination of at least two elements of R. However, this contradicts the linear independence of R.

T' must also be the only such subset. Otherwise we would have r = α_1 t_1 and r = α_2 t_2, which implies t_1 = (α_2 / α_1) t_2, contradicting the linear independence of T.

Thus, each time Cut deletes an element r ∈ R (resp. t ∈ T), Short can always secure an element t ∈ T (resp. r ∈ R) such that R \ r and T \ t are cospanning. Furthermore, we have that t = αr for some α ∈ R. Since R and T both span b, what Short will have tagged at the end of the game is a set of elements that spans b, and this implies that there is a solution to Ax = b using only the columns tagged by Short. Short wins! □
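In the 2 × 4 example given after Theorem 1, if Cut deletes the column (1, 0)^T ∈ R, its unique representation over T is (1, 0)^T = ½ · (2, 0)^T, so Short answers by securing (2, 0)^T; the remaining pair {(0, 1)^T} and {(0, 3)^T} is again cospanning.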

Theorem 2: Cut always wins if there is a contraction of A by a subset S of the columns (obtained by contracting successively by each column of S) so that in the resulting system A'x' = b', the columns can be partitioned into two linearly independent sets J and K such that at least one of them does not span b'.
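For instance (an illustration of ours), take A to be the 2 × 2 identity matrix and b = (1, 1)^T. The empty contraction (S = ∅) admits the partition J = {(1, 0)^T}, K = {(0, 1)^T}; both parts are linearly independent and neither spans b, so Cut wins: whichever column he deletes, the single column left to Short does not give a solution.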

Proof: We have
\[
A' = \begin{pmatrix}
J_{1,1} & \cdots & J_{1,k} & K_{1,1} & \cdots & K_{1,l} \\
\vdots & & \vdots & \vdots & & \vdots \\
J_{n,1} & \cdots & J_{n,k} & K_{n,1} & \cdots & K_{n,l}
\end{pmatrix}
\]
and
\[
b' = \begin{pmatrix} b'_1 \\ \vdots \\ b'_n \end{pmatrix}.
\]
Suppose there exists such a contraction. Assume without loss of generality that J does not span b'. On his first move, Cut deletes a column k ∈ K such that K \ k does not span b'. Is this possible? Suppose not. Then for every element k ∈ K, K \ k spans b'. This implies that there exist at least two distinct representations of b', i.e.
\[
b' = \sum_{i=1}^{u} \alpha_i k_i
\]
\[
b' = \sum_{i=u+1}^{v} \alpha_i k_i
\]
with k_i ∈ K ∀i and α_i ∈ R, α_i ≠ 0 ∀i. This, however, contradicts the linear independence of K since
\[
\sum_{i=1}^{u} \alpha_i k_i + \sum_{i=u+1}^{v} (-\alpha_i) k_i = 0 .
\]

We can assume without loss of generality that Short, on his turn, secures a column j ∈ J (we exclude the column that has already been deleted by Cut). We then carry out the Gaussian elimination procedure using column j as a pivot:
\[
\left(
\begin{array}{cccccccc|c}
J_{1,1} & \cdots & J_{1,j} & \cdots & J_{1,k} & K_{1,1} & \cdots & K_{1,l} & b'_1 \\
\vdots & & \vdots & & \vdots & \vdots & & \vdots & \vdots \\
J''_{n,1} & \cdots & 0 & \cdots & J''_{n,k} & K''_{n,1} & \cdots & K''_{n,l} & b''_n
\end{array}
\right)
\]
where, for all i ≠ 1,
\[
J''_{i,s} = -\frac{J_{i,j}}{J_{1,j}} J_{1,s} + J_{i,s}, \qquad
K''_{i,s} = -\frac{J_{i,j}}{J_{1,j}} K_{1,s} + K_{i,s}, \qquad
b''_i = -\frac{J_{i,j}}{J_{1,j}} b'_1 + b'_i .
\]
The game continues with
\[
A'' = \begin{pmatrix}
J''_{2,1} & \cdots & J''_{2,j-1} & J''_{2,j+1} & \cdots & J''_{2,k} & K''_{2,1} & \cdots & K''_{2,l} \\
\vdots & & \vdots & \vdots & & \vdots & \vdots & & \vdots \\
J''_{n,1} & \cdots & J''_{n,j-1} & J''_{n,j+1} & \cdots & J''_{n,k} & K''_{n,1} & \cdots & K''_{n,l}
\end{pmatrix}
= \begin{pmatrix} J'' & K'' \end{pmatrix}
\]
and
\[
x'' = \begin{pmatrix} x'_1 \\ \vdots \\ x'_{j-1} \\ x'_{j+1} \\ \vdots \\ x'_n \end{pmatrix};
\qquad
b'' = \begin{pmatrix} b''_2 \\ \vdots \\ b''_n \end{pmatrix}.
\]
We note that both J'' and K'' are linearly independent sets of columns (the linear independence of a set of columns is unaffected by elementary row operations; the reader unfamiliar with this proposition will find a proof of the result in the appendix). Then Cut deletes a column k'' ∈ K'' such that neither J'' nor K'' \ k'' spans b''. Why is this possible? Firstly, J'' cannot span b''. If it did, then there would be a solution to A''x'' = b'' using the columns in J'', but this would imply the existence of a solution to A'x' = b' using the columns in J (Gaussian elimination was performed using a column of J as the pivot). This would mean, however, that b' is spanned by J, a contradiction.

It therefore remains to show that Cut can always delete an element k'' of K'' such that K'' \ k'' does not span b''. If K'' does not span b'', then deleting any element of K'' will do the trick. Suppose, therefore, that K'' does span b''. Then the claim follows from the fact that K'' is linearly independent: if there were no element k'' ∈ K'' we could delete such that K'' \ k'' did not span b'', then b'' would have two distinct representations
\[
b'' = \sum_{i=1}^{u} \alpha_i k''_i
\]
\[
b'' = \sum_{i=u+1}^{v} \alpha_i k''_i
\]
where k''_i ∈ K'' ∀i and α_i ∈ R, α_i ≠ 0 ∀i. This would imply, however, that
\[
\sum_{i=1}^{u} \alpha_i k''_i + \sum_{i=u+1}^{v} (-\alpha_i) k''_i = 0 ,
\]

contradicting the linear independence of K''. We thus conclude that Cut can always ensure the existence of a partition of A'' into two linearly independent sets such that neither set spans b''. At the end of the game, the columns of A' tagged by Short will not give a solution to A'x' = b', since he is always tagging from a set that does not span b^{(n)}, where
\[
A^{(n)} x^{(n)} = b^{(n)}
\]
is the system of linear equations obtained after n complete turns. Thus, even if all the columns in S were conceded to Short, he cannot achieve victory, because he cannot gain a solution to A'x' = b' in the remaining moves. □
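In the 2 × 2 identity example given after Theorem 2, the strategy is immediate: S = ∅, J = {(1, 0)^T} and K = {(0, 1)^T}, so Cut simply deletes the single column of K on his first move, and the column of J left for Short does not span b = (1, 1)^T.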

Theorem 3: Either there exist two disjoint linearly independent subsets R and T of the columns of A, both of which span b and such that R and T span each other, or there is a contraction of A by a subset S of the columns so that in the resulting system A'x' = b', the columns can be partitioned into two linearly independent sets J and K such that at least one of them does not span b'. Not both.

Proof: Suppose that there do not exist two disjoint linearly independent subsets R and T of the columns of A, both of which span b and such that R and T span each other. Then either
1. There does not exist a subset of the columns of A which spans b. We pick any two non-empty, disjoint, linearly independent subsets of the columns of A: call them B and C. We then carry out the Gaussian elimination procedure on all columns in A that are not contained in either B or C (let us call this set of columns S). We thus obtain b' from b, B' from B and C' from C. B' and C' are both linearly independent, and neither of them spans b'. The last claim is true for the following reason: if, say, B' were to span b', then there would be a solution to A'x' = b' using the columns in B'. Since B' was obtained from B by performing Gaussian elimination on the columns of S, there would also be a solution to Ax = b using the columns in B ∪ S, i.e. b is spanned by the columns of B ∪ S, a contradiction!

This gives us a contraction of A by S, such that in the resulting system A'x' = b', the columns of A' can be partitioned into two linearly independent sets B' and C' such that neither of them spans b'.
2. The only subset of the columns of A which spans b is the set of all of the columns, i.e. the columns of A form a minimal linearly independent set spanning b. Then we pick any two non-empty, disjoint, linearly independent subsets of the columns of A: neither of them will span b, and we apply the same argument as in Case 1.
3. There exists a nontrivial subset of the columns of A spanning b. We consider a linearly independent (and therefore minimal) subset of the columns of A spanning b: call it B. Consider the set of columns in A but not in B (call this C):

• If the columns of C do not span b, then let D be any linearly independent subset of C. We then carry out the Gaussian elimination procedure on all columns in A that are not contained in either B or D (let us call this set of columns S). We thus obtain b' from b, B' from B and D' from D. B' and D' are both linearly independent and D' does not span b'. In fact, if D' were to span b', then there would be a solution to A'x' = b' using the columns in D'. However, D' being obtained from D by Gaussian elimination on the columns of S, this would imply a solution to Ax = b using the columns in D ∪ S, i.e. b is spanned by the columns of D ∪ S ⊆ C, a contradiction!
• If the columns of C span b, then let D be a linearly independent subset of C that spans b. By our very first supposition, it must be the case that B and D are not cospanning. We then carry out the Gaussian elimination procedure on all columns in A that are not contained in either B or D (let us call this set of columns S*). We thus obtain b* from b, B* from B and D* from D. If either B* or D* does not span b*, we are done.

Suppose, therefore, that both B* and D* span b*. We observe that B* and D* are not cospanning. To see this, we recall that B and D are not cospanning. Without loss of generality, let d ∈ D be such that d is not spanned by the columns in B. Then {d} ∪ B forms a linearly independent set of columns. The linear independence of this set of columns is preserved after Gaussian elimination on the columns in S*. Thus, after Gaussian elimination, we obtain d* from d and B* from B, and d* is not spanned by the columns in B*, which implies that D* and B* are not cospanning. Let us now carry out Gaussian elimination using d* as a pivot, where d* is the element of D* not spanned by B*. We thus obtain b' from b*, D' from D* \ d* and B' from B*. We claim that B' does not span b'. If it did, then

there would be a solution to A'x' = b' using the columns in B', which would imply a solution to A*x* = b* using the columns in B* ∪ {d*}, i.e.
\[
b^* = \sum_{i=1}^{u} \alpha_i B^*_i + \alpha_{u+1} d^*
\]

where α_i ∈ R \ {0} ∀i and B^*_i ∈ B* ∀i. However, B* itself spans b*, i.e.
\[
b^* = \sum_{i=1}^{v} \beta_i B^*_i
\]
where β_i ∈ R \ {0} ∀i and B^*_i ∈ B* ∀i. So we have
\[
d^* = \frac{1}{\alpha_{u+1}} \left( \sum_{i=1}^{v} \beta_i B^*_i - \sum_{i=1}^{u} \alpha_i B^*_i \right),
\]
which contradicts the fact that d* is not spanned by B*.

Cases 1, 2 and 3 exhausting all possibilities, we conclude that if there do not exist two disjoint linearly independent subsets R and T of the columns of A, both of which span b and such that R and T span each other, then there is a contraction of A by a subset S of the columns so that in the resulting system A'x' = b', the columns can be partitioned into two linearly independent sets J and K such that at least one of them does not span b'.

For any matrix A, the game is won by either Cut or Short but not both (it is a finite game of perfect information with no draws, so exactly one of the two players has a winning strategy). This observation, together with Theorems 1 and 2, allows us to conclude the proof of Theorem 3. □

Corollary 3.1: Short always wins if and only if there are two disjoint linearly independent subsets R and T of the columns of A, both of which span b and such that R and T span each other (“R and T are cospanning”).

Corollary 3.2: Cut always wins if and only if there is a contraction of A by a subset S of the columns so that in the resulting system A'x' = b', the columns can be partitioned into two linearly independent sets J and K such that at least one of them does not span b'.
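For small matrices, the criterion of Corollary 3.1 can be checked by brute force. The sketch below (again our own illustration, not part of the note; it uses numpy, floating-point rank computations, and simply enumerates pairs of disjoint column subsets) tests whether two disjoint, linearly independent, cospanning subsets, both spanning b, exist.

    import numpy as np
    from itertools import combinations

    def independent(cols):
        # A set of columns is linearly independent iff its rank equals its size.
        return np.linalg.matrix_rank(cols) == cols.shape[1]

    def spans(cols, vectors):
        # cols spans vectors iff appending them does not increase the rank.
        stacked = np.column_stack([cols, vectors])
        return np.linalg.matrix_rank(stacked) == np.linalg.matrix_rank(cols)

    def short_has_winning_strategy(A, b):
        # Corollary 3.1 by brute force: look for disjoint, linearly independent,
        # cospanning subsets R and T of the columns, both of which span b.
        n = A.shape[1]
        bb = b.reshape(-1, 1)
        for rsize in range(1, n):
            for R in combinations(range(n), rsize):
                rest = [j for j in range(n) if j not in R]
                for tsize in range(1, len(rest) + 1):
                    for T in combinations(rest, tsize):
                        CR, CT = A[:, list(R)], A[:, list(T)]
                        if (independent(CR) and independent(CT)
                                and spans(CR, bb) and spans(CT, bb)
                                and spans(CR, CT) and spans(CT, CR)):
                            return True
        return False

With A and b as in the example after Theorem 1, short_has_winning_strategy(A, b) returns True; with the 2 × 2 identity matrix and b = (1, 1)^T it returns False.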

Appendix:
The row/column space of a matrix A is the vector space generated by taking all possible linear combinations of the rows/columns of the matrix. The row/column rank of a matrix is the size of a largest linearly independent set contained in the row/column space of the matrix, i.e. the dimension of that space. We aim to prove the following result: if a set of columns is linearly independent, then it remains linearly independent after the execution of an elementary row operation. The elementary row operations are:

1. Interchanging two rows


2. Adding a multiple of one row to another row
3. Multiplying any row by a nonzero element.

It should follow immediately from the definition of linear independence that if a set of rows is linearly independent, then it remains linearly independent after the execution of an elementary row operation; in particular, an elementary row operation does not change the row rank of a matrix. It is therefore sufficient to prove

Theorem A: The row and column spaces of a matrix have dimensions that are equal.

Indeed, given Theorem A, we may apply it to the submatrix formed by the selected columns: an elementary row operation preserves the row rank of this submatrix, hence also its column rank, and hence the linear independence of its columns.

To prove Theorem A, it is in turn sufficient to prove the following

Theorem B: row rank A ≤ column rank A

By applying Theorem B to the transpose of A, A^T, we obtain equality, since the row space of A is the column space of A^T and vice versa.

Before proving Theorem B, let us first establish some basic definitions and prop-
erties we are going to use later.

Definition: The kernel of an m × n matrix A is the set
\[
\mathrm{Ker}(A) = \{ x \in \mathbb{R}^n : Ax = 0 \}
\]
where 0 denotes the zero vector with m components.

Definition: Let V be a real vector space. If (v_1, v_2, ..., v_n) is a list of vectors in V, then these vectors form a basis if and only if every vector v ∈ V can be uniquely written as
\[
v = \alpha_1 v_1 + \dots + \alpha_n v_n
\]
where α_i ∈ R ∀i.

Proof of Theorem B: Let A be an m × n matrix, and let the vectors x_1, ..., x_r in R^n be a basis for the row space of A. Then the r vectors Ax_1, Ax_2, ..., Ax_r are in the column space of A and are linearly independent. If Ax_1, ..., Ax_r were not linearly independent, then there would exist α_1, ..., α_r in R, not all zero, such that α_1 Ax_1 + ... + α_r Ax_r = 0. This would imply that A(α_1 x_1 + ... + α_r x_r) = 0, so the vector v = α_1 x_1 + ... + α_r x_r would be in Ker(A). However, v is also in the row space of A, being a linear combination of basis elements. Thus if Av = 0, then it must be that v = 0: indeed, Av = 0 says that v is orthogonal to every row of A, while v is itself a linear combination of the rows of A, so v · v = 0 and hence v = 0. The linear independence of x_1, ..., x_r in turn implies that α_1 = α_2 = ... = α_r = 0, which is a contradiction. We thus have row rank A = r ≤ column rank A. □
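For example, for
\[
A = \begin{pmatrix} 1 & 2 & 3 \\ 2 & 4 & 6 \end{pmatrix},
\]
the row space is spanned by x_1 = (1, 2, 3)^T, so r = 1, and Ax_1 = (14, 28)^T is a nonzero (hence linearly independent) vector of the column space; thus row rank A = 1 ≤ column rank A, and in fact both ranks equal 1.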


