Conditional Probability
Let A, B be two events.
P(A | B) = probability of A given that B has happened.
P(A | B) = P(A and B)/P(B)
This is the proportion of Bs that are also As.
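As a quick check of the formula (the two events below are invented for the example), one can enumerate the 36 outcomes of two dice:

```python
from fractions import Fraction

# All 36 equally likely outcomes of rolling two dice.
outcomes = [(a, b) for a in range(1, 7) for b in range(1, 7)]

# A: the sum is 8.  B: the first die shows at least 4.
A = [o for o in outcomes if o[0] + o[1] == 8]
B = [o for o in outcomes if o[0] >= 4]
A_and_B = [o for o in A if o in B]

p_B = Fraction(len(B), len(outcomes))
p_A_and_B = Fraction(len(A_and_B), len(outcomes))
p_A_given_B = p_A_and_B / p_B          # P(A | B) = P(A and B) / P(B)

# Equivalently, the proportion of B outcomes that are also A outcomes:
assert p_A_given_B == Fraction(len(A_and_B), len(B))
print(p_A_given_B)  # 1/6
```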
Independence
A and B are independent if P(A | B) = P(A), i.e., P(A and B) = P(A) P(B).
Random Variables
A random variable is a random number generator.
For example:
Roll 2 dice, let X = sum of values
Flip 10 coins, let X = # of tails
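Both examples can be sampled directly in a few lines; a minimal sketch (the function names are ours):

```python
import random

random.seed(0)  # reproducible draws

def roll_sum():
    """One draw of X = sum of two dice."""
    return random.randint(1, 6) + random.randint(1, 6)

def tails_in_ten():
    """One draw of X = number of tails in 10 coin flips (1 = tails)."""
    return sum(random.randint(0, 1) for _ in range(10))

sums = [roll_sum() for _ in range(10000)]
print(sum(sums) / len(sums))  # near E[X] = 7
```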
Markov Chains
This is a sequence of random variables X1, X2, X3, ... such that
P(Xn | Xn-1, Xn-2, ..., X1) = P(Xn | Xn-1)
That is, given the immediately preceding variable Xn-1, the rest of the variables X1, ..., Xn-2 add no further information. This is an example of conditional independence.
Note: This does not mean Xn is independent of X1, X2, X3, .... In fact, Xn is usually dependent on all of these.
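The defining property can be checked exactly on a small chain; the 3-state transition matrix below is made up for illustration:

```python
from fractions import Fraction
from itertools import product

# A small 3-state transition matrix (invented for the example).
P = [[Fraction(1, 2), Fraction(1, 4), Fraction(1, 4)],
     [Fraction(1, 3), Fraction(1, 3), Fraction(1, 3)],
     [Fraction(0),    Fraction(1, 2), Fraction(1, 2)]]
start = [Fraction(1, 3)] * 3          # uniform distribution for X1

def path_prob(path):
    """P(X1 = path[0], X2 = path[1], ...) for this chain."""
    p = start[path[0]]
    for a, b in zip(path, path[1:]):
        p *= P[a][b]
    return p

def cond(a, b, j):
    """P(X3 = j | X2 = b, X1 = a), computed from full path probabilities."""
    den = sum(path_prob((a, b, k)) for k in range(3))
    return path_prob((a, b, j)) / den

# Conditioning on (X1, X2) gives the same answer as conditioning on X2 alone.
for a, b, j in product(range(3), repeat=3):
    if P[a][b] == 0:
        continue  # history (a, b) has probability 0; nothing to condition on
    assert cond(a, b, j) == P[b][j]
print("P(X3 | X2, X1) = P(X3 | X2) for every possible history")
```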
Roger Bilisoly 4-08
[Image: U.S. Monopoly board, from http://www.worldofmonopoly.co.uk/history/images/bd-usa.jpg]
Example, Continued
P(9 to 10) = 1 is one possibility.
P(9 to 10) = 1/2, P(9 to 9) = 1/2 is another possibility.
Both of these are used in actual games
Matrix of Probabilities
Number the squares 1 (Start) through 10. Each turn the player advances 1 or 2 squares, each with probability 1/2; from square 9 the player moves to square 10, and square 10 is absorbing. Rows and columns are squares 1 through 10:

P =
 0  1/2 1/2  0   0   0   0   0   0   0
 0   0  1/2 1/2  0   0   0   0   0   0
 0   0   0  1/2 1/2  0   0   0   0   0
 0   0   0   0  1/2 1/2  0   0   0   0
 0   0   0   0   0  1/2 1/2  0   0   0
 0   0   0   0   0   0  1/2 1/2  0   0
 0   0   0   0   0   0   0  1/2 1/2  0
 0   0   0   0   0   0   0   0  1/2 1/2
 0   0   0   0   0   0   0   0   0   1
 0   0   0   0   0   0   0   0   0   1
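For concreteness, the matrix can be entered with exact fractions; a sketch in Python (indices are 0-based, so square i is row i-1):

```python
from fractions import Fraction

N = 10
half = Fraction(1, 2)

# P[i][j] = P(square i+1 -> square j+1) for the 10-square game.
P = [[Fraction(0)] * N for _ in range(N)]
for i in range(8):            # squares 1..8: advance 1 or 2 squares
    P[i][i + 1] = half
    P[i][i + 2] = half
P[8][9] = Fraction(1)         # square 9 always moves to square 10
P[9][9] = Fraction(1)         # square 10 is absorbing

assert all(sum(row) == 1 for row in P)   # every row is a distribution
print(P[0][1], P[7][9], P[8][9])  # 1/2 1/2 1
```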
Transient States:
Mean Number of Visits
Let P = matrix of transition probabilities.
Let PT be the submatrix of P corresponding to
only the transient states.
P =
 0  1/2 1/2  0   0   0   0   0   0   0
 0   0  1/2 1/2  0   0   0   0   0   0
 0   0   0  1/2 1/2  0   0   0   0   0
 0   0   0   0  1/2 1/2  0   0   0   0
 0   0   0   0   0  1/2 1/2  0   0   0
 0   0   0   0   0   0  1/2 1/2  0   0
 0   0   0   0   0   0   0  1/2 1/2  0
 0   0   0   0   0   0   0   0  1/2 1/2
 0   0   0   0   0   0   0   0   0   1
 0   0   0   0   0   0   0   0   0   1

PT =
 0  1/2 1/2  0   0   0   0   0   0
 0   0  1/2 1/2  0   0   0   0   0
 0   0   0  1/2 1/2  0   0   0   0
 0   0   0   0  1/2 1/2  0   0   0
 0   0   0   0   0  1/2 1/2  0   0
 0   0   0   0   0   0  1/2 1/2  0
 0   0   0   0   0   0   0  1/2 1/2
 0   0   0   0   0   0   0   0  1/2
 0   0   0   0   0   0   0   0   0

(PT is the upper-left 9x9 block of P: square 10, the absorbing state, is dropped.)
Mean # of Visits
(Using Mathematica)
S = (I - PT)^-1 gives the mean number of visits to each transient state, starting from any transient state.
On average the player moves 1.5 squares per turn, so the mean number of visits to a square converges to 1/1.5 = 2/3.
S =
1.       0.5      0.75     0.625    0.6875   0.65625  0.671875 0.664063 0.667969
0.       1.       0.5      0.75     0.625    0.6875   0.65625  0.671875 0.664063
0.       0.       1.       0.5      0.75     0.625    0.6875   0.65625  0.671875
0.       0.       0.       1.       0.5      0.75     0.625    0.6875   0.65625
0.       0.       0.       0.       1.       0.5      0.75     0.625    0.6875
0.       0.       0.       0.       0.       1.       0.5      0.75     0.625
0.       0.       0.       0.       0.       0.       1.       0.5      0.75
0.       0.       0.       0.       0.       0.       0.       1.       0.5
0.       0.       0.       0.       0.       0.       0.       0.       1.
The first row gives results starting at the first square. For example, entry (1,1) = 1 means 1 visit on average (the player starts on the first square and must move forward one or two squares, so exactly one visit is guaranteed). Entry (1,2) = 0.5 since there is a 50% chance of moving one square to the right.
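The same computation works outside Mathematica; a pure-Python sketch with exact fractions (the Gauss-Jordan helper is our own):

```python
from fractions import Fraction

n = 9  # transient squares 1..9 of the 10-square game
PT = [[Fraction(0)] * n for _ in range(n)]
for i in range(7):                 # squares 1..7: advance 1 or 2 squares
    PT[i][i + 1] = Fraction(1, 2)
    PT[i][i + 2] = Fraction(1, 2)
PT[7][8] = Fraction(1, 2)          # square 8: the other half exits to square 10
# Row 9 stays all zero: square 9 always exits to the absorbing square 10.

def mat_inverse(A):
    """Invert a square matrix of Fractions by Gauss-Jordan elimination."""
    m = len(A)
    aug = [row[:] + [Fraction(int(i == j)) for j in range(m)]
           for i, row in enumerate(A)]
    for col in range(m):
        pivot = next(r for r in range(col, m) if aug[r][col] != 0)
        aug[col], aug[pivot] = aug[pivot], aug[col]
        aug[col] = [x / aug[col][col] for x in aug[col]]
        for r in range(m):
            if r != col and aug[r][col] != 0:
                f = aug[r][col]
                aug[r] = [x - f * y for x, y in zip(aug[r], aug[col])]
    return [row[m:] for row in aug]

I_minus_PT = [[Fraction(int(i == j)) - PT[i][j] for j in range(n)]
              for i in range(n)]
S = mat_inverse(I_minus_PT)        # S = (I - PT)^-1
print(S[0][0], S[0][1], float(S[0][8]))  # 1 1/2 0.66796875
```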
Gambler's Ruin
Start with $x; win or lose $1 on each play. Stop when $0 or $max is reached.
This is like a board game where winning moves the player one square to the right and losing moves one square to the left.
Now there are two recurrent states: one for $0 and one for $max.
Two questions:
What is the probability of reaching $0 and $max?
What is the mean number of visits to the other states?
Example: $max = 10, with P(Win) = 1/2 and P(Lose) = 1/2 on each play. The transient states are $1 through $9.

PT =
 0  1/2  0   0   0   0   0   0   0
1/2  0  1/2  0   0   0   0   0   0
 0  1/2  0  1/2  0   0   0   0   0
 0   0  1/2  0  1/2  0   0   0   0
 0   0   0  1/2  0  1/2  0   0   0
 0   0   0   0  1/2  0  1/2  0   0
 0   0   0   0   0  1/2  0  1/2  0
 0   0   0   0   0   0  1/2  0  1/2
 0   0   0   0   0   0   0  1/2  0

S = (I - PT)^-1 =
1.8  1.6  1.4  1.2  1.   0.8  0.6  0.4  0.2
1.6  3.2  2.8  2.4  2.   1.6  1.2  0.8  0.4
1.4  2.8  4.2  3.6  3.   2.4  1.8  1.2  0.6
1.2  2.4  3.6  4.8  4.   3.2  2.4  1.6  0.8
1.   2.   3.   4.   5.   4.   3.   2.   1.
0.8  1.6  2.4  3.2  4.   4.8  3.6  2.4  1.2
0.6  1.2  1.8  2.4  3.   3.6  4.2  2.8  1.4
0.4  0.8  1.2  1.6  2.   2.4  2.8  3.2  1.6
0.2  0.4  0.6  0.8  1.   1.2  1.4  1.6  1.8
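The values above follow the closed form S(i, j) = 2 min(i, j)(N - max(i, j))/N with N = 10 (our observation, not stated on the slide); checking that (I - PT)S = I confirms it:

```python
from fractions import Fraction

N = 10                 # play until $0 or $10
n = N - 1              # transient states $1..$9
half = Fraction(1, 2)

# Tridiagonal PT: win or lose $1 with probability 1/2 each.
PT = [[Fraction(0)] * n for _ in range(n)]
for i in range(n):
    if i > 0:
        PT[i][i - 1] = half
    if i < n - 1:
        PT[i][i + 1] = half

# Conjectured closed form, read off the matrix above (states labeled 1..9):
# S(i, j) = 2 * min(i, j) * (N - max(i, j)) / N
S = [[Fraction(2 * min(i, j) * (N - max(i, j)), N)
      for j in range(1, N)] for i in range(1, N)]

# Verifying (I - PT) S = I proves S really is (I - PT)^-1.
for i in range(n):
    for j in range(n):
        entry = sum((Fraction(int(i == k)) - PT[i][k]) * S[k][j]
                    for k in range(n))
        assert entry == Fraction(int(i == j))
print(float(S[0][0]), float(S[4][4]))  # 1.8 5.0
```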
Gambler's Ruin:
Recurrent State Probabilities
Let R = matrix of transition probabilities from transient to recurrent states.
Let F = matrix of eventual probabilities of reaching recurrent states from transient states.
In general, F = SR = (I - PT)^-1 R.

Example: fair game with $max = 10; the recurrent states are $0 and $10. Columns of R and F correspond to $0 and $10.

R =
1/2   0
 0    0
 0    0
 0    0
 0    0
 0    0
 0    0
 0    0
 0   1/2

F = SR =
9/10  1/10
4/5   1/5
7/10  3/10
3/5   2/5
1/2   1/2
2/5   3/5
3/10  7/10
1/5   4/5
1/10  9/10

For an unfair game with P(Win) = 9/19 and P(Lose) = 10/19 (roulette odds), the 1/2 entries of the P matrix are replaced by 9/19 (win, one square right) and 10/19 (lose, one square left), giving

F =
0.940518   0.0594822
0.874426   0.125574
0.800992   0.199008
0.719397   0.280603
0.628737   0.371263
0.528003   0.471997
0.416077   0.583923
0.291715   0.708285
0.153534   0.846466

Note the bias for reaching $0.
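These absorption probabilities also have the classical gambler's ruin closed form; a sketch comparing it with the slide's numbers (the function is ours):

```python
from fractions import Fraction

def win_prob(x, N, p):
    """P(reach $N before $0 | start at $x), winning $1 w.p. p, losing $1 w.p. 1-p."""
    q = 1 - p
    if p == q:
        return Fraction(x, N)   # fair game: linear in the stake
    r = q / p
    return (1 - r**x) / (1 - r**N)

N = 10

# Fair game: F(x, $10) = x/10, matching the F = SR column above.
assert win_prob(3, N, Fraction(1, 2)) == Fraction(3, 10)

# Roulette odds: win w.p. 9/19, lose w.p. 10/19.
p = Fraction(9, 19)
# The closed form satisfies the one-step recurrence h(x) = p h(x+1) + q h(x-1):
for x in range(1, N):
    assert win_prob(x, N, p) == p * win_prob(x + 1, N, p) + (1 - p) * win_prob(x - 1, N, p)

print(float(win_prob(1, N, p)))   # about 0.0595, matching the slide's 0.0594822
print(float(win_prob(9, N, p)))   # about 0.8465, matching the slide's 0.846466
```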
Simplified Version of
Chutes and Ladders
[P: 20x20 transition matrix for the simplified Chutes and Ladders board. Each turn a die roll advances the player 1 to 6 squares (probability 1/6 each); a square at the foot of a ladder or the top of a chute sends the player directly to its destination (a row with a single 1); the final square is absorbing.]
[Bar chart: expected number of visits to each square (vertical axis 0.0 to 3.0) versus square number.]
Example 1:
12 Square Loop
Use one die for movement and the 12-square loop of the last slide. From square i the player advances 1 to 6 squares around the loop, each with probability 1/6. We have:

P =
 0  1/6 1/6 1/6 1/6 1/6 1/6  0   0   0   0   0
 0   0  1/6 1/6 1/6 1/6 1/6 1/6  0   0   0   0
 0   0   0  1/6 1/6 1/6 1/6 1/6 1/6  0   0   0
 0   0   0   0  1/6 1/6 1/6 1/6 1/6 1/6  0   0
 0   0   0   0   0  1/6 1/6 1/6 1/6 1/6 1/6  0
 0   0   0   0   0   0  1/6 1/6 1/6 1/6 1/6 1/6
1/6  0   0   0   0   0   0  1/6 1/6 1/6 1/6 1/6
1/6 1/6  0   0   0   0   0   0  1/6 1/6 1/6 1/6
1/6 1/6 1/6  0   0   0   0   0   0  1/6 1/6 1/6
1/6 1/6 1/6 1/6  0   0   0   0   0   0  1/6 1/6
1/6 1/6 1/6 1/6 1/6  0   0   0   0   0   0  1/6
1/6 1/6 1/6 1/6 1/6 1/6  0   0   0   0   0   0
Example 2:
12 Square Loop
P = [12x12 transition matrix for the loop with unequal move probabilities; each row contains the entries 0.1, 0.2, 0.3, and 0.5, shifted one position per row around the loop.]
Periodicity
Let Pii = P(state i to state i).
Let P^n_ii = P(state i to state i in n moves).
If P^n_ii = 0 whenever d does not divide n (and d is the largest such integer), then the Markov chain has period d.
Example: For the 12-square loop, let P(Advance 1) = P(Move back 1) = 1/2, which is a random walk on the loop. This chain has period 2 (see next slide).
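The period can be computed as the gcd of the times n with P^n_ii > 0; a sketch for the random walk on the 12-square loop (looking 24 moves ahead is an arbitrary cutoff):

```python
from functools import reduce
from math import gcd

n = 12
# Random walk on the loop: move +1 or -1 (mod 12), each w.p. 1/2.
P = [[0.5 if (j - i) % n in (1, n - 1) else 0.0 for j in range(n)]
     for i in range(n)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Record every number of moves m (up to 24) at which a return to square 0 can occur.
return_times = []
M = P
for m in range(1, 25):
    if m > 1:
        M = matmul(M, P)   # M = P^m
    if M[0][0] > 0:
        return_times.append(m)

period = reduce(gcd, return_times)
print(return_times[:4], period)  # [2, 4, 6, 8] 2
```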
Example of Periodicity:
Random Walk on Loop
A player can only return to any square after an even number of moves. The limiting probabilities are 1/6 for even squares after an odd number of moves and 1/6 for odd squares after an even number of moves.
P =
 0  1/2  0   0   0   0   0   0   0   0   0  1/2
1/2  0  1/2  0   0   0   0   0   0   0   0   0
 0  1/2  0  1/2  0   0   0   0   0   0   0   0
 0   0  1/2  0  1/2  0   0   0   0   0   0   0
 0   0   0  1/2  0  1/2  0   0   0   0   0   0
 0   0   0   0  1/2  0  1/2  0   0   0   0   0
 0   0   0   0   0  1/2  0  1/2  0   0   0   0
 0   0   0   0   0   0  1/2  0  1/2  0   0   0
 0   0   0   0   0   0   0  1/2  0  1/2  0   0
 0   0   0   0   0   0   0   0  1/2  0  1/2  0
 0   0   0   0   0   0   0   0   0  1/2  0  1/2
1/2  0   0   0   0   0   0   0   0   0  1/2  0
[Board schematic: Monopoly loop with Start, Jail, and Go to Jail marked; play moves clockwise.]
Limiting probabilities:
{0.0229,0.0231,0.0233,0.0236,0.0232,0.023,0.0229,0.0229,0.023,0.0231,
0.05,0.0231,0.0239,0.0246,0.0253,0.0261,0.027,0.028,0.0276,0.0273,
0.0271,0.0269,0.0267,0.0264,0.0268,0.027,0.0271,0.0271,0.027,0.0269,
0,0.0269,0.0261,0.0254,0.0247,0.0239,0.023,0.022,0.0224,0.0227}
Approximate Monopoly
Ignore that three doubles in a row means going to jail.
There are 10 (of 16) Chance cards that relocate the player: Advance to Go, Advance to Illinois Avenue, Advance to the next Utility, two copies of Advance to the nearest Railroad, Advance to St. Charles Place, Go to Jail, Go to Reading RR, Go to Boardwalk, and Go back 3 spaces.
There are 2 (of 16) Community Chest cards that relocate the player: Advance to Go and Go to Jail.
Assume that each card has an equal chance of appearing (equivalent to shuffling the cards each time a player lands on Chance or Community Chest).
Assume players immediately pay to get out of jail.
The best strategy for the beginning of the game
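Many entries of the transition matrix on the next slide are built from the two-dice roll distribution; a quick sketch of that building block:

```python
from fractions import Fraction

# Distribution of the sum of two dice: the building block of the Monopoly matrix.
counts = {}
for a in range(1, 7):
    for b in range(1, 7):
        counts[a + b] = counts.get(a + b, 0) + 1

probs = {s: Fraction(c, 36) for s, c in counts.items()}
assert sum(probs.values()) == 1
print(probs[2], probs[7], probs[12])  # 1/36 1/6 1/36
```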
P = [40x40 transition matrix for approximate Monopoly. Entries are exact fractions built from the two-dice probabilities 1/36, 1/18, 1/12, 1/9, 5/36, 1/6, adjusted for the Chance, Community Chest, and Go to Jail relocations.]
Limiting Probabilities of
Approximate Monopoly
Limiting probabilities given below. [Plot: limiting probabilities rescaled around the board, with Start, Jail, and Illinois Avenue marked.]
{0.03114,0.02152,0.019,0.02186,0.02351,0.02993,0.02285,0.00876,0.02347,0.02331,
0.05896,0.02736,0.02627,0.02386,0.02467,0.02919,0.02777,0.02572,0.02917,0.03071,
0.02875,0.0283,0.01048,0.02739,0.03188,0.03064,0.02707,0.02679,0.02811,0.02591,
0,0.02687,0.02634,0.02377,0.0251,0.02446,0.00872,0.02202,0.02193,0.02647}
See also: http://www.tkcs-collins.com/truman/monopoly/monopoly.shtml
Mathematica: Create P
and Compute S
tranMatrixLinear1[n_, probs_] := Module[{p = {}, r, i},
  (* Build row r: r-1 zeros, then the move probabilities, truncated at column n. *)
  Do[AppendTo[p, {Table[0, {i, 1, r - 1}],
      probs[[1 ;; Min[n - r + 1, Length[probs]]]],
      Table[0, {i, r + Length[probs], n}]} // Flatten],
    {r, 1, n}];
  (* Put any leftover probability mass in the last column so each row sums to 1. *)
  Do[p[[r, n]] = 1 - Fold[Plus, 0, p[[r, 1 ;; n - 1]]], {r, 1, n}];
  Return[p]
]
n=10;
p=tranMatrixLinear1[n,{0,1,1}/2];
pt=p[[1;;n-1,1;;n-1]];
s=Inverse[IdentityMatrix[n-1]-pt];
s//MatrixForm//N
Output (s // MatrixForm // N):
1.       0.5      0.75     0.625    0.6875   0.65625  0.671875 0.664063 0.667969
0.       1.       0.5      0.75     0.625    0.6875   0.65625  0.671875 0.664063
0.       0.       1.       0.5      0.75     0.625    0.6875   0.65625  0.671875
0.       0.       0.       1.       0.5      0.75     0.625    0.6875   0.65625
0.       0.       0.       0.       1.       0.5      0.75     0.625    0.6875
0.       0.       0.       0.       0.       1.       0.5      0.75     0.625
0.       0.       0.       0.       0.       0.       1.       0.5      0.75
0.       0.       0.       0.       0.       0.       0.       1.       0.5
0.       0.       0.       0.       0.       0.       0.       0.       1.
Mathematica: Graphics
piMatrix = Table[Table[1,{limit}],{limit}];
piMatrix[[1]] = pi[[1;;limit]];
Do[piMatrix[[i,1]] = pi[[n-i+2]];
piMatrix[[i,limit]] = pi[[limit+i-1]],
{i,2,limit-1}]
piMatrix[[limit]] = pi[[n/2+1;;3 n/4+1]]//Reverse;
g1 = Graphics[Raster[piMatrix //Transpose] ];
g2 = ListLinePlot[gridLoop[n],
PlotStyle->Directive[Black,Thick],AspectRatio->1/n];
Show[g1,g2]
Further Readings
Abbott, Steve and Matt Richey. 1997. Take a Walk on the Boardwalk. The College Mathematics Journal. 28(3): 162-171.
Althoen, S. C., L. King, and K. Schilling. 1993. How Long is a Game of Snakes and Ladders? Mathematical Gazette. 77: 71-76.
Ash, Robert and Richard Bishop. 1972. Monopoly as a Markov Process. Mathematics Magazine. 45: 26-29.
Bewersdorff, Jörg. 2005. Luck, Logic, and White Lies: The Mathematics of Games. A. K. Peters, Ltd. Chapter 16 analyzes Monopoly with Markov chains.
Diaconis, Persi and Rick Durrett. 2000. Chutes and Ladders in Markov Chains. Technical Report 2000-20, Department of Statistics, Stanford University.
Dirks, Robert. 1999. Hi Ho! Cherry-O, Markov Chains, and Mathematica. Stats. Spring (25): 23-27.
Gadbois, Steve. 1993. Mr. Markov Plays Chutes and Ladders. The UMAP Journal. 14(1): 31-38.
Johnson, Roger W. 2003. Using Games to Teach Markov Chains. Primus. http://findarticles.com/p/articles/mi_qa3997/is_200312/ai_n9338086/print
Murrell, Paul. 1999. The Statistics of Monopoly. Chance. 12(4): 36-40.
Stewart, Ian. 1996. How Fair is Monopoly? Scientific American. 274(4): 104-105.
Stewart, Ian. 1996. Monopoly Revisited. Scientific American. 275(4): 116-119.
Tan, Barış. 1997. Markov Chains and the Risk Board Game. Mathematics Magazine. 70(5): 349-357.