h_A in state H and ℓ_A in state L. Let ρ > 0 denote the continuous-time discount rate. In this section, we limit consideration to the case in which exit is never optimal after expansion due to a prohibitively high exit cost. In similar settings with investment in project-specific assets, the assumption of a prohibitively high exit cost was also made by Lippman and Rumelt (1992) and Bernardo and Chowdhry (2002). We relax this assumption in §4 and investigate its effect.
The firm observes the profit stream without error. However, because of noise, the firm can never perfectly determine the underlying profit rate (i.e., state of the project). We model the cumulative profit stream {X_t : t ≥ 0} during the pilot phase as a Brownian motion with unknown drift:

X_t = μt + σB_t,  (1)
where the unknown drift μ ∈ {h, ℓ} is the mean profit per unit time, σ is the volatility of the cumulative profit, and B = {B_t : t ≥ 0} is a one-dimensional standard Brownian motion. After the firm expands the project, the new profit stream {Y_t : t ≥ 0} is given by Y_t = μ_A t + σB_t, where μ_A ∈ {h_A, ℓ_A}; if the firm exits, then the profit stream is zero.
We use a continuous-time approach and model the profit stream as a one-dimensional Brownian motion because of the analytical tractability of the resulting model. Likewise, numerous papers in economics have modeled cumulative profits under incomplete information as Brownian motion with unknown drift and obtained useful insights from analytically tractable models (Bolton and Harris 1999, Keller and Rady 1999, Bergemann and Välimäki 2000, Bernardo and Chowdhry 2002, Ryan and Lippman 2003, Decamps et al. 2005).
This model is general enough to be applicable in many
contexts.
Example 1 (Expansion). Consider a firm testing a new, unproven technology by operating a single pilot production plant. If the plant proves to be a success, n more plants will be built at cost k. Alternatively, n can be regarded as an increase in the production capacity. Suppose that the profit per plant is not changed by the expansion. The expected profit per unit time from the pilot plant is h > 0 or ℓ < 0, and the expected profit per unit time from the n + 1 plants after expansion will be h_A = (n + 1)h or ℓ_A = (n + 1)ℓ.
Example 2 (Acquisition of a Dedicated Facility). Consider a firm testing a new project by outsourcing the main task or by renting a facility for the core task of the project. By outsourcing or renting, the firm can easily terminate the project, but the profit stream is diminished because it has to pay a price for outsourcing or renting. The firm can acquire a dedicated facility of its own to avoid paying the price for renting or outsourcing. Before the acquisition of the dedicated facility, the expected profit per unit time is h or ℓ; after the acquisition, the expected profit per unit time improves by h_A − h or ℓ_A − ℓ. The improvement in expected profit per unit time might be state-dependent because the cost of renting might be on a per-unit-sold basis.
To simplify our presentation, we will focus on expansion decisions (Example 1) with the understanding that our model fully addresses Example 2 as well. For consistency, we shall speak in terms of project expansion rather than acquisition of project-specific assets.
3.1. Posterior Probability
At time zero, the firm's prior probability that S = H is π₀. Our first task is to compute the posterior probability that the project is in the high state at time t. Subsequently, we utilize the posterior probability process to ascertain the optimal expansion and exit decisions.
Let X_t, μ, and B_t of Equation (1) be defined on a probability space (Ω, F, P) so that X_t, μ, and B_t are measurable with respect to F, and let {F_t : t ≥ 0} be the natural filtration with respect to the observable process {X_t : t ≥ 0}. (The unknown drift μ and the unobservable process {B_t : t ≥ 0} are not adapted to the filtration F_t.) Here the probability measure is denoted by P. Let P_t ≡ P[μ = h | F_t] = E_{π₀}[1_{μ=h} | F_t] denote the posterior probability of the event {μ = h} at time t, where E_{π₀}[·] denotes the expectation conditional on the initial condition P₀ = π₀, and 1_A is the indicator function of a set A.
The posterior probability process P is a continuous martingale with respect to {F_t : t ≥ 0}, a strong Markov process that is homogeneous in time, and the unique solution to the following stochastic differential equation (Liptser and Shiryaev 1974, p. 371; Peskir and Shiryaev 2006, pp. 288–289; Ryan and Lippman 2003, pp. 252–254):

dP_t = ((h − ℓ)/σ) P_t(1 − P_t) dB̃_t,  (2)
Kwon and Lippman: Acquisition of Project-Specific Assets with Bayesian Updating
Operations Research 59(5), pp. 1119–1130, © 2011 INFORMS
where B̃ = {B̃_t, F_t : t ≥ 0} is a one-dimensional standard Brownian motion defined by

B̃_t ≡ (1/σ)(X_t − ∫₀ᵗ E[μ | F_s] ds) = (1/σ)(X_t − ∫₀ᵗ (P_s h + (1 − P_s)ℓ) ds).
Unlike B_t, which is unobservable, the new Brownian motion B̃_t is observable because it can be completely constructed from the observed values of X_t over time.
Equation (2) implies that the posterior process undergoes a more rapid change if the coefficient ((h − ℓ)/σ)P_t(1 − P_t) increases. The factor (h − ℓ)/σ, which is the uncertainty in the drift normalized by the volatility, has the interpretation of the signal-to-noise ratio. Hence, (h − ℓ)/σ measures the rate of arrival of new information, and it follows that the posterior evolves more rapidly with a high value of (h − ℓ)/σ (Bergemann and Välimäki 2000). The additional factor P_t(1 − P_t), which peaks at P_t = 0.5, results from the fact that the posterior process requires more information (and hence a longer time) to change if P_t is closer to strong beliefs (either 0 or 1).
Using the strong Markov property of the process P and Bayes' rule (Peskir and Shiryaev 2006, pp. 288–289), we have

P_t ≡ E_{π₀}[1_{μ=h} | F_t] = E_{π₀}[1_{μ=h} | X_t]

  = π₀ exp(−(X_t − ht)²/(2σ²t)) · [π₀ exp(−(X_t − ht)²/(2σ²t)) + (1 − π₀) exp(−(X_t − ℓt)²/(2σ²t))]⁻¹

  = [1 + ((1 − π₀)/π₀) exp(−((h − ℓ)/σ²)((μ − (h + ℓ)/2)t + σB_t))]⁻¹.  (3)
Because μ − (h + ℓ)/2 = (h − ℓ)/2 if μ = h and μ − (h + ℓ)/2 = (ℓ − h)/2 if μ = ℓ, Equation (3) reflects the fact that P_t tends to increase if μ = h and to decrease if μ = ℓ.
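The posterior in Equation (3) depends on the observed path only through X_t, so it is straightforward to simulate. The following is a minimal sketch, with illustrative parameter values (not taken from the paper), that draws a profit path under μ = h and applies the observable form of Equation (3); the posterior drifts toward 1 as evidence accumulates.

```python
import numpy as np

rng = np.random.default_rng(0)

h, l = 1.0, -1.0      # high and low profit rates (illustrative values)
sigma = 1.0           # volatility of the cumulative profit
pi0 = 0.5             # prior probability that mu = h
T, n = 20.0, 20000
dt = T / n

mu = h                # true (hidden) drift; used only to generate the path
t = np.linspace(dt, T, n)
# cumulative profit path X_t = mu*t + sigma*B_t
X = np.cumsum(mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(n))

# Equation (3), written in terms of the observable X_t:
# P_t = [1 + ((1-pi0)/pi0) * exp(-((h-l)/sigma^2) * (X_t - (h+l)t/2))]^(-1)
odds = (1 - pi0) / pi0 * np.exp(-(h - l) / sigma**2 * (X - 0.5 * (h + l) * t))
P = 1.0 / (1.0 + odds)

print(round(P[-1], 3))  # close to 1 when mu = h
```

Rerunning with mu = l sends the posterior toward 0 instead, illustrating the two tendencies noted above.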
3.2. The Objective Function
The firm seeks to maximize its expected discounted cumulative profit, where ρ > 0 is the discount rate. Suppose the firm decides to expand at the stopping time τ. Then the discounted reward over the time interval (τ, ∞), discounted back to time τ, is given by

E_{π₀}[∫_τ^∞ e^{−ρ(t−τ)} dY_t | F_τ] = E_{P_τ}[∫₀^∞ e^{−ρt} dY_t] = E_{P_τ}[∫₀^∞ μ_A e^{−ρt} dt] = (1/ρ)[P_τ h_A + (1 − P_τ)ℓ_A],

where the first equality follows from P being a strong Markov process and the second from E_x[∫₀^∞ e^{−ρt} dB_t] = 0.
If the firm decides to exit, then the associated reward from exit is zero. Thus, the optimal reward r(·) from stopping at time τ is the greater of the reward from expansion and the reward from exit:

r(P_τ) = max{(1/ρ)[P_τ h_A + (1 − P_τ)ℓ_A] − k, 0},

where k is the cost of expansion.
The objective function of the firm is the time-integral of the discounted profit stream up to time τ plus the reward from stopping at time τ:

V_τ(π₀) = E_{π₀}[∫₀^τ e^{−ρt} dX_t + e^{−ρτ} r(P_τ)]
  = E_{π₀}[(μ/ρ)(1 − e^{−ρτ}) + e^{−ρτ} r(P_τ)]
  = (1/ρ)[π₀h + (1 − π₀)ℓ] + E_{π₀}[e^{−ρτ} g(P_τ)],  (4)

where

g(x) = (1/ρ) max{x(h_A − h) + (1 − x)(ℓ_A − ℓ) − kρ, −xh − (1 − x)ℓ}.  (5)
The reward function g(x) is the reward from stopping at posterior probability x, net of the value of the pilot profit stream. Because the first term in the last line of Equation (4) does not depend on the stopping time τ, the stopping problem of Equation (4) is equivalent to maximizing

R_τ(π₀) ≡ E_{π₀}[e^{−ρτ} g(P_τ)].  (6)

For convenience, we regard R_τ(π₀) as the objective function for the remainder of this section.
3.3. Optimal Policy
As revealed in Equation (4), the firm seeks a stopping time τ* that maximizes R_τ(·). Let R*(·) ≡ R_{τ*}(·) denote the optimal return function whenever the optimal stopping time τ* exists. The existence of an optimal stopping time is not guaranteed in general, but we can prove that our model satisfies the sufficient conditions for the existence of an optimal stopping time. Before proving the sufficient conditions, we
need to lay out some technical preliminaries. Clearly, if P_t = π at some time t, then it is optimal to continue if the return R*(π) exceeds the return R₀(π) = g(π) from stopping immediately; we refer to the set {π: R*(π) > R₀(π)} as the continuation region. Define the differential operator

L ≡ −ρ + (1/2)((h − ℓ)/σ)² π²(1 − π)² (d²/dπ²).

A necessary condition satisfied by R*(·) is that R*(·) satisfy the differential equation [LR*](π) = 0 if π belongs to the continuation region. There are two fundamental solutions, ψ and φ, to the second-order linear ordinary differential equation [Lf](π) = 0:

ψ(π) = π^{(1/2)(1+γ)} (1 − π)^{(1/2)(1−γ)},
φ(π) = π^{(1/2)(1−γ)} (1 − π)^{(1/2)(1+γ)},

where

γ ≡ √(1 + 8ρσ²/(h − ℓ)²).  (7)

Note that ψ(·) is convex increasing and φ(·) is convex decreasing.
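As a sanity check on these expressions, one can verify numerically that ψ and φ annihilate the operator L. The sketch below, under illustrative parameter values, approximates the second derivative by central differences:

```python
import numpy as np

rho, sigma, h, l = 1.0, 1.0, 1.0, -1.0   # illustrative parameters
gamma = np.sqrt(1.0 + 8.0 * rho * sigma**2 / (h - l)**2)   # Equation (7)

psi = lambda p: p**(0.5*(1 + gamma)) * (1 - p)**(0.5*(1 - gamma))
phi = lambda p: p**(0.5*(1 - gamma)) * (1 - p)**(0.5*(1 + gamma))

def L(f, p, eps=1e-5):
    """Apply -rho*f + (1/2)((h-l)/sigma)^2 p^2 (1-p)^2 f'' at p."""
    f2 = (f(p + eps) - 2.0*f(p) + f(p - eps)) / eps**2   # central difference
    return -rho*f(p) + 0.5*((h - l)/sigma)**2 * p**2 * (1 - p)**2 * f2

for p in (0.2, 0.5, 0.8):
    print(abs(L(psi, p)) < 1e-3, abs(L(phi, p)) < 1e-3)  # True True
```

Both fundamental solutions satisfy the equation up to discretization error, which confirms the exponents (1 ± γ)/2.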
If the firm stops at time τ with P_τ = π, then the firm's (expected discounted) return over the interval [τ, ∞) is g(π). From Equation (5), note that g(π) is the maximum of two functions g₁(π) ≡ [π(h_A − h) + (1 − π)(ℓ_A − ℓ)]/ρ − k and g₂(π) ≡ −πh/ρ − (1 − π)ℓ/ρ, each of which is linear in the posterior probability P_τ = π at the stopping time. The first function g₁(π) gives the return associated with expansion and is increasing in π. The second function g₂(π) is associated with the return to exit and is decreasing in π.
If (kρ − ℓ_A)/(h_A − ℓ_A) ∈ (0, 1), then the two linear functions cross at π̃, where

π̃ ≡ (kρ − ℓ_A)/(h_A − ℓ_A),

and g(·) is minimized at π̃. Note that g₁(π) > g₂(π) if and only if π > π̃, so it is optimal to expand if P_τ ≥ π̃ and optimal to exit if P_τ < π̃.
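For concreteness, the crossing point π̃ can be checked numerically. The sketch below uses the parameter values from the notes to Figure 1, treated here simply as illustrative numbers:

```python
rho, k = 1.0, 0.25
h, l = 1.0, -1.0        # pilot profit rates
hA, lA = 1.5, -0.5      # post-expansion profit rates

# the two linear pieces of g: expansion and exit, net of the pilot stream value
g1 = lambda p: (p*(hA - h) + (1 - p)*(lA - l))/rho - k
g2 = lambda p: -(p*h + (1 - p)*l)/rho

p_tilde = (k*rho - lA) / (hA - lA)
print(p_tilde)                          # 0.375
print(abs(g1(p_tilde) - g2(p_tilde)))   # ~0: the two lines cross at p_tilde
```

Above π̃ the expansion piece dominates and below it the exit piece dominates, matching the stated policy.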
First, we consider the cases (kρ − ℓ_A)/(h_A − ℓ_A) ≥ 1 or (kρ − ℓ_A)/(h_A − ℓ_A) ≤ 0.
Proposition 1. (i) Suppose (kρ − ℓ_A)/(h_A − ℓ_A) ≥ 1. Then the optimal policy is to exit when P_t hits a lower threshold

π̲ = −(γ − 1)ℓ / [(γ + 1)h − (γ − 1)ℓ]

and to continue operation otherwise. The optimal return is given by

R*(π) = [−2ℓh/ρ] / √((γ² − 1)(−ℓh)) · [((γ − 1)(−ℓ))/((γ + 1)h)]^{γ/2} φ(π)  if π > π̲,
R*(π) = g(π)  otherwise.
(ii) Suppose (kρ − ℓ_A)/(h_A − ℓ_A) ≤ 0. Then the optimal policy is to expand the project when P_t hits an upper threshold

π̄ = (γ + 1)(kρ + ℓ − ℓ_A) / [(γ − 1)(h_A − h − kρ) + (γ + 1)(kρ + ℓ − ℓ_A)]

and to continue operation otherwise. The optimal return is given by

R*(π) = [2(kρ + ℓ − ℓ_A)(h_A − h − kρ)/ρ] / √((γ² − 1)(h_A − h − kρ)(kρ + ℓ − ℓ_A)) · [((γ − 1)(h_A − h − kρ))/((γ + 1)(kρ + ℓ − ℓ_A))]^{γ/2} ψ(π)  if π < π̄,
R*(π) = g(π)  otherwise.
If (kρ − ℓ_A)/(h_A − ℓ_A) ≥ 1, i.e., kρ ≥ h_A, the exit option is always better than the expansion option because the expansion cost is too high, so expansion is never optimal. If (kρ − ℓ_A)/(h_A − ℓ_A) ≤ 0, i.e., kρ ≤ ℓ_A, then the expansion option is always better than the exit option because the profit improvement, even in the low state, exceeds the cost of investment. In these two cases, the optimal policy is characterized by only one threshold. The single-threshold solutions are studied in detail by Ryan and Lippman (2003) and Kwon et al. (2011). Henceforth, we focus on the more interesting case of π̃ = (kρ − ℓ_A)/(h_A − ℓ_A) ∈ (0, 1) when the optimal policy is characterized by two thresholds.
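The single-threshold solution of Proposition 1(i) can be verified numerically: at the threshold π̲ the candidate return A·φ(π) must value-match and smooth-paste against the exit reward g₂(π) = −(πh + (1 − π)ℓ)/ρ. A sketch under illustrative parameters:

```python
import numpy as np

rho, sigma, h, l = 1.0, 1.0, 1.0, -1.0   # illustrative parameters
gamma = np.sqrt(1.0 + 8.0*rho*sigma**2 / (h - l)**2)

p_low = -(gamma - 1.0)*l / ((gamma + 1.0)*h - (gamma - 1.0)*l)  # exit threshold
phi = lambda p: p**(0.5*(1 - gamma)) * (1 - p)**(0.5*(1 + gamma))
A = (-2.0*l*h/rho) / np.sqrt((gamma**2 - 1.0)*(-l*h)) \
    * ((gamma - 1.0)*(-l) / ((gamma + 1.0)*h))**(gamma/2.0)

R  = lambda p: A * phi(p)                  # candidate return for p > p_low
g2 = lambda p: -(p*h + (1 - p)*l)/rho      # reward from immediate exit

eps = 1e-6
print(R(p_low) - g2(p_low))                           # value matching: ~0
print((R(p_low + eps) - R(p_low))/eps + (h - l)/rho)  # smooth pasting: ~0
```

Both residuals vanish up to numerical error, consistent with the continuous differentiability required of the optimal return.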
Theorem 1. Assume (kρ − ℓ_A)/(h_A − ℓ_A) ∈ (0, 1). (i) The optimal policy for Equation (6) always exists, and there is a pair of thresholds π̲ and π̄ which satisfy 0 < π̲ < π̄ < 1 such that the optimal stopping time is given by

τ* = inf{t > 0: P_t ∉ (π̲, π̄)}.

The optimal return function is given by

R*(π) = c₁ψ(π) + c₂φ(π)  if π ∈ (π̲, π̄),
R*(π) = g(π)  otherwise,  (8)

where c₁ and c₂ are the unique positive numbers such that R*(·) is continuously differentiable.
(ii) The time-to-decision τ* satisfies

E_{π₀}[τ*] = [(π₀ − π̲)/(π̄ − π̲)] T(π̄) + [(π̄ − π₀)/(π̄ − π̲)] T(π̲) − T(π₀),  (9)

where T(π) ≡ (2σ²/(h − ℓ)²)(2π − 1) ln(π/(1 − π)).
The proof of Theorem 1 is given in the online appendix. The proof of the existence of an optimal policy consists of constructing a function f(·) (a candidate for the optimal return function) that satisfies the differential equation [Lf](π) = 0 in the continuation region, the boundary conditions f(π) = g(π) for π ∉ (π̲, π̄), and continuous differentiability (smooth pasting conditions). The expected time-to-decision formula in Equation (9) was also proved in Poor and Hadjiliadis (2008, p. 87) using the standard property of martingales; we reproduce its proof in the online appendix.
The optimal policy is intuitively straightforward: expand the project if the posterior probability P_t is high enough, and exit if it is low enough; otherwise, continue the pilot project. The expected time-to-decision E_{π₀}[τ*] is finite because the optimal policy is characterized by two thresholds, either of which is reached by P_t eventually. By contrast, in exit-only or investment-only models as in the case of Proposition 1 or as in Ryan and Lippman (2003), the expected time-to-decision is infinite because there is a positive probability that the threshold is never reached in a finite amount of time.
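Equation (9) is easy to evaluate, and it can be cross-checked by Monte Carlo simulation of the posterior dynamics (2). The sketch below uses illustrative thresholds and parameters (not the optimal thresholds; the martingale argument behind (9) applies to any exit interval):

```python
import numpy as np

h, l, sigma = 1.0, -1.0, 1.0
s = (h - l) / sigma                    # signal-to-noise ratio
p_low, p_high, p0 = 0.2, 0.8, 0.5      # illustrative thresholds and prior

T = lambda p: (2*sigma**2/(h - l)**2) * (2*p - 1) * np.log(p/(1 - p))
Etau = ((p0 - p_low)/(p_high - p_low) * T(p_high)
        + (p_high - p0)/(p_high - p_low) * T(p_low) - T(p0))   # Equation (9)

# Monte Carlo: simulate dP = s P(1-P) dW until P exits (p_low, p_high)
rng = np.random.default_rng(1)
dt, n_paths = 1e-3, 2000
times = np.empty(n_paths)
for i in range(n_paths):
    p, t = p0, 0.0
    while p_low < p < p_high:
        p += s * p*(1 - p) * np.sqrt(dt) * rng.standard_normal()
        t += dt
    times[i] = t

print(round(Etau, 3), round(times.mean(), 3))   # the two agree closely
```

The small gap between the closed form and the simulation comes from Monte Carlo noise and the Euler discretization of the exit time.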
3.4. Comparative Statics
We now examine the behavior of the optimal thresholds and of the expected time-to-decision E_{π₀}[τ*] as functions of the volatility σ.

Theorem 2 (Expected Time-to-Decision). (i) As σ → 0, π̲ → 0, π̄ → 1, and E_{π₀}[τ*] → 0.
(ii) Suppose g(π̃) ≥ 0. As σ → ∞, π̲ → π̃, π̄ → π̃, and E_{π₀}[τ*] → 0.
(iii) Suppose g(π̃) < 0. As σ → ∞, π̲ → −ℓ/(h − ℓ) and π̄ → (kρ + ℓ − ℓ_A)/((h_A − h) − (ℓ_A − ℓ)). Moreover, E_{π₀}[τ*] → ∞ if π₀ lies in the open interval between these two limits, E_{π₀}[τ*] → 0 if π₀ lies outside the corresponding closed interval, and E_{π₀}[τ*] = O(σ) if π₀ equals either limit.¹
1
Note that g()-0 if and only if I,(hI)--
(I
+ko),(h
+ko),(h
+ko),
(h
) -
0 because an immediate stop in the interval
(I,(hI), (I
+ko),(h
|t
] as a function of
the volatility u. As u increases from zero to very large
numbers, 0 increases from zero to a limiting value, and
+ko),(h
|t
|t
] for large u
depends on the initial probability and the sign of g( ). If
g( ) 0, then g() 0 for all |0, 1] because g( )
takes its minimum value at . In this case, if u is very
large, due to slow arrival of information, it is not worth
spending much time on the pilot project, and the rm is
better off making a quick expansion or exit decision. Con-
sequently, E
|t
+ko),(h
] in case g( ) >0.
Figure 1. Probability thresholds (left) and expected time-to-decision (right) as functions of σ for the base model and the extension.
Notes. ρ = 1, k = 0.25, h = 1, ℓ = −1, h_A = 1.5, and ℓ_A = −0.5. The dashed lines are the results from the model with a post-expansion exit option.
By contrast, E_{π₀}[τ*] grows with σ because it takes a long time for P_t to exit the interval (−ℓ/(h − ℓ), (kρ + ℓ − ℓ_A)/((h_A − h) − (ℓ_A − ℓ))) in case g(π̃) < 0.
Figure 2. Probability thresholds (left) and expected time-to-decision (right) as functions of σ for the base model and the extension.
Notes. ρ = 1, k = 0.25, h = 1, ℓ = −1, h_A = 10, and ℓ_A = −10. The dashed lines are the results from the model with a post-expansion exit option.
4. The Model with a Post-Expansion Exit Option
We now relax the assumption that exit after expansion is prohibitively costly and embed an exit option in the expanded project. We will see that the comparative statics results for the cases of small and large values of σ are robust against the embedded exit option.
4.1. The Objective Function
In this model with an embedded option, the decision-maker's policy determines two stopping times: τ_A (time to acquire assets) and τ_E (time to exit). The cumulative profit process is X_t given by Equation (1) until τ ≡ min{τ_A, τ_E}. After acquisition at time τ_A, the cumulative profit process is represented by X^A_t, where

dX^A_t = μ_A dt + σ_A dB_t.

Here, σ_A = (n + 1)σ for Example 1 (expansion) and σ_A = σ for Example 2 (acquisition of a dedicated facility).
Then the objective function is given by

V_{τ_A, τ_E}(π₀) = E_{π₀}[∫₀^τ e^{−ρt} dX_t + 1_{τ_A < τ_E}(−ke^{−ρτ_A} + ∫_{τ_A}^{τ_E} e^{−ρt} dX^A_t)].
From the strong Markov property of {B̃_t : t ≥ 0}, we find that

V(π₀) ≡ sup_{τ_A, τ_E} V_{τ_A, τ_E}(π₀) = sup_τ E_{π₀}[∫₀^τ e^{−ρt} dX_t + e^{−ρτ} max{V_A(P_τ) − k, 0}],

where

V_A(x) ≡ sup_τ E_x[∫₀^τ e^{−ρt} dX^A_t].
In analogy with Equation (4), we re-express V(·) as

V(x) = (1/ρ)[xh + (1 − x)ℓ] + sup_τ R_τ(x),

where

R_τ(x) = E_x[e^{−ρτ} g(P_τ)],
g(x) = (1/ρ) max{ρV_A(x) − kρ − xh − (1 − x)ℓ, −xh − (1 − x)ℓ}.
If we assume that (h_A − ℓ_A)/σ_A = (h − ℓ)/σ, which is the case in Examples 1 and 2, then the form of V_A(·) is obtained by Ryan and Lippman (2003) as follows:

V_A(x) = (1/ρ)[xh_A + (1 − x)ℓ_A] + C x^{(1/2)(1−γ)}(1 − x)^{(1/2)(1+γ)}  if x > π̲_A,
V_A(x) = 0  otherwise,

where

π̲_A = −(γ − 1)ℓ_A / [(γ + 1)h_A − (γ − 1)ℓ_A],
C = (2/(ρ√(γ² − 1))) [(γ − 1)/(γ + 1)]^{γ/2} (h_A)^{(1/2)(1−γ)} (−ℓ_A)^{(1/2)(1+γ)}.  (10)
For the remainder of this section, we assume that V_A(1) > k so that it is profitable to expand when the posterior probability is sufficiently high.
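The closed form of V_A can be sanity-checked: at the post-expansion exit threshold π̲_A the value must vanish smoothly, and V_A(1) = h_A/ρ. A sketch using parameters consistent with Example 2 (σ_A = σ, so γ is unchanged; the values are illustrative):

```python
import numpy as np

rho, sigma = 1.0, 1.0
h, l = 1.0, -1.0
hA, lA = 1.5, -0.5                    # here (hA - lA)/sigma_A == (h - l)/sigma
gamma = np.sqrt(1.0 + 8.0*rho*sigma**2 / (h - l)**2)

p_lowA = -(gamma - 1.0)*lA / ((gamma + 1.0)*hA - (gamma - 1.0)*lA)
C = (2.0/(rho*np.sqrt(gamma**2 - 1.0))) * ((gamma - 1.0)/(gamma + 1.0))**(gamma/2.0) \
    * hA**(0.5*(1.0 - gamma)) * (-lA)**(0.5*(1.0 + gamma))   # Equation (10)

def V_A(x):
    """Post-expansion value with exit option (Ryan and Lippman 2003 form)."""
    if x <= p_lowA:
        return 0.0
    return (x*hA + (1 - x)*lA)/rho \
        + C * x**(0.5*(1.0 - gamma)) * (1 - x)**(0.5*(1.0 + gamma))

print(round(p_lowA, 4))
print(abs(V_A(p_lowA + 1e-9)) < 1e-6, V_A(1.0) == hA/rho)   # True True
```

The check also shows V_A(x) strictly exceeding the no-exit perpetuity value (xh_A + (1 − x)ℓ_A)/ρ in the interior, which is the option value of post-expansion exit.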
4.2. Optimal Policy
In this subsection, we obtain the optimal stopping time τ* that maximizes R_τ(·). We do so by establishing the existence of τ* and of a pair of thresholds 0 < π̲ < π̄ < 1 analogous to those of Theorem 1.

Theorem 3. The optimal stopping time τ* exists, and the optimal return R*(·) ≡ R_{τ*}(·) in the interval [π̲, π̄] is given by

R*(π) = c₁ψ(π) + c₂φ(π)  if π ∈ (π̲, π̄),
R*(π) = g(π)  if π ∈ {π̲, π̄},  (11)

where c₁ and c₂ are positive numbers such that R*(·) is continuously differentiable.
Note that this theorem is an analog of Theorem 1(i) and that Theorem 1(ii) also applies to this model without modification. Theorem 3 does not preclude the existence of other components of the continuation region disconnected from (π̲, π̄) because it is technically difficult to preclude them. Even if other components of the continuation region exist, the interval [π̲, π̄] is the only region where the dichotomous (exit vs. expansion) decision takes place, so we will focus only on the region [π̲, π̄].
4.3. Comparative Statics
In this subsection, we scrutinize the robustness of the comparative statics results of §3.4 and investigate the effect of the exit option after expansion. The post-expansion exit option has the following effect on π̲.

Proposition 3. The post-expansion exit option induces π̲ to weakly decrease.

If the post-expansion exit option is available, then the expansion option becomes more attractive to the decision-maker, and it is optimal to wait longer before making a permanent exit from the pilot project. In contrast, the effect of the post-expansion exit option on the upper threshold π̄ is best seen in the small-σ limit, in which

π̄ = 1 − σ² [2(h_A − h − kρ)] / [k(h − ℓ)²] + o(σ²).  (13)
We also note that the upper threshold is 1 − σ² [2(h_A − h − kρ)] / [(k − ℓ_A/ρ)(h − ℓ)²] + o(σ²) if the post-expansion exit option is absent.² In contrast, the lower threshold is not affected by the post-expansion exit option up to o(σ²). Hence, the following proposition obtains.
Proposition 4. In the small-σ limit, the post-expansion exit option causes both π̄ and the expected time-to-decision to decrease.
With a post-expansion exit option, the expansion option is more attractive to the decision-maker, so it makes sense that the threshold for expansion is lower. The expected time-to-decision is also smaller because the decision-maker makes an earlier decision to expand the project due to a higher value of the expansion option. It is also understandable that the effect of Proposition 3 (smaller value of π̲) is a secondary effect on the expected time-to-decision: the increased value of the expansion option does not make the exit option much less attractive to the decision-maker when the posterior probability P_t is low (when the prospect of exercising the expansion option is low). Numerical examples are displayed in Figures 1 and 2, which show that the extra exit option indeed decreases π̲, π̄, and the expected time-to-decision.
In the large-σ limit, we obtain the following result.

Proposition 5. In the large-σ limit, the post-expansion exit option does not have any effect on π̲, π̄, or the expected time-to-decision up to O(σⁿ) for any finite positive n.
The above proposition merely states that there is essentially no effect of the post-expansion exit option in the large-σ limit. This is consistent with the intuition that the option value from the learning opportunity is minimal if σ is very large due to the slow arrival of information, and hence the effect of the post-expansion exit option is also minimal.
Lastly, we are in a position to inspect the comparative statics of the expected time-to-decision with respect to σ. The next theorem follows immediately from Lemma 1 and Proposition 5.

Theorem 4 (Expected Time-to-Decision). Theorem 2(i), (ii), and (iii) hold for the model with a post-expansion exit option.

Figures 1 and 2 also illustrate a numerical example of Theorem 4. Therefore, we have established the robustness of our result to the additional exit option.
5. Reducing the Volatility
In some business situations, the firm has an external source of information regarding the profitability of the project that effectively reduces σ. Therefore, the comparative statics with respect to σ can provide a useful prediction of how the additional source of information impacts the optimal policy and the duration τ* of the pilot phase. Let X^r denote the observation process that aggregates the cumulative profit stream and the external source of information, let σ_r denote its effective volatility, and let {F^r_t} denote its natural filtration. The posterior probability P^r_t ≡ P[μ = h | F^r_t] is then given by

P^r_t = π exp((h/σ_r²)X^r_t − (1/2)(h/σ_r)²t) · [π exp((h/σ_r²)X^r_t − (1/2)(h/σ_r)²t) + (1 − π) exp((ℓ/σ_r²)X^r_t − (1/2)(ℓ/σ_r)²t)]⁻¹,  (14)
which is clearly constructed from the observable process X^r. As before, the posterior process satisfies the stochastic differential equation

dP^r_t = ((h − ℓ)/σ_r) P^r_t(1 − P^r_t) dW̃_t,

where W̃_t is another one-dimensional standard Brownian motion that is observable. Last, the objective function for this problem is

V^r_τ(π) = E_π[∫₀^τ e^{−ρt} dX_t + e^{−ρτ} r(P^r_τ)] = (1/ρ)[πh + (1 − π)ℓ] + E_π[e^{−ρτ} g(P^r_τ)],  (15)

where π is the initial prior P^r₀ ≡ P[μ = h | F^r₀] and τ is a stopping time for the filtration {F^r_t}. The objective function has the same form as Equation (4), except that the posterior process P^r_t in Equation (15) is constructed from X^r with volatility σ_r via Equation (14). Therefore, the problem with external information reduces to that of §3 with new volatility σ_r such that σ_r < min{σ, σ_e}, where σ_e is the volatility of the external information process. The only effect of the additional source of information is to reduce the volatility of the cumulative profit.
6. Conclusions
Prior to expanding a new project, it behooves the firm to learn more about the project's profitability. Firms
undertaking a pilot project can learn about profitability and subsequently make expansion and exit decisions. In this paper, we studied the expansion and exit problem under incomplete information. Our results show that there is a nonmonotonic relationship between the expected time-to-decision and the size of the continuation region for an irreversible decision: as the volatility of the cumulative profit increases (so the rate of information arrival decreases), the optimal continuation region shrinks, but the time-to-decision does not monotonically decrease.
Our paper focused on extracting useful insights from a simple and tractable model. Hence, we have not addressed practical applications of our findings. To model realistic business expansion decisions, a number of extensions need to be undertaken. For example, our model assumes that the noise in the cumulative profit is a Wiener process with constant volatility. Relaxing this assumption would sacrifice the analytical tractability. More significantly, our model assumes that the number of possible states of the project is two. For practical applications, we need to provide an analytical or numerical solution to problems with a larger number of states.
As shown by Decamps et al. (2005), our model can generalize to a problem in which the number of possible states of the project is larger than two but finite. (See also Olsen and Stensland 1992 and Hu and Oksendal 1998 for multidimensional stopping time problems.) In such a multi-state extension, it is straightforward to show that much of §3 generalizes to multi-state analogs. In particular, there is a multi-state analog of Theorem 2 regarding the asymptotic behavior of the expected time-to-decision. However, analytical solutions such as in Equation (8) are not available for a multi-state model. Hence, for practical purposes, we need to establish numerical procedures to obtain the optimal policy and the solution. The difficulty is that a numerical procedure would require continuous differentiability (smooth-pasting condition) of the optimal solution (Muthuraman and Kumar 2008), but the multi-state model does not necessarily result in continuously differentiable solutions.³ An even more difficult problem is a model with a continuous distribution of states of the project. Finding appropriate numerical solution procedures for multi-state extensions would constitute a major research program.
7. Electronic Companion
An electronic companion to this paper is available as part of the online version that can be found at http://or.journal.informs.org/.
Endnotes
1. A function f(σ) is said to be O(σ) if there is a positive constant M such that f(σ) ≤ Mσ for all σ sufficiently large.
2. A function f(σ) is said to be o(σ²) if f(σ)/σ² → 0 in the limit as σ → 0.
3. Continuous differentiability over the boundary points requires regularity of boundary points of the continuation region, but the regularity is sometimes violated for a degenerate diffusive process such as a multi-dimensional generalization of the posterior process P_t governed by the dynamics of one-dimensional Brownian motion B̃_t. The authors thank Renming Song for pointing this out.
Acknowledgments
The authors thank three anonymous referees and the associate editor for their helpful suggestions that considerably improved the manuscript.
References
Alvarez, L. H. R. 2001. Reward functionals, salvage values and optimal stopping. Math. Methods Oper. Res. 54(2) 315–337.
Alvarez, L. H. R. 2003. On the properties of r-excessive mappings for a class of diffusions. Ann. Appl. Probability 13(4) 1517–1533.
Alvarez, L. H. R., R. Stenbacka. 2004. Optimal risk adoption: A real options approach. Econom. Theory 23(1) 123–147.
Arrow, K. J. 1962. The economic implications of learning by doing. Rev. Econom. Stud. 29(3) 155–173.
Bergemann, D., J. Välimäki. 2000. Experimentation in markets. Rev. Econom. Stud. 67(2) 213–234.
Bernardo, A. E., B. Chowdhry. 2002. Resources, real options, and corporate strategy. J. Financial Econom. 63(2) 211–234.
Bolton, P., C. Harris. 1999. Strategic experimentation. Econometrica 67(2) 349–374.
Dayanik, S., I. Karatzas. 2003. On the optimal stopping problem for one-dimensional diffusions. Stochastic Processes Their Appl. 107(2) 173–212.
Decamps, J.-P., T. Mariotti, S. Villeneuve. 2005. Investment timing under incomplete information. Math. Oper. Res. 30(2) 472–500.
Decamps, J.-P., T. Mariotti, S. Villeneuve. 2006. Irreversible investment in alternative projects. Econom. Theory 28(2) 425–448.
Dixit, A. 1989. Entry and exit decisions under uncertainty. J. Political Econom. 97(3) 620–638.
Dixit, A. 1992. Investment and hysteresis. J. Econom. Perspectives 6(1) 107–132.
Ghemawat, P., H. Stander. 1998. Nucor at a crossroads. Harvard Bus. School Case 9-793-039. Harvard Business School, Boston.
Hu, Y., B. Oksendal. 1998. Optimal time to invest when the price processes are geometric Brownian motions. Finance and Stochastics 2(3) 295–310.
Keller, G., S. Rady. 1999. Optimal experimentation in a changing environment. Rev. Econom. Stud. 66(3) 475–507.
Kwon, H. D. 2010. Invest or exit? Optimal decisions in the face of a declining profit stream. Oper. Res. 58(3) 638–649.
Kwon, H. D., S. A. Lippman, C. S. Tang. 2011. Optimal markdown pricing strategy with demand learning. Probab. Engrg. Inform. Sci. Forthcoming.
Lai, T. L. 2001. Sequential analysis: Some classical problems and new challenges. Statistica Sinica 11(2) 303–408.
Lippman, S. A., R. P. Rumelt. 1992. Demand uncertainty, capital specificity, and industry evolution. Ind. Corp. Change 1(1) 235–262.
Liptser, R. S., A. N. Shiryayev. 1974. Statistics of Random Processes, Vol. 1. Springer-Verlag, New York.
McCardle, K. F. 1985. Information acquisition and the adoption of new technology. Management Sci. 31(11) 1372–1389.
Moscarini, G., L. Smith. 2001. The optimal level of experimentation. Econometrica 69(6) 1629–1644.
Muthuraman, K., S. Kumar. 2008. Solving free-boundary problems with applications in finance. Foundations and Trends in Stochastic Systems 1(4) 259–341.
Oksendal, B. 2003. Stochastic Differential Equations: An Introduction with Applications, 6th ed. Springer, Berlin.
Olsen, T. E., G. Stensland. 1992. On optimal timing of investment when cost components are additive and follow geometric diffusions. J. Econom. Dynam. Control 16(1) 39–51.
Peskir, G., A. Shiryaev. 2006. Optimal Stopping and Free-Boundary Problems. Birkhäuser, Basel, Switzerland.
Poor, H. V., O. Hadjiliadis. 2008. Quickest Detection. Cambridge University Press, Cambridge, UK.
Ryan, R., S. A. Lippman. 2003. Optimal exit from a project with noisy returns. Probab. Engrg. Inform. Sci. 17(4) 435–458.
Shiryaev, A. 1978. Optimal Stopping Rules. Springer-Verlag, New York.
Shiryaev, A. N. 1967. Two problems of sequential analysis. Cybernetics Systems Anal. 3(2) 63–69.
Ulu, C., J. E. Smith. 2009. Uncertainty, information acquisition, and technology adoption. Oper. Res. 57(3) 740–752.
Wald, A. 1945. Sequential tests of statistical hypotheses. Ann. Math. Statist. 16(2) 117–186.
Wald, A. 1973. Sequential Analysis. Dover Publications, Mineola, NY.
Wang, H. 2005. A sequential entry problem with forced exits. Math. Oper. Res. 30(2) 501–520.