
STOCHASTIC MODELLING:

Production line problems

L.C.M. KALLENBERG and F.M. SPIEKSMA

UNIVERSITEIT LEIDEN
Contents

1 INTRODUCTION

2 PRODUCTION LINE and MARKOV CHAINS
2.1 Introduction and examples
2.2 Classification of states
2.2.1 Properties based on the associated directed graph
2.2.2 Absorption probabilities
2.2.3 Absorption time and cost
2.3 Problems

Chapter 1

INTRODUCTION

Mathematics is everywhere!

In this short chapter we briefly discuss some typical examples based on practical problems. During
the lectures we will develop tools to model the problems discussed and to analyse the performance
of the underlying systems. Each chapter will contain many examples as well, and problems to be solved.

Example 1.0.1 (Triage-Treat-and-Release) At the Lutheran Medical Center (LMC) in Brooklyn,
New York, a new model of care known as the “Triage-Treat-and-Release” (TTR) program has
recently been implemented. Traditionally, nurses in the emergency department are solely responsible
for triaging patients while doctors are solely responsible for treating patients. In contrast to this
traditional set-up, the TTR program makes physician assistants or nurse practitioners responsible
for both phases of service for low-acuity patients. Providers in this setting must decide between
prioritising triage versus treatment in order to balance initial delays for care with the need to discharge
patients in a timely fashion.

Example 1.0.2 (Polling models: Efficient operation of N machines by one operative)


An operative is responsible for the operation and maintenance of N different machines. He services
the machines in a predetermined order. He repairs any machine found stopped, and inspects and/or
services any machine that is still running.
The manager has to provide the operative with the servicing order that yields maximum effective
production rate.

Example 1.0.3 (Testing a new medicine) A new medicine, say medicine A, has been developed
for a given disease. The traditional medicine used is medicine B, whose cure probability is
known. The cure probability of the new medicine A is not known.


A standard way to estimate it would be to administer it to many patients and then take the fraction
of cured patients as the estimate for the cure probability. However, if this fraction is low compared
to the cure probability of B, many patients suffer unnecessary damage.
Thus, a safer method should be used. One such method is to determine whether or not to administer
medicine A (as opposed to B) to each subsequent patient, based on the known results so far. The
objective is to maximise the number of cured patients.

Example 1.0.4 (Call center) Suppose that you are an agent, working at a call center. Your target
(imposed by the manager of the call center) is to start servicing at least 80% of the customers handled
by you within at most 20 seconds. What will you do to comply with your target, if you have all the
information on the number of customers waiting to be served, and how long they have been waiting
already? How should the manager set the targets to optimise customer satisfaction?

Example 1.0.5 (Quality control in a production process) In a television production line,
each product has to undergo a number of operations, such as soldering the electronics, installing
software, putting parts together, etc. After different stages of the production, the product is tested
in order to ensure that it meets the quality standards. If the product fails the test, there are
several options: either reject it altogether, or undo the last operation and redo it.
The question is: how well is the production line functioning? What is the fraction of rejected
products? What is the cost involved in rejecting products? Should the manager of the production
line invest money to improve parts of the production line, and, if so, which parts?

Example 1.0.6 (Customer service in a supermarket) The management of a supermarket
wants to renovate the shop. They want to know what kind of counters they should install in order
to have maximum customer satisfaction (due to shorter waiting lines), while keeping the operating
cost as low as possible. Nowadays there are many options: manned counters, manned counters for
customers with, say, fewer than 5 products, automated counters, or hand scanners used by the
customers themselves. Management has to take into account that (very) elderly people might need the
possibility of paying for their products at a manned counter. What should management do?

Example 1.0.7 (Hiring a postdoc) The Leiden department of mathematics has obtained money
to hire a Chinese postdoc financed by the Chinese government. They get an enormous number
of applicants. After selecting the applicants that seem promising, they are still left with over 100
candidates. Since the work of the candidates is generally either in Chinese or, if in English, has
many authors, and most of the supervisors are not internationally known, the committee faces the problem of
how to select the best candidate. The best thing would be to invite the candidates to Holland.
However, this is too costly.
Thus, the committee decides to randomly invite the selected applicants one by one: if the invited
applicant is approved, he will be hired straightaway, otherwise he is rejected and the next applicant
is invited to come over.
When should the committee approve an applicant in order to select the best one of all with maximum
probability?

Example 1.0.8 (Google PageRank) A query through an internet browser for cheap airplane
tickets will yield a list of sites and companies offering cheap tickets. The internet browser displays
the results in a certain order. Google has developed an algorithm to determine the display order. In
view of its success, customer satisfaction with Google is high. What is the secret of Google’s
success?

The basis of the analysis is the so-called web graph: each webpage corresponds to a node.
There is an arrow from node i to node j when there is a link on web page i to web page j. The
idea is as follows: let a random surfer surf infinitely long over the web graph. The fraction of visits
to web page i is called the Google PageRank PR(i) of page i (up to normalisation). If web pages i
and j are both relevant for your query, then web page i is displayed above web page j when
PR(i) > PR(j).
Nowadays, there are 4.76 billion webpages: how can one calculate the Google PageRank? What
does Google do with dead ends and disconnected components of the web graph?
Chapter 2

PRODUCTION LINE and MARKOV CHAINS

2.1 Introduction and examples


Problem 2.1 Part of the production process of leather bags is as follows. After cutting and sewing
the leather, the bag has to be painted in the desired colour. Then: 80% of the bags are painted
immediately; 20% of the bags get an extra treatment first, since the quality of the leather is too
low to guarantee sufficient adhesion of the paint. During this extra treatment, 10% of the bags are
disapproved; during the painting, 10% of the bags are also disapproved, 70% are approved, and the
remaining 20% need either to be repainted (25%) or even to get an extra treatment (75%).

a) Model this part of the production process as a Markov chain. Specify the state space and
transition matrix. Determine the closed classes and the transient states.

b) Suppose that cutting and sewing takes 6 minutes, and both the extra treatment and painting take 2
minutes. What is the probability that the bag is ready and approved within 10 minutes?

c) Calculate the probability that the bag is eventually disapproved.

d) Calculate the expected time that the bag spends in this part of the production process.

e) Assume that the costs of cutting and sewing, of the extra treatment, and of painting are € 10,
€ 2 and € 7 respectively. What should the minimum price of a bag be in order that the factory
does not lose any money?

f ) What fraction of the disapproved bags has passed through the painting phase at least twice?

Problem 2.2 Each day Garoeda has a flight from Amsterdam to Jakarta. A passenger can drop off
his luggage at the check-in counter, where the luggage is put on a transportation belt. The luggage
on the transportation belt will then be transported to a hall, where it is distributed (by an automatic
distributor) over various luggage cars that take it to the right plane.


The passenger can then pick it up at the luggage belts in Jakarta. However, when he has a transfer
flight to Yogyakarta, the luggage is (fortunately) automatically labelled through. In Jakarta, the
luggage will be unloaded from the airplane. The luggage with a destination other than Jakarta, is
put on a transportation belt, and the same procedure as in Amsterdam is followed.
Sometimes, luggage is delayed (this means that it arrives at its destination later than its owner):
luggage can fall off the belt and be found only after the plane to its next destination is already in
the process of leaving. The plane can be delayed, and the luggage cannot be transferred in time.
By a detection error of the automatic luggage distributor, the luggage can also be put on the wrong
luggage car and sent to the wrong destination.
It is useful to measure the quality of the airport’s transportation service. Garoeda would like to
know:

1. the average luggage delay per origin-destination pair of airports.

2. the fraction of delayed luggage that is caused by a delay of the plane by which it is carried.

For the flight from Amsterdam to Yogyakarta the data are as follows. In Amsterdam, 1% of the
luggage stays in Amsterdam accidentally, and is sent to Jakarta the next day on the next Garoeda
flight, with probability 1; another 1% of the luggage is sent to a wrong destination: it comes back to
Amsterdam in 2 days on average, and it is sent on the first possible Garoeda flight to Jakarta (with
probability 1).
5% of the flights to Jakarta suffer a delay large enough that both the passenger and his luggage miss
the transfer to Yogyakarta; a further 2% suffer a delay that is short enough for the passenger
not to miss his plane, but the luggage is delayed and is put on the plane to Yogyakarta departing
2 hours later. Also in Jakarta, 1% of the luggage stays there accidentally and is sent to Yogyakarta
on the flight departing 4 hours later. A further 1% is sent to the wrong destination and suffers a
delay of 8 hours before being put on a plane to Yogyakarta (with probability 1). Finally, 10% of the
flights from Jakarta to Yogyakarta have a delay of 1.5 hours on average.

a) Calculate the probability that luggage gets delayed (the fraction of luggage that gets delayed).

b) Calculate the average delay of luggage (the delay is the difference between the arrival times of the
passenger and of his luggage).

c) What is the fraction of the delayed luggage that is delayed due to a plane delay on the route
from Amsterdam to Jakarta? What is the fraction that is delayed due to a plane delay on the
flight from Jakarta to Yogyakarta?

In order to answer these questions, we need the definition of a stochastic process.


A stochastic process Xt, t ∈ T, is a collection of random variables. In other words: Xt is a random
variable for each t ∈ T. The set T is called the index set of the process. Very often t denotes time,
and then we call Xt the state of the process at time t. An example is the number of customers
queueing up for the security check at an airport, at time t. The state space S of the process is the
collection of realisations of Xt, t ∈ T. If T is countable, then the process is called a discrete process;
if T is an interval, say T = [0, ∞), then the process is a continuous process. We restrict to discrete
stochastic processes with a finite state space S.
We need some structure to be able to infer any information on the process behaviour. The structure
that we will use is the following.
Let {Xt , t = 0, 1, . . . } be a collection of random variables with S a finite or countable state space.
The process {Xt , t = 0, 1, . . . } is called a Markov chain 1 if for i0 , i1 , . . . , it−1 , i, j ∈ S and t = 0, 1, . . .
it holds that

P{Xt+1 = j | X0 = i0 , X1 = i1 , . . . , Xt−1 = it−1 , Xt = i} = P{Xt+1 = j | Xt = i}. (2.1.1)

In words, a stochastic process is a Markov chain, if past and future are independent, given the
present.

The Markov chain is called stationary or homogeneous if the probabilities P{Xt+1 = j | Xt = i} are
independent of t, for any i, j ∈ S. In this case we can represent these probabilities in matrix form:
P is the matrix with elements

pij = P{Xt+1 = j | Xt = i}.

The matrix P is called the transition matrix of the Markov chain, and the elements pij are called
transition probabilities. Notice that pij ∈ [0, 1] and Σ_j pij = 1.
The probability law of a homogeneous Markov chain is completely determined by the distribution of
X0 and the transition matrix P. In this entire chapter we will restrict the analysis to homogeneous
Markov chains.
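To make these definitions concrete, here is a minimal sketch in Python (assuming NumPy is available); the three-state transition matrix is made up purely for illustration, and each of its rows must sum to 1.

```python
import numpy as np

# A made-up transition matrix on S = {0, 1, 2}; every row sums to 1.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.0, 0.6]])

def simulate(P, x0, T, seed=0):
    """Simulate T steps of a homogeneous Markov chain with transition matrix P, started in x0."""
    rng = np.random.default_rng(seed)
    x, path = x0, [x0]
    for _ in range(T):
        x = rng.choice(len(P), p=P[x])   # next state is drawn from row x of P
        path.append(int(x))
    return path

print(simulate(P, x0=0, T=10))
```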

Example 2.1.1 (Inventory problem) One of the products sold by a small supermarket is white
chocolate. The weekly demand for white chocolate is represented by the random variables Yt , t =
1, . . ., where Yt is the demand in week t. The probability distribution is given by dk = P{Yt = k},
k = 0, 1, . . ..
Suppose that the shopkeeper uses the so-called (s, S)-ordering policy: if the inventory at the start
of the week is at least s, then he places no order for white chocolate; if the inventory at the start
of the week is i < s, then he orders S − i, and so the inventory level is S after the order has been
delivered.
Assume that delivery of orders is instantaneous. Further, if a customer asks for white chocolate
and there is none left in stock, then that customer has bad luck and can try to buy it next week
(no back-orders).
Denote by Xt the inventory at the beginning of week t, before ordering. Then

Xt+1 = max{Xt − Yt+1, 0}   if Xt ≥ s;
Xt+1 = max{S − Yt+1, 0}    if Xt < s.

1
The Russian scientist Andrey Markov (1856 – 1922) introduced this notion in 1906. See e.g.
http://en.wikipedia.org/wiki/Andrey_Markov.

Then, {Xt } is a Markov chain on Z+ with transition probabilities


pij = Σ_{k≥i} dk     if i ≥ s, j = 0;
pij = d_{i−j}        if i ≥ s, 1 ≤ j ≤ i;
pij = Σ_{k≥S} dk     if i < s, j = 0;
pij = d_{S−j}        if i < s, 1 ≤ j ≤ S;
pij = 0              otherwise.
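These transition probabilities can be assembled mechanically. The following sketch (Python/NumPy; the function name, the truncation of the state space to {0, . . . , S} — valid once the inventory is at most S — and the demand numbers are illustrative assumptions, not part of the text) builds the matrix for a given (s, S) policy and demand distribution d.

```python
import numpy as np

def inventory_transition_matrix(s, S, d):
    """Transition matrix of the (s, S) inventory chain on the states 0, ..., S.
    d[k] = P{Y = k}; demands beyond len(d) - 1 are assumed to have probability 0."""
    d = np.asarray(d, dtype=float)
    P = np.zeros((S + 1, S + 1))
    for i in range(S + 1):
        level = S if i < s else i          # stock available after (possible) ordering
        for k, dk in enumerate(d):
            j = max(level - k, 0)          # next week's inventory
            P[i, j] += dk
    return P

# Illustrative numbers: s = 2, S = 4, demand 0..3 with the given probabilities.
P = inventory_transition_matrix(2, 4, [0.3, 0.4, 0.2, 0.1])
print(P.sum(axis=1))  # each row should sum to 1
```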

The (simultaneous) distribution of a Markov chain is specified by:

P{X0 = i0 , X1 = i1 , . . . , Xt = it } = P{X0 = i0 } · pi0 i1 pi1 i2 · · · pit−1 it , t = 0, 1, . . . .

The distribution is therefore completely specified by the transition probabilities and the initial
distribution (the distribution of X0). Further, we denote the so-called t-step transition probabilities
P{Xt = j | X0 = i} by p^(t)_ij, t = 0, 1, . . .. N.B. p^(0)_ij = δij, the Kronecker delta.

Theorem 2.1.1 (Chapman-Kolmogorov Equations) For s, t ∈ N with s ≤ t, it holds that

p^(t)_ij = Σ_k p^(s)_ik · p^(t−s)_kj,   i, j ∈ S.

Proof. Let s, t ∈ N with s ≤ t. For i, j ∈ S, we have

p^(t)_ij = P{Xt = j | X0 = i} = Σ_k P{Xt = j, Xs = k | X0 = i}
         = Σ_k P{Xt = j | Xs = k, X0 = i} · P{Xs = k | X0 = i}
         = Σ_k P{Xt = j | Xs = k} · P{Xs = k | X0 = i}
         = Σ_k P{Xt−s = j | X0 = k} · P{Xs = k | X0 = i} = Σ_k p^(s)_ik · p^(t−s)_kj.  QED

Corollary 2.1.1 P^(t) = P^t, for t = 0, 1, . . .. In other words, the matrix P^(t) is the t-th power of the
transition matrix P.

Proof. By induction, check this yourself. QED
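Numerically, the corollary means that all t-step transition probabilities can be obtained with a single matrix power. A small sketch (Python/NumPy, with a made-up two-state matrix):

```python
import numpy as np

# Hypothetical two-state transition matrix.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

P3 = np.linalg.matrix_power(P, 3)   # P^(3) = P^3 by Corollary 2.1.1
print(P3)
print(P @ P @ P)                    # the same result, multiplying step by step
```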

Question 2.1 Consider a finite Markov chain with S = {0, 1, 2} and with transition matrix

        ( 0.7  0.2  0.1 )
    P = ( 0.0  0.6  0.4 ).
        ( 0.5  0.0  0.5 )

Determine the conditional probabilities P{X1 = 1, X2 = 1 | X0 = 0} and P{X2 = 1, X3 = 1 | X0 = 0}.



Question 2.2 Consider an inventory problem, where the inventory level i is observed, and then the
following happens:

1) if i ≤ 1, then 4 − i units of product are ordered and immediately delivered; if i ≥ 2, then no
order is placed.

2) the demand per period is 0,1, or 2, each with probability 1/3.

Let Xt , t = 0, 1, . . . be the inventory at the start of period t + 1 (before delivery of an order). Show
that {Xt , t = 0, 1, . . . } is a Markov chain, and derive the transition matrix.

2.2 Classification of states


2.2.1 Properties based on the associated directed graph

A Markov chain can be represented by a directed graph. The vertices of this graph correspond to
states, and there is an arrow from state i to state j when pij > 0. Thus, p^(t)_ij is equal to the sum,
over all paths from i to j with t arrows, of the products of the probabilities along these paths.
As a consequence, there is a positive probability to reach state j from state i in precisely t transitions
iff there is a path consisting of t arrows from i to j. Therefore, all properties related to reachability are
essentially properties of the associated graph.
State j is called reachable from state i (notation: i → j), if there exists t ∈ N0 with p^(t)_ij > 0. This
property is transitive. Indeed, suppose that p^(n)_ij > 0 and p^(m)_jk > 0; then by virtue of the
Chapman-Kolmogorov Theorem 2.1.1

p^(n+m)_ik = Σ_l p^(n)_il p^(m)_lk ≥ p^(n)_ij p^(m)_jk > 0.

We say that states i and j communicate (notation: i ↔ j), if j is reachable from i, and, vice versa,
i is reachable from j.

Lemma 2.2.1 The ‘communicating’ property is an equivalence relation.

Proof. It is easily checked that the communicating property is reflexive, symmetric and transitive.
QED

Corollary 2.2.1 S is the union of equivalence classes, called the classes of the Markov chain.

Graphically, reachability of state j from state i corresponds to the existence of a path from i to j in
the associated graph. From the definition of the communicating property, it can be deduced that
equivalence classes correspond to the strongly connected components in the associated graph.
We shall see that the states within a class have certain properties in common. We will first provide
some definitions. If state i communicates with every state that can be reached from i, then state i
is said to be essential, otherwise it is called inessential.
Thus, from an essential state no inessential state can be reached. Therefore, the properties ‘essential’
and ‘inessential’ are class properties. We have the following result.

Lemma 2.2.2 A class is essential iff it is closed.

Proof. Suppose C is an essential class. Let i ∈ C and j ∈ S with pij > 0. Then i → j, and therefore
j → i (by essentiality of i). Hence, i and j communicate, and so j ∈ C.
Next, suppose that C is closed. Let i ∈ C. For any j ∈ S, with i → j, it holds that j ∈ C. Since C
is a communicating class, it follows that j → i. Consequently i ↔ j. This implies that i is essential.
QED
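The communicating classes, and whether they are closed, can be computed directly from the associated graph. The sketch below (Python/NumPy; the helper name and the example matrix are hypothetical) builds the reachability relation by repeatedly squaring the 0/1 matrix of one-step arrows (with self-loops added), intersects it with its transpose to obtain the communication relation, and then checks for each class whether any transition leaves it.

```python
import numpy as np

def communicating_classes(P):
    """Return the communicating classes of a finite chain, each with a flag 'closed'."""
    n = len(P)
    reach = (P > 0) | np.eye(n, dtype=bool)          # i -> j in at most one step (self-loops added)
    for _ in range(n):                               # transitive closure by repeated squaring
        reach = (reach.astype(int) @ reach.astype(int)) > 0
    comm = reach & reach.T                           # i <-> j iff both i -> j and j -> i
    seen, classes = set(), []
    for i in range(n):
        if i in seen:
            continue
        cls = [j for j in range(n) if comm[i, j]]
        seen.update(cls)
        closed = all(P[a, b] == 0 for a in cls for b in range(n) if b not in cls)
        classes.append((cls, closed))
    return classes

# Hypothetical chain: state 0 is inessential, {1, 2} is a closed (essential) class.
P = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.4, 0.6],
              [0.0, 0.7, 0.3]])
print(communicating_classes(P))   # [([0], False), ([1, 2], True)]
```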

Another graph-based property is the period of a state. This property plays a role in investigating the
limit behaviour of the t-step transition probabilities as t → ∞. The period d(i) of state i is defined
by

d(i) = g.c.d.{t ≥ 1 | p^(t)_ii > 0},

with d(i) = ∞ if p^(t)_ii = 0 for all t ≥ 1.

Theorem 2.2.3 If i ↔ j, then d(i) = d(j).

Proof. Because of the symmetry between states i and j, it is sufficient to show that d(j) divides d(i).
In other words, it is sufficient to show that d(j) divides t, for each t ≥ 1 with p^(t)_ii > 0.
Let t ≥ 1 with p^(t)_ii > 0 and let k, l ≥ 1 be such that p^(k)_ij > 0 and p^(l)_ji > 0 (k and l exist, since i ↔ j).
Since p^(2t)_ii ≥ p^(t)_ii · p^(t)_ii > 0, we get

p^(k+t+l)_jj ≥ p^(l)_ji p^(t)_ii p^(k)_ij > 0

as well as

p^(k+2t+l)_jj ≥ p^(l)_ji p^(2t)_ii p^(k)_ij > 0.

Therefore, d(j) divides both k + t + l and k + 2t + l, and thus d(j) divides t. QED

As a consequence of the theorem, all states within one class have the same period. We call this the
period of a class.
If the Markov chain consists of only one class, then all states communicate, and we call the Markov
chain irreducible. All states have the same period in this case, and so we may call this the period of
the Markov chain. If d(i) = 1, we call state i aperiodic, and, thus, its class is called aperiodic. If the
Markov chain is irreducible with period 1, the chain is called aperiodic.
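For a concrete chain, the period can also be estimated numerically by taking the gcd of the return times t with p^(t)_ii > 0 up to some horizon T. The sketch below (Python/NumPy; the function and the default horizon are heuristic choices of mine, not from the text) illustrates this on a deterministic 3-cycle, where every state has period 3.

```python
import math
from functools import reduce

import numpy as np

def period(P, i, T=None):
    """gcd of the return times t <= T with P^t[i, i] > 0.
    For a finite chain this equals d(i) once the horizon T is taken large enough."""
    n = len(P)
    T = 3 * n if T is None else T          # heuristic horizon
    times, Pt = [], np.eye(n)
    for t in range(1, T + 1):
        Pt = Pt @ P                        # Pt now equals P^t
        if Pt[i, i] > 0:
            times.append(t)
    return reduce(math.gcd, times) if times else math.inf

# A deterministic 3-cycle: every state has period 3.
C = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [1., 0., 0.]])
print(period(C, 0))   # 3
```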
The notions ‘essentiality’ and ‘periodicity’ essentially concern the question whether certain
probabilities are positive or zero; their precise values are not relevant. Next we will study questions
related to the value of certain probabilities.

2.2.2 Absorption probabilities


Define by f^(t)_ij the probability that the Markov chain enters state j for the first time at time t, given
a start in state i. Similarly, denote by fij the probability that state j is ever reached, given a start in
state i. Formally,

f^(t)_ij = P{Xt = j, Xs ≠ j for s = 1, 2, . . . , t − 1 | X0 = i};
fij = P{∪_{t=1}^∞ {Xt = j} | X0 = i} = Σ_{t=1}^∞ f^(t)_ij.

Clearly, fii is the probability that the Markov chain ever returns to state i, given a start in
state i.
In the problem of a production line, we are e.g. interested in the fraction of the products entering
the production process that is finally disapproved. To obtain this fraction, we will use a so-called
first step analysis.
The idea is as follows: in order to calculate the probability that state j is reached from state i, either
the Markov chain jumps to state j (from i) immediately, or it jumps to a state k ≠ j and has to
reach j from there. This gives rise to the following linear system of equations: let S′ = {i | i → j}. Then

fij = pij + Σ_{k≠j} pik fkj,   i ∈ S′.     (2.2.1)

The Markov property is used implicitly in setting up this system of equations, since the probability
to reach state j from a given state k does not depend on the behaviour of the process before reaching
k.
We can write this in a simpler way by putting P′ = {pik}, i, k ∈ S′. Then (2.2.1) can be written as
follows. Write f(j) for the column vector with i-th component fij and p(j) for the vector with i-th
component pij, i ∈ S′. Note that we restrict to the subset of states S′.

f(j) = p(j) + P′ f(j),        (2.2.2)

in other words, with I the identity matrix,

(I − P′) f(j) = p(j),

or

f(j) = (I − P′)^{−1} p(j).

The latter can be calculated by a matrix inversion and is useful when one needs to calculate other
performance measures. It also holds that (I − P′)^{−1} = Σ_{n=0}^∞ (P′)^n (by a geometric series argument).
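In practice one solves the linear system (I − P′) f(j) = p(j) directly rather than inverting the matrix. A minimal sketch (Python/NumPy; the function name, the indexing convention and the example chain with two operation states and two absorbing states are illustrative assumptions, not from the text):

```python
import numpy as np

def absorption_probabilities(P, target, transient):
    """Solve (I - P') f = p(target) for the probabilities of ever reaching `target`,
    where P' is P restricted to the states in `transient` (the target itself excluded)."""
    Pp = P[np.ix_(transient, transient)]        # P': transitions within the listed states
    p_j = P[transient, target]                  # one-step probabilities into the target state
    f = np.linalg.solve(np.eye(len(transient)) - Pp, p_j)
    return dict(zip(transient, f))

# Hypothetical production chain: states 0, 1 are operations, 2 = approved, 3 = rejected.
P = np.array([[0.0, 0.7, 0.2, 0.1],
              [0.3, 0.0, 0.5, 0.2],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])
print(absorption_probabilities(P, target=2, transient=[0, 1]))  # P(ever approved | start in 0 or 1)
```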

2.2.3 Absorption time and cost

For instance, we may want to assess the quality of the production system by the expected number of
operations a product has to undergo. Each operation in our production line corresponds to one ‘time
unit’.

First we define the following. Let



µij = Σ_{t=1}^∞ t f^(t)_ij,

which is the expected time to hit state j from state i. Note that, starting at j itself, the chain must
have left j before it can hit j again!

Remark 2.2.1 Notice that fij < 1 is possible. A ‘correct’ definition of the expected hitting time is
therefore

µij = Σ_{t=1}^∞ t f^(t)_ij + ∞ · (1 − fij).

Since the latter expression is infinitely large if fij < 1, it is only meaningful to consider the case that
fij = 1.

Let us assume that fij = 1 for all i ∈ S′. In order to calculate µij, i ∈ S′, one can use a first step
analysis as above. Then

µij = 1 + Σ_{k≠j} pik µkj,   i ∈ S′,

and so, with µ(j) the vector with i-th component µij, and e the vector with i-th component 1,
i ∈ S′, we get, similarly as before,

µ(j) = (I − P′)^{−1} e.

Unfortunately, in our production line there may be two absorbing states, the approved and the
rejected state. The probability that a product reaches the approved state is then not likely to be 1.
Nevertheless, one can calculate the average time a product spends in the production system before it
is absorbed into either the approved or the rejected state, by substituting everywhere the state j
by the set {approved, rejected} in the above equations.

Average cost Suppose that in state i a cost ci is incurred. Let γij be the average cost till reaching
state j, when starting in state i. Then we can replace the ‘1’ in the equation for µij by ci , and do
the calculations as before. This applies similarly if there is an absorbing set.
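The same linear system gives expected absorption times and, with the vector of ones replaced by a cost vector c, expected absorption costs. A sketch under the same assumptions as the previous one (Python/NumPy; all names and numbers are illustrative):

```python
import numpy as np

def expected_absorption(P, transient, cost=None):
    """Expected time (or cost, if `cost` is given) accumulated over the transient states
    before absorption, starting from each transient state: solves (I - P') mu = c."""
    Pp = P[np.ix_(transient, transient)]
    c = np.ones(len(transient)) if cost is None else np.asarray(cost, dtype=float)
    mu = np.linalg.solve(np.eye(len(transient)) - Pp, c)
    return dict(zip(transient, mu))

# The same hypothetical chain as before: 0, 1 transient; 2, 3 absorbing.
P = np.array([[0.0, 0.7, 0.2, 0.1],
              [0.3, 0.0, 0.5, 0.2],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])
print(expected_absorption(P, transient=[0, 1]))                 # expected number of steps
print(expected_absorption(P, transient=[0, 1], cost=[10, 7]))   # expected cost with c0 = 10, c1 = 7
```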

2.3 Problems

Problem 2.3 The following production process takes place in a semiconductor factory: semiconductors
are soldered on slices of silicon, approximately 10,000 per slice. The soldering process
requires three phases. The time required per phase is two days. After each phase an inspection
takes place: there is a positive probability that previous phases have to be redone. The corresponding
probabilities are given in the next matrix, where states I, II and III represent the phases 1, 2 and 3
respectively.

            I      II     III    ready
  I       ( 4/10   6/10   0      0     )
  II      ( 2/10   2/10   6/10   0     )
  III     ( 2/10   0      2/10   6/10  )
  ready   ( 0      0      0      1     )

a) The subsequent phases of the production of a slice form a Markov chain. Determine the classes of
the chain.

b) Calculate the total expected time that a slice spends in the production process.

We next assume that during each inspection a percentage of the slices is disapproved. Conditionally
on approval, the redo/no-redo probabilities are the same as before. The percentages are 10%, 10%
and 50% in phases I, II and III respectively.

c) Calculate the probability that a slice is eventually disapproved.

d) Calculate the fraction of disapproved slices that underwent phase I at least a second time.

e) Calculate the fraction of disapproved slices that underwent phase II at least a second time.

Problem 2.4 Consider a production line of bottles of beer at Heineken. There are three subsequent
operations:

• I: the bottles are filled and closed;

• II: a sticker is glued on the filled bottles;

• III: the bottles are packed into crates.

In each phase, a percentage of the bottles breaks: 10% in phase I, 16% in phase II and 5% in phase
III.
After phases I and II, non-broken bottles are inspected: after phase I, 10% of the bottles turn out
not to be completely full, and so they have to be refilled. After phase II, on 5% of the bottles the
sticker has not been glued properly, and so they have to get another sticker.
There is a cost associated with each phase: phases I and II each cost 2, and phase III costs 1. Each
broken bottle costs 5.

a) The subsequent phases of a bottle during the production process form a Markov chain. Specify the
transition matrix. Determine the classes.

b) Calculate the total expected cost per bottle.

c) Heineken thinks that it might be cost-effective to invest in better machines, for which the
percentage of broken bottles is halved in each phase. This requires an investment of 10^6. How
many bottles should Heineken produce in expectation to earn back the cost of this investment?
