
ME 5702 – PHYSICAL ASSET MANAGEMENT

REPAIRABLE SYSTEMS
MARKOV PROCESSES
Enrique López Droguett
Associate Professor
Mechanical Engineering Department
University of Chile
e-mail: elopezdroguett@ing.uchile.cl
REPAIRABLE SYSTEMS MODELING

• Different probabilistic models can be used

• The choice of an appropriate model depends on several
factors associated with an item, such as the fraction of time
it is under maintenance:
– Unavailability
– Downtime
REPAIRABLE SYSTEMS MODELING

• If the downtime (due to maintenance) is small compared to


the operational time:
– Stochastic Point Processes:
• Homogeneous Poisson Process
• Renewal Process
• Non-Homogeneous Poisson Process

• Otherwise, when downtimes (replacement time, repair time, etc.) are relevant:


– Alternating Renewal Process
– Markovian Process
– Semi-Markovian Process
NON-REPAIRABLE SYSTEMS

• Non-repairable item: experiences a single failure


– Decreasing hazard rate: burn-in
– Increasing hazard rate: wear out
[Figure: hazard rate h(t) versus time for two non-repairable items —
left: decreasing h(t) (burn-in); right: increasing h(t) (wear-out);
the single failure time is marked with an X in each plot.]

• Time to failure is characterized by a non-negative random


variable
REPAIRABLE SYSTEM

• It can go through several failures

• Failure history is characterized by the intensity function λ(t):

– Increasing λ(t): deterioration, increasing probability of failure

– Decreasing λ(t): improvement, decreasing probability of failure


REPAIRABLE SYSTEM
[Figure: intensity function λ(t) versus time for two repairable systems —
left: improvement (failures, marked with X, become sparser over time);
right: deterioration (failures become more frequent over time).]

• System under deterioration: frequency of failures shows an


increasing trend
• System under improvement: frequency of failures shows a
decreasing trend
REPAIRABLE SYSTEM

• Times to failure are described by a probabilistic mechanism


characterizing a sequence of random variables

                          Non-Repairable   Repairable
Improvement with time     h′(t) ≤ 0        λ′(t) ≤ 0
Deterioration with time   h′(t) ≥ 0        λ′(t) ≥ 0

SEQUENCE OF FAILURES AND REPAIRS

• System is put into operation at t = 0

• When it fails, it goes into maintenance:


– Sequence of operational times up to failures T1, T2, ...

• Upon failure, system is unavailable for a period of time:


– Sequence of repair times D1, D2, ...
SEQUENCE OF FAILURES AND REPAIRS

• System state is given by the following state variable:

X(t) = 1, if the system operates at t
X(t) = 0, if the system is down at t

[Figure: sample path of X(t), alternating between 1 (operating periods
T1, T2, T3, ...) and 0 (down periods D1, D2, D3, ...).]
AVAILABILITY

• Availability A(t) at time t of a repairable system is the


probability of being operational at t

A(t) = P[X(t) = 1]
• Note that if the system does not go through repair, A(t) = R(t)

• A(t) in general depends on the time to failure and time to


repair distributions
UNAVAILABILITY

• Unavailability U(t) at time t of a repairable system is the


probability of the system being in a failed state at time t:

U(t) = 1 − A(t) = P[X(t) = 0]
EXAMPLE – 1

• Times to failure (in days) correspond to 7 failures observed from t = 0
over a total time of 410 days

• The data comes from a single system and the repair times are
considered small when compared to the operational times

• It is assumed that the system is put back into operation


immediately after a failure
EXAMPLE – 1

Number of Failures N(t)   Time Tj   Time between events Xj
0                         0         0
1                         177       177
2                         242       65
3                         293       51
4                         336       43
5                         368       32
6                         395       27
7                         410       15
EXAMPLE – 1

[Figure: event timeline — failures at t = 177, 242, 293, 336, 368, 395
and 410 days, with interarrival times 177, 65, 51, 43, 32, 27, 15 days.]
• Sad System ☹
– Times between events decrease with time
– System is deteriorating
– Failures are more frequent with time

• Happy System ☺
– Times between events increase with time
– System is improving
– Failures are less frequent with time
EXAMPLE – 1
[Figure: cumulative number of failures N(t) versus time (0–450 days) for
Example 1 — the curve bends upward, indicating deterioration.]
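The interarrival-time analysis of Example 1 can be reproduced in a few lines; a minimal sketch using the failure times from the table:

```python
import numpy as np

# Failure times (days) from Example 1, measured from t = 0.
failure_times = np.array([177, 242, 293, 336, 368, 395, 410])

# Times between events X_j = T_j - T_{j-1} (with T_0 = 0).
interarrival = np.diff(np.concatenate(([0], failure_times)))
print(interarrival)  # [177  65  51  43  32  27  15]

# Strictly decreasing interarrival times suggest a deteriorating ("sad") system.
is_deteriorating = np.all(np.diff(interarrival) < 0)
print(is_deteriorating)  # True
```

A formal trend test (e.g., the Laplace test) would confirm the visual impression, but the monotone decrease already points at deterioration.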
MARKOV MODELS IN
RELIABILITY

[Markov diagram: three states k = 0, 1, 2 with transition rates
λ0→1, λ0→2, λ1→0, λ1→2 and λ2→1.]
STOCHASTIC PROCESSES
• A Stochastic Process {X(t), t ∈ T} is a collection of random
variables:
– For each t ∈ T, X(t) is a random variable

• If the index t is the time, then X(t) is the state of the process at
time t

• X (t) can be:


– Total number of customers in a store at t
– Total number of sales at t
– Number of failed components at t
STOCHASTIC PROCESSES

• Discrete Time Process:

– When T is a countable set

{Xn, n = 0, 1, 2, …}

• Continuous Time Process:

– When T is an interval in ℝ

{X(t), t ≥ 0}
STOCHASTIC PROCESSES

• It is a family of random variables that describes
the time evolution of a process
WHAT IS A MARKOVIAN ANALYSIS?

• Modeling technique applied to the analysis of the reliability and


availability of systems

• System reliability behavior represented via a transition diagram


among states:
– Discrete set of states that a system can be at a given point in time
– Transition rates among states

[Diagram: two states, 1 (operational) and 0 (failed), with a failure
transition from 1 to 0 and a repair transition from 0 to 1 at rate µ.]
WHAT IS A MARKOVIAN ANALYSIS?

• Markovian models represent chains of events, i.e.,
transitions of a system that correspond to a sequence of
failures and repairs
OBJECTIVES

• Estimate reliability metrics such as the probability that the


system is in a given state at time t

• Average time the system is in a state

• Expected number of transitions between states such as the


number of failures and repairs
WHEN TO USE

• Markov models can be used when simple parametric time to


failure models (e.g., Exponential, Weibull distributions) are not
able to describe the dynamic reliability characteristics of a
system

• Example:
– Redundant systems in standby
– Systems with multiple failure modes
– Systems with some degraded states
MARKOV MODELS AS REPRESENTATION OF THE
DYNAMIC BEHAVIOR
• Describe the dynamics of a system:
– By a set of states in which the system can be at a given time
– Through the likelihood and rate of transitions among states

• States can represent several system conditions:


– Operational, degraded, standby, failed, under maintenance

• Transitions due to different mechanisms and events:


– Inspections, failures, repairs, switching operations
MARKOV MODELS AS REPRESENTATION OF THE
DYNAMIC BEHAVIOR

• The system’s dynamics can be considered as a sequence


of states visited by the system in time

• A Markovian model is a representation of the state


sequences and their properties in time
EXAMPLE – 2: TWO COMPONENTS IN PARALLEL

• System states:

System States   Component 1 State   Component 2 State
0               Operational         Operational
1               Failed              Operational
2               Operational         Failed
3               Failed              Failed
EXAMPLE – 2: TWO COMPONENTS IN PARALLEL

• Markov Diagram
[Markov diagram: four states 0–3 with transition rates λ0→1, λ1→0,
λ0→2, λ2→0, λ1→3, λ3→1, λ2→3 and λ3→2.]
EXAMPLE – 2: TWO COMPONENTS IN PARALLEL

• Markov Diagram:

[Markov diagram: the same four states and rates as before, now also
including a direct transition λ0→3 from state 0 to state 3.]
MARKOVIAN MODEL ANALYSIS

• Find the probability that the system will be in a given state as a


function of time

• Example:
– At t = 0, the system is in the initial state 0
– Determine the probability that the system is in one of the possible
states 0, 1, 2 or 3 at time t

• At time t, the sum of state probabilities should be equal to 1:


– If a given state probability decreases by an amount x, then the
same amount x should be distributed among the other system
states
MARKOV PROCESSES

• X(t): system state variable at a given point in time

• System with r + 1 possible states: 0, 1, 2, …, r

• The event {X(t) = j} means that the system is in state j at time t

• The probability of this event is given by

Pj(t) = P[X(t) = j]
MARKOV PROCESSES

• Markovian Property:
– Given that the system is in state i at time t, X(t) = i, the future
states X(t + v) do not depend on the previous states X(u), u < t:

P[X(t + v) = j | X(t) = i, X(u) = x(u), u < t] = P[X(t + v) = j | X(t) = i]

• When the current state is known, the probability of any
future behavior of the process is not modified by additional
knowledge about its past behavior
MARKOV PROCESSES

• A continuous-time stochastic process X(t) that satisfies the
Markovian property is a continuous-time Markov chain
(Markov process)
TRANSITION PROBABILITIES

• The conditional probabilities

P[X(t + Δt) = j | X(t) = i]

are known as the transition probabilities

• When these probabilities do not depend on the time t, but only
on the time interval Δt, one has the so-called stationary
transition probabilities:

P[X(t + Δt) = j | X(t) = i] = Pij(Δt)
MEMORYLESS MARKOV PROCESS

• Markov process with stationary transition probabilities

• The likelihood of a state transition at time t does not depend on


how long the system has been in the current state, and does
not depend on how the system arrived at the current state

• Therefore:
– Degradation processes are not easily analyzed via Markov models
TRANSITION PROBABILITIES

• Properties:

Pij(t) ≥ 0;  t > 0

Σ_{j=0}^{r} Pij(t) = 1;  t > 0

• Chapman-Kolmogorov Equations:

Pij(t + Δt) = Σ_{k=0}^{r} Pik(t) · Pkj(Δt);  t, Δt > 0
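The Chapman-Kolmogorov property can be checked numerically with a matrix exponential, since for a Markov process with constant rates P(t) = exp(Qt); a minimal sketch (the two-state rates below are illustrative assumptions):

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical two-state system (0 = down, 1 = up) with failure rate
# lam and repair rate mu; Q[i, j] is the transition rate from i to j.
lam, mu = 0.01, 0.1
Q = np.array([[-mu,  mu],    # state 0 (down): repaired at rate mu
              [lam, -lam]])  # state 1 (up):   fails at rate lam

def P(t):
    """Matrix of transition probabilities P_ij(t) = P[X(t)=j | X(0)=i]."""
    return expm(Q * t)

# Chapman-Kolmogorov: P_ij(t + dt) = sum_k P_ik(t) * P_kj(dt)
t, dt = 50.0, 5.0
lhs = P(t + dt)
rhs = P(t) @ P(dt)
print(np.allclose(lhs, rhs))  # True
```

The check holds for any t and Δt because the matrix exponential satisfies exp(Q(t+Δt)) = exp(Qt)·exp(QΔt).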
CHAPMAN-KOLMOGOROV EQUATIONS

Pij(t + Δt) ≡ P[X(t + Δt) = j | X(0) = i]

           = Σ_k P[X(t + Δt) = j, X(t) = k | X(0) = i]

           = Σ_k P[X(t + Δt) = j | X(t) = k, X(0) = i] · P[X(t) = k | X(0) = i]

           = Σ_k P[X(t + Δt) = j | X(t) = k] · P[X(t) = k | X(0) = i]

           = Σ_k Pkj(Δt) · Pik(t)
TRANSITION RATES

• Transition rates λij(t) from state i to state j are defined as:

λij(t) = lim_{Δt→0} P[X(t + Δt) = j | X(t) = i] / Δt

• Assuming stationary transition probabilities:

λij = lim_{Δt→0} Pij(Δt) / Δt = (d/dt) Pij(t) |_{t=0} = P′ij(0)
STATE EQUATIONS

• From the Chapman-Kolmogorov equations:

Pij(t + Δt) = Σ_{k=0}^{r} Pik(t) · Pkj(Δt)
            = Σ_{k≠j} Pik(t) · Pkj(Δt) + Pij(t) · Pjj(Δt)

• As

Pjj(Δt) = 1 − Σ_{k≠j} Pjk(Δt)

• Thus,

Pij(t + Δt) = Σ_{k≠j} Pik(t) · Pkj(Δt) + Pij(t) · [1 − Σ_{k≠j} Pjk(Δt)]
STATE EQUATIONS

• Simplifying:

Pij(t + Δt) − Pij(t) = −Pij(t) · Σ_{k≠j} Pjk(Δt) + Σ_{k≠j} Pik(t) · Pkj(Δt)

• Dividing the equation by Δt and taking Δt → 0, the State Equations
are:

P′ij(t) = −Pij(t) · Σ_{k≠j} λjk + Σ_{k≠j} Pik(t) · λkj

• Assuming that at t = 0 the system is in state i:

P′j(t) = −Pj(t) · Σ_{k≠j} λjk + Σ_{k≠j} Pk(t) · λkj
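The state equations form a system of linear ODEs and can be integrated numerically; a minimal sketch with an assumed 3-state rate matrix (the rates are illustrative, not from the slides):

```python
import numpy as np
from scipy.integrate import solve_ivp

# State equations dP_j/dt = -P_j * sum_{k!=j} lam[j,k] + sum_{k!=j} P_k * lam[k,j]
# for a hypothetical 3-state system; lam[j, k] is the rate from j to k.
lam = np.array([[0.0, 0.02, 0.01],
                [0.1, 0.0,  0.0 ],
                [0.0, 0.0,  0.0 ]])  # state 2 absorbing (no repair)

def state_eqs(t, P):
    out_rates = lam.sum(axis=1)          # lam_jj = sum_{k != j} lam_jk
    return -P * out_rates + lam.T @ P    # arrivals minus departures

P0 = np.array([1.0, 0.0, 0.0])          # system starts in state 0
sol = solve_ivp(state_eqs, (0.0, 100.0), P0, dense_output=True, rtol=1e-8)

P_t = sol.sol(100.0)
print(P_t.sum())  # ~1.0: state probabilities always sum to one
```

Closed-form solutions exist for small systems (via Laplace transforms, as shown later), but numerical integration scales to any number of states.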
TRANSITION MATRIX

• Matrix of the transition rates:

    ⎡ −λ00  …   λr0 ⎤
T = ⎢   ⋮    ⋱   ⋮  ⎥
    ⎣  λ0r  …  −λrr ⎦

where

λjj = Σ_{k≠j} λjk
STATE EQUATIONS: MATRIX NOTATION

• As

λjj = Σ_{k≠j} λjk

each state equation can be written as:

(d/dt) P[X(t) = j] = Σ_k λkj · P[X(t) = k]

• In matrix notation:

     ⎡ P[X(t) = 0] ⎤   ⎡ −λ00  …   λr0 ⎤   ⎡ P[X(t) = 0] ⎤
d/dt ⎢      ⋮      ⎥ = ⎢   ⋮    ⋱   ⋮  ⎥ · ⎢      ⋮      ⎥
     ⎣ P[X(t) = r] ⎦   ⎣  λ0r  …  −λrr ⎦   ⎣ P[X(t) = r] ⎦
EXAMPLE – 3: SYSTEM WITH TWO COMPONENTS

• System states:

System States   Component 1 State   Component 2 State
1               Operational         Operational
2               Failed              Operational
3               Operational         Failed
4               Failed              Failed

• Components with constant failure rates λi

• No repairs
EXAMPLE – 3: SYSTEM WITH TWO COMPONENTS

• Markov Diagram:

[Markov diagram: state 1 → state 2 at rate λ1, state 1 → state 3 at
rate λ2, state 2 → state 4 at rate λ2, state 3 → state 4 at rate λ1;
no repair transitions.]
EXAMPLE – 3: SYSTEM WITH TWO COMPONENTS

• Objective:
– Find the system probability for each state as a function of time
– Pi (t) : probability that the system will be in state i at time t

• For a series system:

Rs(t) = P1(t)

• For a parallel configuration:

Rs(t) = P1(t) + P2(t) + P3(t)
EXAMPLE – 3: SYSTEM WITH TWO COMPONENTS

• For state 1:

• The probability that the system is in state 1 at t + Δt equals the
probability that the system is in state 1 at t, minus the probability
that the system is in state 1 at t times the transition probability
(λiΔt) to state 2 or to state 3:

P1(t + Δt) = P1(t) − (λ1 + λ2)Δt · P1(t)

• λ1Δt: conditional transition probability to state 2 in Δt given that
the system is in state 1 at time t (now)

• λ1Δt · P1(t): probability that the system is in state 1 at t and makes
a transition to state 2 in the time interval Δt
EXAMPLE – 3: SYSTEM WITH TWO COMPONENTS

• Solving these equations for state 1:

P′1(t) = −(λ1 + λ2) · P1(t)

• For state 2:

P′2(t) = λ1 · P1(t) − λ2 · P2(t)

• For state 3:

P′3(t) = λ2 · P1(t) − λ1 · P3(t)
EXAMPLE – 3: SYSTEM WITH TWO COMPONENTS

• State probabilities (with P1(0) = 1):

P1(t) = e^−(λ1+λ2)t
P2(t) = e^−λ2t − e^−(λ1+λ2)t
P3(t) = e^−λ1t − e^−(λ1+λ2)t

• System reliability:
– Series system:

Rs(t) = P1(t) = e^−(λ1+λ2)t

– Parallel system:

Rs(t) = P1(t) + P2(t) + P3(t) = e^−λ1t + e^−λ2t − e^−(λ1+λ2)t
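These closed-form results can be cross-checked against the familiar independent-component formulas; a minimal sketch (the rates λ1, λ2 and the mission time below are illustrative assumptions):

```python
import numpy as np

# Two components, constant failure rates lam1 and lam2, no repair.
lam1, lam2, t = 0.01, 0.02, 30.0  # assumed illustrative values

P1 = np.exp(-(lam1 + lam2) * t)                      # both operational
P2 = np.exp(-lam2 * t) - np.exp(-(lam1 + lam2) * t)  # component 1 failed
P3 = np.exp(-lam1 * t) - np.exp(-(lam1 + lam2) * t)  # component 2 failed
P4 = 1.0 - P1 - P2 - P3                              # both failed

R_series = P1
R_parallel = P1 + P2 + P3

# Cross-check against the independent-component expressions:
print(np.isclose(R_series, np.exp(-lam1 * t) * np.exp(-lam2 * t)))  # True
print(np.isclose(R_parallel,
                 1 - (1 - np.exp(-lam1 * t)) * (1 - np.exp(-lam2 * t))))  # True
```

Without repair the Markov solution must reduce to the product/complement rules for independent components, which is exactly what the two checks confirm.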
EXAMPLE – 4: STANDBY SYSTEM WITH SWITCHING
FAILURES*
• An active generator has a failure rate of 0.01/day. An older
standby generator has a failure rate of 0.001/day while in
standby and a failure rate of 0.1/day when on-line. There is a
10% chance of a switching failure. Determine:
– System reliability for a planned 30-day use
– System MTTF
– If there are no switching failures, find the system MTTF

*C. Ebeling, An Introduction to Reliability and Maintainability
Engineering, Second Edition, Waveland Press, 2009
EXAMPLE – 4: STANDBY SYSTEM WITH SWITCHING
FAILURES*

P1(t) = e^−(λ1+λ2s)t

P2(t) = [(1 − p)λ1 / (λ1 + λ2s − λ2)] · [e^−λ2t − e^−(λ1+λ2s)t]

P3(t) = e^−λ1t − e^−(λ1+λ2s)t

MTTF = 1/λ1 + [(1 − p)λ1 / (λ1 + λ2s − λ2)] · [1/λ2 − 1/(λ1 + λ2s)]
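Plugging in the numbers from the problem statement (λ1 = 0.01/day, λ2s = 0.001/day, λ2 = 0.1/day, p = 0.1) gives the requested quantities; a sketch that also cross-checks the MTTF formula by numerically integrating R(t):

```python
import numpy as np
from scipy.integrate import quad

# Example 4 data: active failure rate lam1, standby rates lam2s (in
# standby) and lam2 (on-line), switching-failure probability p.
lam1, lam2s, lam2, p = 0.01, 0.001, 0.1, 0.1

def R(t):
    """System reliability R(t) = P1(t) + P2(t) + P3(t)."""
    P1 = np.exp(-(lam1 + lam2s) * t)
    P2 = (1 - p) * lam1 / (lam1 + lam2s - lam2) * (
        np.exp(-lam2 * t) - np.exp(-(lam1 + lam2s) * t))
    P3 = np.exp(-lam1 * t) - np.exp(-(lam1 + lam2s) * t)
    return P1 + P2 + P3

print(round(R(30.0), 4))  # 30-day mission reliability

mttf = 1 / lam1 + (1 - p) * lam1 / (lam1 + lam2s - lam2) * (
    1 / lam2 - 1 / (lam1 + lam2s))
mttf_check, _ = quad(R, 0, np.inf)   # MTTF = integral of R(t) from 0 to inf
print(round(mttf, 2), round(mttf_check, 2))  # the two values should agree
```

Setting p = 0 in the same formula answers the third question (MTTF without switching failures).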
Maintenance Types

[Figure: classification of maintenance types.]

Source: Rausand, M., Hoyland, A., System Reliability Theory, 2nd ed., Wiley, New York, 2004.
Overview

[Concept map: Physical Asset Management should determine Maintenance
Strategies — corrective (implies waiting for failure), preventive
(e.g., inspection scheduling), and proactive (should decrease failure
rates) — using mathematical modeling. The models include decision
variables (for inspections: number of inspections, intervals between
inspections), restrictions, and objectives (e.g., minimize costs,
maximize availability, or reach a specified level of impact on failures).]
Various Mean Times

• MTTR: mean time to repair
• MDT: mean downtime (includes detection, diagnosis, logistics times, etc.)
• MUT: mean up time
• MTBF: mean time between consecutive failures

Source: Rausand, M., Hoyland, A., System Reliability Theory, 2nd ed., Wiley, New York, 2004.
Downtime

• Unplanned downtime:
– Caused by item failures and by internal or external random events
• Human errors
• Sabotage
• Flooding

• Planned downtime:
– Caused by planned preventive maintenance, planned operations (change of
tools), holidays
– Scheduled planned downtime (e.g., PM)
– Unscheduled planned downtime: initiated by CM, incipient failures, etc.
TRANSIENT SOLUTION

• Using the Laplace transform:

– The Laplace transform of Pj(t) is P*j(s)

– The Laplace transform of the derivative is

L[P′j(t)] = s P*j(s) − Pj(0)

• Laplace transform of the system state equations:

⎡ −λ00  …   λr0 ⎤   ⎡ P*0(s) ⎤   ⎡ s P*0(s) − P0(0) ⎤
⎢   ⋮    ⋱   ⋮  ⎥ · ⎢    ⋮   ⎥ = ⎢         ⋮        ⎥
⎣  λ0r  …  −λrr ⎦   ⎣ P*r(s) ⎦   ⎣ s P*r(s) − Pr(0) ⎦
TRANSIENT SOLUTION

• If the system is in state 0 at t = 0:

P0(0) = 1,  Pj(0) = 0 for j ≠ 0

• Summing up the state equations:

Σ_{i=0}^{r} P*i(s) = 1/s

• As 1/s is the Laplace transform of the unit function (for t ≥ 0):

Σ_{i=0}^{r} Pi(t) = 1
EXAMPLE – 5

• Consider a system with two possible states:


– State 1: system is operational
– State 0: system has failed
[Diagram: state 1 (operational) → state 0 (failed) at rate λ;
state 0 → state 1 at rate µ.]

• Memoryless property: times to transition are exponentially


distributed:
– Repair time is exponentially distributed with rate µ
– Mean Time to Repair (MTTR) is 1/µ
EXAMPLE – 5

• State equations:

⎡ −µ   λ ⎤   ⎡ P0(t) ⎤   ⎡ P′0(t) ⎤
⎢        ⎥ · ⎢       ⎥ = ⎢        ⎥
⎣  µ  −λ ⎦   ⎣ P1(t) ⎦   ⎣ P′1(t) ⎦

• Given that

P0(0) = 0,  P1(0) = 1

• The Laplace transform of the state equations is:

⎡ −µ   λ ⎤   ⎡ P*0(s) ⎤   ⎡ s P*0(s)     ⎤
⎢        ⎥ · ⎢        ⎥ = ⎢              ⎥
⎣  µ  −λ ⎦   ⎣ P*1(s) ⎦   ⎣ s P*1(s) − 1 ⎦
EXAMPLE – 5

• Solving:

−µ P*0(s) + λ P*1(s) = s P*0(s)
µ P*0(s) − λ P*1(s) = s P*1(s) − 1

• Thus

P*0(s) = 1/s − P*1(s)

• Then

P*1(s) = 1/(λ + µ + s) + (µ/s) · 1/(λ + µ + s)
EXAMPLE – 5

• Writing the previous equation in terms of partial fractions:

P*1(s) = [λ/(λ + µ)] · 1/(λ + µ + s) + [µ/(λ + µ)] · 1/s

• The inverse of the Laplace transform:

P1(t) = [λ/(λ + µ)] · e^−(λ+µ)t + µ/(λ + µ)

which is the Instantaneous Availability A(t)
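The closed-form availability is easy to evaluate; a minimal sketch (the numeric rates are illustrative assumptions):

```python
import numpy as np

# Instantaneous availability for the two-state system of Example 5:
# A(t) = P1(t) = lam/(lam+mu) * exp(-(lam+mu)*t) + mu/(lam+mu)
def availability(t, lam, mu):
    return lam / (lam + mu) * np.exp(-(lam + mu) * t) + mu / (lam + mu)

lam, mu = 0.01, 0.001  # assumed illustrative failure and repair rates (per hour)
print(availability(0.0, lam, mu))            # ~1.0: system starts operational
print(round(availability(1e6, lam, mu), 4))  # steady state mu/(lam+mu) = 0.0909
```

A(t) starts at 1 and decays exponentially, with time constant 1/(λ + µ), toward the steady-state value µ/(λ + µ).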


EXAMPLE – 5

[Figure: instantaneous availability A(t) versus time (0–500 h) for
λ = 0.01/h and µ = 0.001/h; A(t) decays from 1.0 toward the
steady-state value µ/(λ + µ) ≈ 0.09.]
EXAMPLE – 5

• Mean Availability:

A = lim_{t→∞} A(t)

• Thus,

A = P1 = lim_{t→∞} P1(t) = µ/(λ + µ)

• As MTTF = 1/λ and MTTR = 1/µ:

A = MTTF / (MTTF + MTTR)
IRREDUCIBLE MARKOV PROCESS

• State j is reachable from state i if for some t > 0 the transition
rate λij > 0

• A Markov process is irreducible if every state is reachable from
every other state

• In an irreducible Markov process, the limits

lim_{t→∞} Pj(t) = Pj;  j = 0, 1, …, r

always exist and are independent of the initial state of the
process (system) at time t = 0
STEADY-STATE PROBABILITIES

• An irreducible Markov process converges to a condition in
which the probability that the system is in state j is

Pj = Pj(∞) = lim_{t→∞} Pj(t)

these are the Steady-State or Asymptotic Probabilities

• Note that if Pj(t) tends to a constant value when t → ∞, then

lim_{t→∞} P′j(t) = 0
ASYMPTOTIC SOLUTION

• The steady-state probabilities P0, …, Pr must satisfy the
following matrix equation:

⎡ −λ00  …   λr0 ⎤   ⎡ P0 ⎤   ⎡ 0 ⎤
⎢   ⋮    ⋱   ⋮  ⎥ · ⎢  ⋮ ⎥ = ⎢ ⋮ ⎥
⎣  λ0r  …  −λrr ⎦   ⎣ Pr ⎦   ⎣ 0 ⎦

• The asymptotic probabilities are obtained from r of the r + 1 linear
algebraic equations above, together with the fact that the sum of
the state probabilities is always equal to 1

• Note that the initial state of the system does not influence the
steady-state probabilities
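Numerically, the recipe is to replace one (redundant) row of T·P = 0 with the normalization equation ΣPj = 1 and solve the resulting linear system; a sketch with an assumed irreducible 3-state rate matrix:

```python
import numpy as np

# lam[j, k] = transition rate from state j to state k
# (hypothetical 3-state system for illustration).
lam = np.array([[0.0, 0.02, 0.01],
                [0.1, 0.0,  0.05],
                [0.2, 0.0,  0.0 ]])

# Build T as in the slides: column j holds -lam_jj on the diagonal
# and the arrival rates lam_jk off the diagonal.
T = lam.T - np.diag(lam.sum(axis=1))

A = T.copy()
A[-1, :] = 1.0                        # replace last equation by sum(P) = 1
b = np.zeros(len(lam)); b[-1] = 1.0
P = np.linalg.solve(A, b)

print(P.sum())                             # ~1.0
print(np.allclose(T @ P, 0, atol=1e-10))   # steady-state equations hold
```

Because the rows of T are linearly dependent (each column sums to zero), dropping one equation loses no information, and the normalization row makes the system uniquely solvable.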
VISIT FREQUENCY TO A STATE

• Number of visits (transitions to or from) a state j per unit time

• Unconditional probability of a transition from state j to state k in
the time interval (t, t + Δt]:

P[(X(t + Δt) = k) ∩ (X(t) = j)]
= P[X(t + Δt) = k | X(t) = j] · P[X(t) = j] = Pjk(Δt) · Pj(t)

• The frequency of departures from state j to state k is

ν_jk^dep(t) = lim_{Δt→0} P[(X(t + Δt) = k) ∩ (X(t) = j)] / Δt
            = lim_{Δt→0} Pjk(Δt) · Pj(t) / Δt = Pj(t) · λjk
VISIT FREQUENCY TO A STATE

• In the steady state:

ν_jk^dep = lim_{t→∞} Pj(t) · λjk = Pj · λjk

• The total frequency of departures from state j in the steady state is

ν_j^dep = Pj · Σ_{k≠j} λjk = λjj · Pj

• The total frequency of arrivals into state j in the steady state is

ν_j^arr = Σ_{k≠j} Pk · λkj
VISIT FREQUENCY TO A STATE

• From the steady-state equations:

λjj · Pj = Σ_{k≠j} λkj · Pk

• Thus, in the steady state, the frequency of departures from
state j is equal to the frequency of arrivals into state j

• Then, the frequency of visits to state j is

ν_j = Σ_{k≠j} λkj · Pk = λjj · Pj
MEAN DURATION OF A VISIT

• When the process arrives at a state j, the system stays in
this state for a time Tj until the process departs from that state

• The total departure rate from state j is

λjj = Σ_{k≠j} λjk

• As the transition rates are constant:
– The sojourn time in state j is exponentially distributed with rate λjj

• The mean duration of a visit (or mean sojourn time) in state j is

Tj = 1/λjj
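The visit-frequency and sojourn-time relations can be illustrated numerically with the two-state system of Example 5 (the rates are illustrative assumptions):

```python
import numpy as np

# Two-state system: state 0 = down, state 1 = up.
lam_fail, mu_repair = 0.01, 0.1  # assumed failure and repair rates (per hour)

# Steady-state probabilities of the two-state system.
P = np.array([lam_fail / (lam_fail + mu_repair),    # P0 (down)
              mu_repair / (lam_fail + mu_repair)])  # P1 (up)

lam = np.array([[0.0,      mu_repair],   # 0 -> 1: repair
                [lam_fail, 0.0      ]])  # 1 -> 0: failure

lam_jj = lam.sum(axis=1)   # total departure rate from each state
nu = lam_jj * P            # visit frequency nu_j = lam_jj * P_j
T_mean = 1.0 / lam_jj      # mean sojourn time T_j = 1 / lam_jj

# In steady state, departures from and arrivals into each state balance:
print(np.allclose(nu, lam.T @ P))  # True
print(T_mean)  # mean down duration 1/mu = 10 h, mean up duration 1/lam = 100 h
```

Note how the mean sojourn times recover the familiar quantities: 1/µ is the MTTR and 1/λ is the MTTF.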
INSTANTANEOUS AVAILABILITY

• Instantaneous Availability is the probability that the system is in
any state of a set G, representing system functioning according to
some specified criteria, at time t:

A(t) = Σ_{k∈G} Pk(t)

• Instantaneous Unavailability is the probability that the system is


in a state of a set F, representing system failure, at time t:

U(t) = 1 − A(t)


MEAN AVAILABILITY

• If the system reaches the steady-state:


– Mean availability can be obtained from the asymptotic state
probabilities

• Mean Availability:

A = Σ_{k∈G} Pk

• Mean Unavailability:

U = 1 − A = Σ_{k∈F} Pk
FAILURE FREQUENCY AND REPAIR FREQUENCY

• Failure frequency ωF:
– Expected number of visits to states in F from states in G per unit
time, calculated after a long time

• Repair frequency ωR:
– Expected number of visits to states in G from states in F per unit
time, calculated after a long time
MEAN DURATION OF A SYSTEM FAILURE

• Mean time from when the system enters a failed state (F) until it
is repaired or restored into an operating state (G)

• In the steady state, the system unavailability corresponds to the
failure frequency multiplied by the mean duration of a system
failure:

U = ωF · TF
MTBF AND MTTF OF THE SYSTEM

• Mean Time Between System Failures (MTBF):
– The mean time between consecutive transitions from a
functioning state (G) into a failed state (F)

MTBF = 1/ωF

• Mean Time To Failure (MTTF):


– It is the mean time until system failure when the system initially is
in a specified functioning state
EXAMPLE – 6

• Consider a system with two components in series:


– When one component fails, the other one is taken out of operation
until the failed component is repaired
– After a component is taken out of operation, it is not exposed to
any stress
• For each component:
– Failure rate: λi, i = 1, 2
– Repair rate: μi, i = 1, 2
• Find:
– Steady-state probabilities
– Mean availability
– System failure frequency
– MTTF
– MTBF
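A sketch of how Example 6 can be solved numerically, using the steady-state balance equations (the rate values are illustrative assumptions; states: 0 = both operational, 1 = component 1 failed, 2 = component 2 failed — there is no transition between 1 and 2 because the non-failed component is taken out of service):

```python
# Assumed illustrative rates for Example 6.
lam1, lam2 = 0.02, 0.03   # failure rates (per hour)
mu1, mu2 = 0.5, 0.4       # repair rates (per hour)

# Steady-state balance: lam_i * P0 = mu_i * P_i, plus normalization.
P0 = 1.0 / (1.0 + lam1 / mu1 + lam2 / mu2)
P1 = (lam1 / mu1) * P0
P2 = (lam2 / mu2) * P0

A = P0                          # mean availability: G = {0}, F = {1, 2}
omega_F = P0 * (lam1 + lam2)    # failure frequency (departures G -> F)
MTBF = 1.0 / omega_F
MTTF = 1.0 / (lam1 + lam2)      # mean sojourn time in state 0
T_F = (1.0 - A) / omega_F       # mean duration of a system failure

print(round(A, 4), round(MTBF, 2), round(MTTF, 2))
```

The numbers satisfy the identities developed above: MTBF = MTTF + T̄F and A = MTTF/MTBF.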
LAPLACE TRANSFORMS

• Let f(t) be a function defined on (0, ∞). The Laplace transform
f(s) of the function f(t) is defined by

f(s) = ∫₀^∞ e^−st f(t) dt

where s is a real number

• The function f(t) is called the Inverse Laplace transform of f(s):

f(t) = L⁻¹[f(s)]

LAPLACE TRANSFORMS

• Some properties:

L[f(t) + g(t)] = L[f(t)] + L[g(t)]

L[α f(t)] = α L[f(t)]

L[df(t)/dt] = s L[f(t)] − f(0)

L[∫₀^t f(u) du] = (1/s) L[f(t)]
LAPLACE TRANSFORMS

• Some Laplace transforms:

f(t), t ≥ 0     f(s) = L[f(t)]

1               1/s
t^n             n!/s^(n+1),        for n = 0, 1, 2, …
t^α             Γ(α + 1)/s^(α+1),  for α > −1
e^(αt)          1/(s − α)
e^(αt) t^n      n!/(s − α)^(n+1)
