
Packet Network Modelling by the BCMP Theorem

by Faruk Hadziomerovic, Ph.D., Sarajevo School of Science and Technology, fhadzi@yahoo.com

Abstract

The BCMP theorem is a mathematical tool that exactly models open and closed queuing networks with four types of nodes under the assumption of Poisson arrivals. It is therefore well suited to modelling store-and-forward networks, that is, packet networks like the Internet. In addition, BCMP models multiple classes of arrivals, which correspond nicely to different classes of packets (services). Internet traffic consists of four classes of packets: conversational (voice), streaming (video), interactive (data), and background (signaling). Of the four BCMP node types, the one of special interest is the Processor Sharing (PS) node. The PS node is the ideal node type for an Internet integrated-services router like PLMS because it can be tuned to fulfill the QoS of each service class. Although PS cannot be implemented directly, it can be closely approximated by the Generalized Processor Sharing (GPS) algorithm, as shown by Parekh and Gallager [1]. Our paper shows how the BCMP theorem can be used to model the network (find bottlenecks and host-to-host delay statistics) for each service class. This is of particular significance for VoIP, since voice packets have very stringent delay and jitter requirements. The modelling results should help network administrators tune PS routers so as to fulfill the QoS of each of the four classes of Internet service.

Keywords: Internet performance, BCMP, VoIP, jitter.

Introduction

In this day and age Internet speed is of utmost importance, because it is the only way to successfully deploy Voice over IP (VoIP) and other interactive services. The classical SS7 network resolved this problem with a hierarchical network architecture and connection-oriented circuits. The Internet, being a haphazard (network of networks) architecture and, in addition, store-and-forward, lacks both of these features. Therefore, the only way to successfully implement interactive services is to speed it up, and that is where network performance plays an important role. One way to optimize network performance is by modelling and simulation. This paper presents a mathematical modelling approach using the BCMP theorem. The BCMP theorem [2] exactly models store-and-forward networks under the assumptions of Poisson input traffic and negative exponential service times. BCMP is a generalization of Jackson's theorem [3], which is applicable to a single type of node and a single class of customers. Let us see how BCMP networks resemble Internet-type networks (which we will interchangeably call subnetworks). An Internet subnetwork is shown in Figure 1. Our focus is the subnetwork, i.e. the store-and-forward network of routers. In relation to the subnetwork, the LANs connected to each router are traffic generators (sources) or traffic recipients (sinks).

Therefore, we can model the store-and-forward network as a network of serving stations (routers) that receive local traffic from the outside and sink their traffic to the outside of the store-and-forward network, Figure 2.

Figure 1. Internet as a network of networks.

Figure 2. Open Multiclass BCMP Network.

The Model

Now we model the subnetwork of Figure 1 by the BCMP network of serving stations in Figure 2. The serving stations in Figure 2 are the routers in Figure 1, and links between serving stations are links between routers. Labels above the links represent the traffic (packets). In our case we have four types of packets: voice, video, data, and signaling. For BCMP these are customer classes labeled by an index letter; for instance, index r running from 1 to 4 denotes one of the four classes. Let us now consider node (router) i in Figure 2.

It receives traffic from the outside node #0, represented by λ_{0,ir}, that is, from its own LAN (Ethernet or Ring in Figure 1). This is the r-th class type. It also receives λ_{0,is} packets of the s-th class from its LAN. The router (serving station) queues (stores) those packets (customers), processes (serves) them, and forwards them to the next router (router j in Figure 2), labeled p_{ir,js}, or delivers them to its own LAN, labeled p_{ir,0}. Here p_{ir,0} means that of all class-r packets processed by node i the fraction (probability) p_{ir,0} is destined for its own LAN, while the fraction (probability) p_{ir,js} is destined for node (router) j, and so on.

The BCMP theorem allows a class to change its attribute during a transition from one serving station to another. In this example, packets leaving station (router) i as class r enter station j as class s. In the Internet this does not make sense and we will not use this feature any further; it is mentioned here only to demonstrate the generality of the BCMP theorem. Similarly, Figure 2 has links (not shown) for all other classes. The network is characterized by the input traffic (generated by the LAN hosts), represented by λ_{0,i}, and the routing probability matrix p_{i,jr}, that is, the probability that a packet of class r leaving node i goes to node j.

Now, the BCMP theorem works under the assumption that the external traffic λ_{0,i} is Poisson and that each serving station (router) belongs to one of four node types: multi-server (type 1), egalitarian processor-sharing (type 2), pre-emptive-resume LCFS (type 3), and server-per-job (type 4); see also [4]. Only the type 2 node makes sense as an Internet router, and emphatically so: type 2 is the ideal router type for the same reasons that processor sharing is the ideal multitasking scheduler. However, a type 2 node cannot be implemented, since the outgoing link cannot be shared: once the router starts sending a packet, its transmission cannot be interrupted (by a packet of possibly higher priority) until the transmission ends. Theoretically, a packet could be interrupted by so-called piggy-backing, which would lead to the LCFS node type; however, this is not practical and therefore not used. That leads to our second approximation, by which we model processor-sharing scheduling within the type 2 node (router). This approximation was detailed by Parekh and Gallager [1], leading to PGPS (Packet-by-packet Generalized Processor Sharing) scheduling within the router. Since the router cannot pre-empt a packet during transmission, the PGPS algorithm chooses which packet to send next out of the packets waiting in the queue. As far as the packet sojourn time (the time spent in a service station) is concerned, PGPS is very close to, and sometimes better than, PS. The other two approximations are Poisson arrivals and negative exponential service times (message lengths). The first assumption is reasonably correct for signaling-dominated traffic [7], while the latter needs further verification. Classes of customers (types of packets) differ by the average incoming rate λ_{0,ir} and the average service time 1/μ_{ir}.

The BCMP theorem

With all definitions and assumptions from the above, the BCMP theorem gives the network probability as the product form of station probabilities, equation (21). That means the serving stations are statistically independent of each other and can be studied in isolation. In the Appendix we give the BCMP proof only for the type 2 node, which is the only one we need. To characterize each station i we need its input traffic λ_i and the service times for each class.
For a given speed of the outgoing link, the service time depends on the packet length. It is usually expressed through the service rate μ_r = 1/T_{sr}, where T_{sr} is the average transmission time of a class-r packet, i.e. the average number of bits in a packet divided by the speed of the outgoing link. We say that every packet carries its T_{sr}. With that given, the problem reduces to finding the input traffic of each class r into each node. Looking at Figure 2 we can write:

λ_{js} = λ_{0,js} + Σ_{i,r} λ_{ir} p_{ir,js},   for r, s = 1, 2, 3, 4,   (1)
if we allow the class to change during a transition, or:

λ_{jr} = λ_{0,jr} + Σ_i λ_{ir} p_{i,jr},   for r = 1, 2, 3, 4,   (2)

if not (our case), where λ_{jr} is the average number of packets per second of class r entering node j, and λ_{0,jr} is the average number of packets per second of class r coming to node (router) j from its own LAN. Since (2) depends only on class r, the index r can be dropped, giving for each class:

λ_j = λ_{0,j} + Σ_i λ_i p_{i,j}   (3)

or, in matrix form,

[λ_1 λ_2 ... λ_N] = [λ_{01} λ_{02} ... λ_{0N}] + [λ_1 λ_2 ... λ_N] P,   P = [p_{ij}], i, j = 1, ..., N,   (4)

for each class. Taking p_{ii} = 0 for each i, we have:

λ = λ_0 (I - P)^{-1}   (5)

where I - P is the matrix

  [  1      -p_{12}  -p_{13}  ...  -p_{1N} ]
  [ -p_{21}   1      -p_{23}  ...  -p_{2N} ]
  [  ...                                   ]
  [ -p_{N1}  -p_{N2}   ...           1     ]
Calculation of the router utilization

For example, take

I - P =
  [  1.0000  -0.1000  -0.2000  -0.3000 ]
  [ -0.1000   1.0000  -0.2000  -0.3000 ]
  [ -0.1000  -0.2000   1.0000  -0.3000 ]
  [ -0.1000  -0.2000  -0.3000   1.0000 ]

then

λ = λ_0 ·
  [ 1.1364  0.3220  0.4647  0.5769 ]
  [ 0.2273  1.2311  0.4647  0.5769 ]
  [ 0.2273  0.3977  1.2981  0.5769 ]
  [ 0.2273  0.3977  0.5288  1.3462 ]     (6)
and knowing T_{sr} for this class we can find ρ_{ir} = λ_{ir} T_{sr}, the utilization of node (router) i for class r. Summing over all classes we get the aggregate utilization of node i as ρ_i = Σ_r ρ_{ir}.
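As a worked illustration, the NumPy sketch below solves the traffic equation (5) for the example matrix of (6); the external arrival rates λ_0 and the class service time T_s are assumed values chosen only for illustration.

```python
import numpy as np

# Routing matrix consistent with the example (6): P[i, j] is the probability
# that a packet leaving router i is forwarded to router j (P[i, i] = 0; the
# remaining probability corresponds to delivery to the router's own LAN).
P = np.array([[0.0, 0.1, 0.2, 0.3],
              [0.1, 0.0, 0.2, 0.3],
              [0.1, 0.2, 0.0, 0.3],
              [0.1, 0.2, 0.3, 0.0]])

lam0 = np.array([10.0, 20.0, 15.0, 5.0])   # assumed external arrivals, packets/s
Ts = 0.010                                  # assumed mean service time, seconds

# Traffic equations (3)-(5): lam = lam0 + lam @ P  =>  lam = lam0 @ inv(I - P)
lam = lam0 @ np.linalg.inv(np.eye(len(P)) - P)

rho = lam * Ts                              # per-router utilization for this class
print("lambda per router:", lam)
print("rho per router   :", rho)
```

With these assumed inputs every per-class utilization stays well below 1; in practice the computation is repeated for each of the four classes and the per-class utilizations are summed to obtain ρ_i.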
Calculation of the sojourn time within the router

Since the BCMP nodes are statistically independent, the M/M/1 serving station is a reasonable approximation for a BCMP node. Kleinrock [5] gives the formula for the M/M/1 sojourn-time CDF:

S(y) = 1 - e^{-μ_r (1 - ρ) y}   (7)

where y is the sojourn time for class r, μ_r = 1/T_{sr} is the service rate for class r, and ρ is the total router utilization. This is the negative exponential distribution with pdf:

s(y) = μ_r (1 - ρ) e^{-μ_r (1 - ρ) y}   (8)

and its Laplace transform (of the sojourn time for class r within node i):

L_i(s) = μ_r (1 - ρ_i) / (μ_r (1 - ρ_i) + s)   (9)
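A minimal sketch evaluating (7) for a single router; the service time and utilization below are assumed values.

```python
import math

def sojourn_cdf(y_ms, Ts_ms, rho):
    """Per-router sojourn-time CDF (7): P[sojourn time of a class-r packet <= y]."""
    mu = 1.0 / Ts_ms                      # class-r service rate, packets per ms
    return 1.0 - math.exp(-mu * (1.0 - rho) * y_ms)

# Assumed example: 10 ms mean service time, 80 % total router utilization.
print(sojourn_cdf(100.0, Ts_ms=10.0, rho=0.8))   # ~0.86
```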
Calculation of the total delay

The total packet delay, τ, from host A to host B is the sum of the sojourn times of the routers through which the packet passed. The Laplace transform of the distribution of a sum of stochastic variables is the product of the Laplace transforms of the sojourn-time distributions, giving:

L(s) = Π_i [ μ_{ri} (1 - ρ_i) / (μ_{ri} (1 - ρ_i) + s) ]   (10)

Formula (10) can be split into the sum:

L(s) = Σ_i c_i / (μ_{ri} (1 - ρ_i) + s)   (11)

which is a sum of weighted negative exponential pdfs. Therefore, the pdf of the total delay is:

s(y) = Σ_i c_i e^{-μ_{ri} (1 - ρ_i) y}   (12)

and the corresponding CDF:

S(y) = Σ_i [ c_i / (μ_{ri} (1 - ρ_i)) ] (1 - e^{-μ_{ri} (1 - ρ_i) y})   (13)

with the total average delay time:

s̄ = Σ_i 1 / (μ_{ri} (1 - ρ_i))   (14)

If the number of traversed nodes i is random, there is also a closed-form formula for this case; see for instance Trivedi [6].
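The weights c_i in (11)-(13) follow from a partial-fraction expansion of (10), which requires the per-hop rates μ_{ri}(1 - ρ_i) to be distinct. A minimal NumPy sketch, with an assumed path of four routers chosen only for illustration:

```python
import numpy as np

def partial_fraction_weights(rates):
    """Weights c_i of (11) for L(s) = prod_i a_i / (a_i + s); the a_i must be distinct."""
    a = np.asarray(rates, dtype=float)
    return np.array([np.prod(a) / np.prod(np.delete(a, i) - a[i])
                     for i in range(len(a))])

def total_delay_cdf(y, rates):
    """End-to-end delay CDF (13) evaluated at time y (same unit as 1/rates)."""
    a = np.asarray(rates, dtype=float)
    c = partial_fraction_weights(a)
    return float(np.sum((c / a) * (1.0 - np.exp(-a * y))))

# Assumed path of four routers; per-hop rates a_i = mu_ri * (1 - rho_i) in 1/ms.
rates = [0.05, 0.08, 0.02, 0.04]
print("mean delay (ms):", sum(1.0 / a for a in rates))    # eq. (14)
print("P[delay <= 200 ms]:", total_delay_cdf(200.0, rates))
```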
Calculation of a jitter distribution

Ideally, given the source host A and the destination host B, every packet should experience the same total delay. However, this is not the case: first, because of the random nature of the sojourn times within the traversed nodes, and second, because packets do not necessarily follow the same path. The jitter, g, is defined as the displacement from the average value: g = |τ - s̄|, where τ is the total delay and s̄ its average. Therefore:

P[g ≤ y] = P[s̄ - y < τ ≤ s̄ + y] = P[τ ≤ s̄ + y] - P[τ ≤ s̄ - y] = S(s̄ + y) - S(s̄ - y)   (15)
For example, let us calculate the average delay and the jitter distribution for the voice packet class sent from host A to host B in Figure 1. The packet passes through three routers: 1, j, N. The total router loads could be obtained from (6); however, for the sake of space we will simply assume ρ_1 = 90%, ρ_j = 60% and ρ_N = 80%. Assume also that the average voice-packet service time is T_{sr} = 1/μ_r = 10 msec for all routers (all routers have the same processors and the same outgoing links). Then (14) gives the average total delay s̄ = 10/0.1 + 10/0.4 + 10/0.2 = 175 ms, (12) gives the delay pdf s(y) = 0.0267 e^{-0.01y} + 0.0133 e^{-0.04y} - 0.04 e^{-0.02y}, and (13) gives the CDF S(y) = 1 - 2.67 e^{-0.01y} - 0.33 e^{-0.04y} + 2 e^{-0.02y}, Figure 3. The jitter distribution (15) is plotted in Figure 4. For instance, the probability that the jitter is within 75 msec of the average delay is 0.5, meaning that if only packets with jitter less than 75 msec are acceptable, 50% of the arriving packets will be rejected.
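A short numerical check of this example (per-hop rates a_i = μ(1 - ρ_i) of 0.01, 0.04 and 0.02 per msec):

```python
import numpy as np

# Worked example: three routers, Ts = 10 ms, rho = 0.9, 0.6, 0.8, so that
# a_i = mu * (1 - rho_i) = 0.01, 0.04, 0.02 packets per ms.
a = np.array([0.01, 0.04, 0.02])
c = np.array([np.prod(a) / np.prod(np.delete(a, i) - a[i]) for i in range(len(a))])

S = lambda y: float(np.sum((c / a) * (1.0 - np.exp(-a * y))))   # delay CDF (13)
s_bar = float(np.sum(1.0 / a))                                  # mean delay (14)

print("mean delay (ms):", s_bar)                                # 175.0
print("P[jitter <= 75 ms]:", S(s_bar + 75) - S(s_bar - 75))     # ~0.51, eq. (15)
```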
Figure 3. Total delay distribution: pdf (12) and CDF (13).

Figure 4. Jitter distribution: pdf and CDF (15).

Conclusion

In this paper we have shown how to design a packet store-and-forward network like the Internet using mathematical modelling, the BCMP theorem in particular. In our model the Internet routers are equivalent to BCMP PS nodes; this is justified by the PGPS scheduling algorithm. We have further extended the methodology to calculate the distribution of the packet delay and the packet jitter between any pair of hosts within the network, under the assumption that all packets of a given class (like VoIP) use the same route. The methodology can be extended to take path randomness between two hosts into account, which is a topic of further research. A numerical example illustrates the use of our methodology.

References

1. Abhay K. Parekh and Robert G. Gallager: A Generalized Processor Sharing Approach to Flow Control in Integrated Services Networks: The Single-Node Case, IEEE/ACM Transactions on Networking, Vol. 1, No. 3, June 1993.
2. Baskett F., Chandy K. M., Muntz R. R., Palacios F. G.: Open, Closed, and Mixed Networks of Queues with Different Classes of Customers, JACM, 1975.
3. Jackson J. R.: Networks of Waiting Lines, Operations Research, 1957.
4. E. Gelenbe, I. Mitrani: Analysis and Synthesis of Computer Systems, Academic Press, 1980.
5. Leonard Kleinrock: Queueing Systems, Vol. 1, John Wiley, 1975.
6. Kishor Trivedi: Probability and Statistics with Reliability, Queuing, and Computer Science Applications, 2nd Ed., John Wiley, 2002.
7. F. Hadziomerovic: Messaging System Characterization, Technical Report 94-0018, Bell Northern Research, February 1995.

Appendix
MULTICLASS TYPES OF NODES

Classes are distinguished by routing probabilities, defined for each class r as p_{i,jr}, and by the service requirement μ_{jr} at station j. Customers are allowed to change class when going from one station to another. There are four station types which preserve the Markovian property of the network: multi-server, egalitarian processor-sharing, pre-emptive-resume LCFS, and server-per-job. We give the proof for the egalitarian processor-sharing node type.
EGALITARIAN PROCESSOR-SHARING (TYPE 2)

This type of station can be seen as a single server whose service time per customer depends on the number of customers in the station. Every customer obtains an equal, infinitesimally small slice of service in a round-robin fashion, hence the name processor sharing. Figure 5 shows the time slices allocated to each customer. The processor-sharing discipline is equivalent to a server-per-customer discipline in which the serving rate of each server is slowed down in proportion to the number of customers present, as shown in Figure 5, right.
Figure 5. Allocation of processor slices to jobs (left), and equivalent servers (right).

The network will have the Markov property if the departure process is Poisson. For this type of station we have:

P[departure in Δt] = P[departure in Δt | busy] P[busy] + P[departure in Δt | idle] P[idle]
                   = n (μ/n) Δt ρ + 0 = μ Δt (λ/μ) = λ Δt,

which is the proof for single-class customers. For multiple classes, the processor is shared as in Figure 6, where k_r is the number of customers of class r present at the station.

Figure 6. Allocation of the server among k_r customers of class r, for two classes.

The total number of customers in a station with R classes is:

k = Σ_{r=1}^{R} k_r

The state of the station is described by a vector (k_1, k_2, ..., k_R), where k_r is the number of customers of class r present in the station. Because of the Markovian property, we can apply the global balance equation to a state (k_1, k_2, ..., k_R):

p(k_1, ..., k_R) [ Σ_{r=1}^{R} λ_r + Σ_{r=1}^{R} (k_r/k) μ_r ]
  = Σ_{r=1}^{R} p(k_1, ..., k_r - 1, ..., k_R) I_{k_r>0} λ_r
  + Σ_{r=1}^{R} p(k_1, ..., k_r + 1, ..., k_R) ((k_r + 1)/(k + 1)) μ_r     (16)

We can now split (16) into local balance equations:

p(k_1, ..., k_r, ..., k_R) λ_r = p(k_1, ..., k_r + 1, ..., k_R) ((k_r + 1)/(k + 1)) μ_r     (17)

and:

p(k_1, ..., k_r, ..., k_R) (k_r/k) μ_r = p(k_1, ..., k_r - 1, ..., k_R) λ_r,   for k_r > 0,     (18)

which are in fact the same equations, giving the result:

p(k_1, ..., k_r, ..., k_R) = p_0 · k! · Π_{r=1}^{R} ρ_r^{k_r} / k_r!     (19)

where ρ_r = λ_r / μ_r and ρ = Σ_{r=1}^{R} ρ_r.

Notice that the local balance equations (17) and (18) compensate the arrival with the departure of a customer belonging to the same class. To find p_0 we sum up all p(k_1, k_2, ..., k_R) and recall the multinomial formula:

(ρ_1 + ρ_2 + ... + ρ_R)^k = Σ_{k_1+k_2+...+k_R = k} [ k! / (k_1! k_2! ... k_R!) ] ρ_1^{k_1} ρ_2^{k_2} ... ρ_R^{k_R} = ρ^k     (20)

so that summing over all states gives p_0 Σ_k ρ^k = p_0 / (1 - ρ) = 1, i.e. p_0 = 1 - ρ. It finally follows:

p(k_1, ..., k_r, ..., k_R) = (1 - ρ) · k! · Π_{r=1}^{R} ρ_r^{k_r} / k_r!     (21)

And since the serving stations are independent, the network solution is the product of (21) over all service stations within the network.
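A minimal sketch evaluating the product form (21) for a single PS station with two assumed customer classes, checking that the state probabilities sum to one:

```python
import math
from itertools import product

rho = [0.3, 0.4]   # assumed per-class utilizations rho_r = lambda_r / mu_r; sum < 1

def p_state(k):
    """Probability (21) of state k = (k_1, ..., k_R) at a processor-sharing station."""
    k_total = sum(k)
    prod_term = math.prod(r ** kr / math.factorial(kr) for r, kr in zip(rho, k))
    return (1.0 - sum(rho)) * math.factorial(k_total) * prod_term

# Sanity check: summing over a large truncated state space gives ~1.
total = sum(p_state(k) for k in product(range(60), repeat=len(rho)))
print(total)   # close to 1.0 (up to the truncation error)
```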
