
Measure and Model P2P Streaming System by Buffer Bitmap

Yishuai Chen, Changjia Chen, Chunxi Li


School of Electrical and Information Engineering, Beijing Jiaotong University
chenyishuai@gmail.com, changjiachen@sina.com, cxl@telecom.njtu.edu.cn

Abstract

The correct evaluation of P2P streaming system models requires validation in real-world systems. However, systematic and integrated measurement methods for real-world P2P streaming systems are lacking. In this paper, we propose a P2P streaming network measurement method based on a peer's buffer occupancy probability. Our method builds on the fixed-duration buffer property of commercial P2P streaming systems. We prove that the measured buffer occupancy probability reflects the chunk propagation process in the P2P network. We then propose a P2P streaming chunk propagation model and verify it in a commercial P2P streaming network using our measurement method. Our measurement method is useful for measuring and analyzing miscellaneous P2P streaming systems, and our model and parameter estimation help existing P2P simulators choose correct parameters and help researchers understand the real meaning behind those parameters.

1. Introduction

Recently, P2P live streaming systems have received a lot of attention. On the industrial side, commercial P2P live streaming systems are prevailing. For example, PPLive [1], a popular P2P online video broadcasting and advertising network, has reached a global installed base of 75 million, 20 million monthly active users and more than 600 channels [2]. On the academic side, more and more research has been reported in this area. For example, in August 2007 a special P2P streaming and IP-TV workshop was organized at SIGCOMM 2007 to discuss P2P streaming research topics, and in December 2007 a special IEEE JSAC issue was organized to report the latest advances in P2P streaming systems.

(This work was supported by China 973 2007CB307101, China NSFC 60672069 and China NSFC 60772043.)

Compared with the huge advancement in the development and deployment of miscellaneous P2P streaming systems, there is still much room for the research and development of appropriate theoretical models. Most existing work in this area starts with assumptions and ends with performance evaluation by simulation; appropriate measurement methods to validate both the assumptions and the results of such models in real-world P2P streaming systems are still lacking. This problem has hampered the advancement of P2P streaming research, because the correct evaluation of models and the right application of their results require validation in real-world systems. Simple observation of phenomena in a real-world network does not suffice, because a model is always abstracted away from the concrete details of implementation and can only yield general conclusions. More original angles of view and more in-depth measurement and data processing methods are needed.

In this paper, based on our measurement finding of the synchronization, on chunk offset, between a peer's cache rejection and the media server's chunk upload in a commercial P2P streaming system [3], we first explore the dynamics of the buffer length in a commercial P2P streaming system. We find that the buffer length keeps stable when the playback rate does not change, and that when the playback rate changes, the buffer length smoothly transits to the new value. This stability makes the continuous measurement and observation of a peer's buffer bitmap series meaningful.

We then design a P2P streaming system measurement method based on the peer's buffer occupancy probability distribution. In detail, we first continuously track a peer's buffer bitmap and obtain a matrix-like buffer bitmap array, and then obtain the buffer occupancy probability distribution from this matrix. We find that all peers in the same P2P streaming network have similar buffer occupancy probability distributions. Therefore, we infer that the buffer occupancy probability distribution of a peer in fact reflects some general characteristics of the P2P network. We then establish the mapping between a chunk's position in the buffer and the time elapsed since its upload at the media server. With this relationship, we establish the mapping between the buffer occupancy probability distribution and a chunk's propagation process in the P2P network. We also verify this relationship in our PPLive measurement trace.

We then model the chunk propagation process in a P2P streaming network by splitting it into two phases, and verify the obtained model against the measured buffer occupancy probability distribution in PPLive. The result shows the correctness of the model and the potential of our measurement method.

This paper is organized as follows. In Section 2 we introduce related work. In Section 3, we analyze the dynamics of the buffer length with a mathematical model and verify the result in our trace. In Section 4, we present our measurement of the buffer occupancy probability and prove its mapping relationship to the chunk propagation process in the network. Section 5 presents the chunk propagation model and its verification on PPLive traces with our measurement method. Section 6 concludes the paper.

2. Related work

For P2P streaming system modeling, [4] presented a mathematical model based on the sliding peer buffer behavior, i.e. each peer maintains a buffer which acts as a sliding window into the stream of chunks distributed by the server. It also introduced the notion of buffer occupancy probability, established the model on this notion, and used the model to discuss different data-driven downloading strategies. Our measurement finding of the stable buffer behavior validates their sliding-window assumption in a real-world P2P streaming system. [5] developed a stochastic fluid model and investigated the effect of peers' upload capacity and joining/departing rate on P2P streaming systems. [6] introduced the notion of production for P2P streaming systems and investigated the system production achievable under different protocols. [7] presented an in-depth quantitative analysis of a CDN (content distribution network)-P2P mixed streaming network; it split the chunk propagation process in the P2P network into two phases and obtained the mathematical solution of the system handoff time. Inspired by this two-phase modeling method, we establish our P2P chunk propagation model and verify it in a real-world P2P streaming network. [8] derived an equation describing the capacity growth of a P2P streaming system and obtained the server-peer transition time under a fixed request rate.

For P2P streaming system measurement, [9] measured the traffic characteristics of controlled PPLive peers in October 2005 and December 2005 and found a high rate of change of parents. [10] inferred the network-wide quality of PPLive by actively crawling peers' buffer maps. [11] investigated the overlay characteristics of PPLive by crawling peers' partner lists. [12] measured PPLive's user behavior, playback quality and connection characteristics.

3. Dynamic of PPLive buffer length

As a typical P2P live streaming system, the PPLive network consists of a media server and peers. After the media server uploads a media chunk into the P2P network, peers interact with each other to propagate the chunk to the whole network. The media server and the peers frequently send their buffer status information to their partner peers so that the partners can request chunks accordingly. By continuously tracking the buffer status information from a peer or the media server, we can track the dynamics of its buffer, e.g. the sliding of the buffer head, the media server's chunk upload rate, etc.

We model the P2P network and the peer's local buffer together as a virtual buffer, as shown in Figure 1. Its end is at the media server and its head is at the head of the peer's local buffer. When a chunk is uploaded into the network by the media server, it is injected into the buffer at the end. It then propagates through the network until it reaches the peer. After reaching the peer, it slides towards the buffer head, finally reaches the buffer head, and is rejected from the buffer. Therefore, this virtual buffer includes both the P2P network and the peer's local buffer.

Figure 1. Virtual buffer consisting of the P2P network and the peer's local buffer (input at the media server end, output at the buffer head)

We analyze the dynamics of the length of this virtual buffer as follows.

Let s(t) be the media server's chunk upload service curve: when the server uploads the chunk with ID k to the network at time t, we say s(t) = k. Let f(t) be the peer's chunk rejection curve: when the peer rejects the chunk with ID k from its buffer at time t, we say f(t) = k. f(t) is usually called the offset of the buffer.

Let r(t) = ds(t)/dt be the chunk upload rate of the media server. Because the media server usually uploads chunks at their playback rate, we also call r(t) the chunks' playback rate.
Similarly, we use g(t) = df(t)/dt to denote the peer's buffer rejection rate.

Let q(t) be the buffer length at time t, and let CH(t) and CT(t) be the IDs of the chunks at the buffer head and the buffer end at time t, respectively. The buffer duration τ(t) is defined as the time needed to empty the buffer if no new chunk arrives after time t. In other words, the buffer duration at time t is the time interval τ(t) such that

CH(t + \tau(t)) = CT(t)

A chunk-rate-based (CRB) buffer rejection strategy [3] rejects a chunk from the buffer at the rate at which it was injected into the buffer. In other words, every chunk leaves the buffer at the rate at which it came.

We first prove the following theorem.

Theorem 1. A CRB buffer is a fixed-duration buffer, i.e. τ(t) is a constant.

Proof: Obviously, we have

dq(t)/dt = r(t) - g(t)

With r(t) = g(t + τ(t)) for a CRB buffer, we have

dq(t)/dt = g(t + \tau(t)) - g(t) \qquad (1)

Since the time interval for the media server to upload the chunks from CH(t) to CT(t) equals the time interval for the peer to drain the chunks from CH(t) to CT(t) from its buffer under a CRB strategy, we have

q(t) = \int_t^{t+\tau} g(\nu)\, d\nu

Taking the derivative, we have

dq(t)/dt = (1 + d\tau/dt)\, g(t + \tau) - g(t)

Substituting this into equation (1) yields dτ(t)/dt = 0, which completes the proof. ■

We validated this fixed-duration buffer behavior in our measured PPLive trace; the detailed results were reported in [3].

We now study the dynamics of the buffer length when r(t) changes from a to a + b at t = t0, i.e. r(t) = a + b u(t − t0), where u(t) is the unit step function.

In the time domain, with r(t) = a + b u(t − t0), we have

dq_\tau(t)/dt = b\,u(t - t_0) - b\,u(t - t_0 - \tau)

The buffer length can therefore be written as

q_\tau(t) =
\begin{cases}
q_\tau(t_0), & t = t_0^- \\
q_\tau(t_0) + b(t - t_0), & t_0^+ \le t \le (t_0 + \tau)^- \\
q_\tau(t_0) + b\tau, & t \ge (t_0 + \tau)^+
\end{cases}

Therefore, we obtain the following characteristics of the buffer length dynamics in the time domain.

1) All peers start changing their buffer length at the same time when r(t) changes, i.e. at t0.

2) The time required for the change to finish equals the buffer duration τ. Because different peers have different buffer durations, they finish the change of buffer length at different times. The longer the buffer length before the change, the later the change finishes, and hence the longer the change interval.

3) When t = t0 + τ, all peers have finished changing their buffer length. This finishing time is exactly the changing time of the buffer rejection rate.

We also study the dynamics of the buffer length in the offset domain. The "offset" here is the chunk ID of the buffer head, i.e. f(t).

Assume the media server starts uploading chunks at t = 0 and the ID of the first uploaded chunk is s0. When r(t) changes from a to a + b at t = t0, we have

s(t) = s_0 + at + b(t - t_0)u(t - t_0)

Define the chunk with ID s0 + at0 as the jumping chunk of the service curve, and denote s* = s0 + at0.

Because f(t) = s(t − τ) = s0 + a(t − τ) + b(t − t0 − τ)u(t − t0 − τ), we have

q_\tau(f) = s(t) - f(t) = [a + b\,u(t - t_0 - \tau)]\tau + b(t - t_0)[u(t - t_0) - u(t - t_0 - \tau)]

i.e. the change of the buffer length starts at t0 and ends at t = t0 + τ.

Denote f(t0) as f0; then f0 = s0 + a(t0 − τ). Denote f(t0 + τ) as f1; then f1 = s0 + at0 = s*.

Between f0 and f1 we have f = s0 + a(t − τ) and hence t = (f − s0)/a + τ. With q_τ(f) = aτ + b(t − t0), substituting t gives q_τ(f) = aτ + b[f − (s0 + a(t0 − τ))]/a = aτ + bτ(f − f0)/(f1 − f0), since f1 − f0 = aτ. In summary, we have

q_\tau(f) =
\begin{cases}
a\tau, & f = f_0^- \\
a\tau + b\tau\,(f - f_0)/(f_1 - f_0), & f_0^+ \le f \le f_1^- \\
(a + b)\tau, & f \ge f_1^+
\end{cases}

And f0 = s0 + a(t0 − τ) = s0 + at0 − aτ = s* − aτ = s* − q_τ(f0−).

Therefore, we obtain the following characteristics of the buffer length dynamics on the buffer offset.

1) All peers' buffer length changes end at the same buffer offset, i.e. at s*.

2) When a peer's buffer length starts to change, its buffer offset is smaller than the ID of the jumping chunk at which the playback rate changes.

3) The longer the buffer length before the change, the smaller the buffer offset when the change starts, and the larger the offset range of the change.
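The step response derived above is easy to check numerically. The following is a minimal simulation sketch, not the authors' measurement code: it assumes an ideal CRB buffer with f(t) = s(t − τ) and uses illustrative parameter values of our own choosing.

```python
# Illustrative check of the CRB buffer step response derived above.
# Assumptions: ideal CRB rejection f(t) = s(t - tau); made-up parameters.
import numpy as np

a, b = 8.0, 2.0          # upload rate before the step, and step size (chunks/s)
t0, tau = 100.0, 30.0    # time of the rate change and (fixed) buffer duration (s)

def s(t):
    """Service curve s(t) = s0 + a*t + b*(t - t0)*u(t - t0), with s0 = 0."""
    return a * t + b * np.maximum(t - t0, 0.0)

t = np.arange(tau, 200.0, 0.1)   # observe after the buffer has filled once
q = s(t) - s(t - tau)            # q(t) = s(t) - f(t) under CRB rejection

ramp = (t >= t0) & (t <= t0 + tau)
assert np.allclose(q[t < t0], a * tau)                     # flat at a*tau
assert np.allclose(q[ramp], a * tau + b * (t[ramp] - t0))  # linear transition
assert np.allclose(q[t > t0 + tau], (a + b) * tau)         # flat at (a+b)*tau
print(f"buffer length ramps from {a*tau:.0f} to {(a+b)*tau:.0f} chunks over {tau:.0f} s")
```

Plotting the same q values against f(t) = s(t − τ) instead of t reproduces the offset-domain behavior.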

The above characteristics of the buffer length dynamics can be verified in the measured PPLive trace. The verification results are shown in Figure 2 and Figure 3. Figure 2 shows the dynamics of three peers' buffer lengths and their cache rejection rates in the time domain; Figure 3 shows the dynamics in the buffer offset domain. The media server's chunk upload rate curve is also shown for reference. The characteristics derived above can easily be identified in both figures.

Figure 2. Peer buffer length, playback rate and offset rate changes in time domain

Figure 3. Peer buffer length, playback rate and offset rate changes in offset domain

4. Measure P2P streaming network by buffer bitmap

From the analysis in Section 3 we clearly observed the stability of a peer's buffer size: it keeps stable when the playback rate does not change, and smoothly transits to the new value when the playback rate jumps. Such stability makes the continuous observation and measurement of a peer's buffer bitmap meaningful. In a real-world P2P streaming system, r(t) does not change frequently. For example, the measurement result in Figure 2 shows that the duration of r(t) = 8 is about 20 minutes and the duration of r(t) = 10 is even longer. Therefore, the period during which the buffer length stays stable is long enough to capture enough bitmap samples to extract statistical characteristics. The details of our measurement method are as follows.

While the system works in the stable status, i.e. when r(t) does not change, we periodically request peers' buffer bitmaps. Because the buffer size is fixed during this time, the obtained bitmaps have the same length. We then align them at the buffer head and order them by sampling time. We finally obtain a bitmap matrix, as shown in Figure 4.

Figure 4. Peer buffer bitmap matrix (horizontal axis: time; vertical axis: chunk position)

In the matrix shown in Figure 4, each column represents a buffer bitmap sample and each box in a column represents a chunk in the buffer. The color of a box indicates whether the corresponding chunk has been received by the peer at the sampling time: white if the chunk has arrived, gray otherwise.

For further analysis, we introduce the following definitions:

Chunk position m: the distance of a chunk box from the top of the matrix, i.e. the relative position of a chunk to the buffer end.

Random indicator function of chunk occupancy at position m at time t, N(m, t): equal to 1 if the chunk at position m has been received at time t, and 0 otherwise.

Occupancy probability Pr(N(m, t) = 1): the probability that the chunk at position m has been obtained by the peer at time t.
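As a concrete illustration of this bookkeeping, the sketch below assembles such a matrix from periodically sampled bitmaps. The list-of-lists input layout is an assumption made for illustration; the actual PPLive wire format encodes bitmaps differently.

```python
# Assemble a buffer bitmap matrix as in Figure 4.
# Assumed (hypothetical) input layout: one 0/1 sequence per sampling instant,
# index 0 = buffer head, last index = buffer end, all of equal length.
import numpy as np

def build_bitmap_matrix(samples):
    """Return a matrix whose columns are bitmap samples ordered by sampling
    time and whose rows are chunk positions m (row 0 = buffer end)."""
    mat = np.asarray(samples, dtype=np.uint8)  # shape (num_samples, buffer_len)
    assert mat.ndim == 2, "bitmaps must have equal length (stable r(t))"
    return mat[:, ::-1].T                      # row index = distance to buffer end

# Toy usage: four samples of a six-chunk buffer.
M = build_bitmap_matrix([
    [1, 1, 1, 0, 0, 0],
    [1, 1, 1, 1, 0, 0],
    [1, 1, 1, 0, 1, 0],
    [1, 1, 1, 1, 1, 0],
])
print(M.shape)  # (6 chunk positions, 4 samples)
```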
We assume that Pr(N(m, t) = 1) is independent of t when the system works in the stable status. This assumption is reasonable because N(m, t) is affected by two factors, both of which stay stable when the system works in the stable status. First, it is affected by the sliding of chunks in the buffer: as the peer rejects chunks from its buffer head, chunks slide towards the buffer head. Because r(t) does not change and the buffer sliding speed equals the playback rate, the sliding speed does not change and the sliding is stable. Second, it is affected by the arrival of new chunks. Because peers usually have a stable download rate when they work in the stable status, the arrival of new chunks is also stable. Therefore, we can assume Pr(N(m, t) = 1) is independent of t and denote it as Pm.

We use the following method to obtain a peer's Pm curve from the measured bitmap matrix. For each row m of the matrix, we count the columns with N(m) = 1 and denote their number as N1; we also count the columns with N(m) = 0 and denote their number as N0. We then get

P_m = N_1 / (N_0 + N_1)
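In code, this counting rule is a one-liner over the bitmap matrix (a sketch under the same assumed matrix layout as above):

```python
# Row-wise occupancy probability: Pm = N1 / (N0 + N1) for each position m.
import numpy as np

def occupancy_probability(mat):
    """mat: 0/1 bitmap matrix (rows = chunk positions, columns = samples).
    Because entries are 0/1, the row mean equals N1 / (N0 + N1)."""
    return np.asarray(mat, dtype=float).mean(axis=1)

# Toy input; with real data, feed the matrix from build_bitmap_matrix above.
Pm = occupancy_probability([[0, 0, 0, 1], [0, 1, 1, 1], [1, 1, 1, 1]])
print(Pm)  # [0.25 0.75 1.  ] -- occupancy grows towards the buffer head
```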
After extensive observation of measured Pm curves, we found a common phenomenon: peers in the same P2P network have similar Pm curves, but different P2P networks have different Pm curves. As an example, Figure 5 shows the Pm curves of peers in two different P2P networks. Peers in the same network have similar Pm curves, while peers in different networks have different Pm curves. We infer that Pm reflects characteristics of a particular P2P network and can be used to investigate it.

Figure 5. Pr(N(m)=1) curves in PPLive

But which network characteristic does Pm reflect? We re-inspect the virtual buffer abstraction.

Assume all peers in the same P2P network have a similar Pm distribution, so that we can extend the Pm distribution to the whole network. Denote by Sm(t) the number of peers with N(m, t) = 1 at time t, and by M0 the total number of peers in the P2P network. We have Pm = Sm(t)/M0, and because Pm is independent of t, Pm = Sm/M0.

Assume a chunk is uploaded by the media server at t = 0 and arrives at a peer's buffer position m at time t. Denote the buffer duration as τ and the playback rate as r. From the analysis in Section 3, q(t) = rτ. Therefore, the chunk's sliding rate in the buffer is r and it arrives at the buffer head at t = τ. During its sliding, it still has to pass r(τ − t) chunk positions, i.e.

q(t) = r\tau = m + r(\tau - t)

and hence

m = rt

Denote by N(t) the number of peers that have obtained this chunk at time t. Because m = rt at time t, we have N(t) = S_{rt}. Therefore,

P_m = N(m/r) / M_0 \qquad (2)

Equation (2) discloses the mapping relationship between the Pm distribution and the chunk propagation process N(t) in the P2P streaming network. We verified this mapping relationship in our PPLive trace with the following method. To obtain N(t), for every chunk we record its server upload time ts and its first appearance time tr in each peer's buffer bitmap, taking tr as the chunk's arrival time at that peer. The chunk's network transmission time to a peer is then Ts = tr − ts. With the measured Ts for all peers, we can calculate the chunk's N(t)/M0 distribution.

We find that chunks in the same P2P network have similar N(t)/M0 distributions. We also find that the measured N(t)/M0 curve is very similar to the measured Pm curves of peers under the mapping of equation (2). Figure 6 shows an example comparison between a chunk's N(t)/M0 curve and a peer's Pm curve in one of our experiments.

We conclude that Pm reflects the characteristics of the chunk propagation process in the P2P streaming network. We therefore model this process and verify the model with our measured Pm curves.
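The verification computation just described can be sketched as follows, assuming per-peer (ts, tr) pairs have already been extracted from the trace (the input layout is ours, for illustration):

```python
# Empirical chunk coverage N(t)/M0 from measured per-peer arrival times,
# plus the position-to-time mapping of equation (2). Input layout assumed.
import numpy as np

def coverage_curve(ts, tr_list, m0, t_grid):
    """Fraction of the m0 peers whose transmission time Ts = tr - ts
    is at most t, evaluated for each t in t_grid."""
    Ts = np.sort(np.asarray(tr_list, dtype=float) - ts)
    return np.searchsorted(Ts, t_grid, side="right") / m0

r = 10.0                 # playback rate (chunks/s)
m = np.arange(0, 300)    # chunk positions
t_grid = m / r           # equation (2): position m corresponds to t = m/r
ratio = coverage_curve(ts=0.0, tr_list=[1.2, 2.5, 3.1, 7.9, 14.0],
                       m0=5, t_grid=t_grid)
# ratio[i] is directly comparable to the measured Pm at position m[i].
```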

5. Modeling

We model the chunk propagation process in the P2P streaming network as follows.

In a P2P streaming system, peers periodically query the buffer status of their partner peers and the media server. When a new chunk is found, a peer requests the chunk and starts a timer to wait for it.
If the chunk is received, the timer is cleared. If the chunk request fails, the timer times out and the peer requests again. A request failure may be caused by the loss of the request or the response in the network, or by the overload or capacity constraints of the partner peer or the media server.

Figure 6. Comparison between N(t)/M0 and Pm

We make the following two assumptions for the model. The first assumption is that the media server acts just like a common peer during the chunk propagation process, i.e. its upload capacity for a chunk is like that of a common peer and peers do not prefer to request chunks from the media server. This assumption is reasonable because decreasing the load on the media server brings better scalability, and it conforms to the typical configuration of commercial P2P networks, e.g. PPLive [2].

The second assumption is that the network size M0 stays constant during the propagation of a chunk. This is reasonable because it has been found that only about one thousandth of the peers depart or join per second [12], and, as Figure 6 shows, the propagation of a chunk to the whole network usually finishes within 20-30 s or less. That is to say, during this time only about 4%-6% or fewer of the peers change.

Denote the peers' average upload speed as b chunks/s and the peers' chunk request rate as λ requests/s. Denote the number of peers that have obtained the chunk at time t as N(t), and the network's total upload capacity for this chunk at time t as C(t). We have

C(t) = bN(t)

The propagation of a chunk in the P2P network is split into two phases.

In phase I, most peers do not yet have the chunk and need to request it, but the network's total upload capacity for the chunk cannot satisfy all these requests. The chunk propagation in this phase is therefore limited by C(t), and some requests are rejected. We have

dN(t)/dt = C(t) = bN(t)

Assume the chunk is uploaded by the media server at t = 0. Regarding the media server as the first node in the network that has the chunk, we have N(0) = 1 and therefore

N(t) = e^{bt}

As the chunk propagates through the network, more and more peers obtain it and start to serve it, so the chunk request rate decreases while C(t) increases. Finally, C(t) becomes able to satisfy all peers' requests; denote this transition time as t0.

Let M(t) denote the number of peers in the network that do not yet have the chunk at time t. With λ as their average chunk request rate, the total chunk request rate in the network is λM(t). When t = t0, we have

C(t_0) = bN(t_0) = \lambda M(t_0)

With N(t) + M(t) = M0, we have

N(t_0) = \frac{\lambda M_0}{b + \lambda}, \qquad M(t_0) = \frac{b M_0}{b + \lambda} \qquad (3)

Because N(t_0) = e^{b t_0}, we have

M_0 = \frac{b + \lambda}{\lambda}\, e^{b t_0} \qquad (4)

and

t_0 = \frac{1}{b} \ln\!\left(\frac{\lambda M_0}{b + \lambda}\right)

After the propagation of a chunk enters phase II, the propagation is limited only by the peers' chunk request rate λ, which is determined by the peers' chunk request strategy and their retry rate when chunk requests fail. Because no request is rejected now, we have

\frac{dM(t)}{dt} = -\lambda M(t)

With equation (3), we have

M(t) = M(t_0)\, e^{-\lambda (t - t_0)} = \frac{b M_0}{b + \lambda}\, e^{-\lambda (t - t_0)}

and

N(t) = M_0 - M(t) = M_0 - \frac{b M_0}{b + \lambda}\, e^{-\lambda (t - t_0)}

In summary, we have
N(t) =
\begin{cases}
e^{bt}, & t \le t_0 \\
M_0 - \dfrac{b M_0}{b + \lambda}\, e^{-\lambda (t - t_0)}, & t > t_0
\end{cases}

Dividing by M0, we have

\frac{N(t)}{M_0} =
\begin{cases}
\dfrac{1}{M_0}\, e^{bt}, & t \le t_0 \\
1 - \dfrac{b}{b + \lambda}\, e^{-\lambda (t - t_0)}, & t > t_0
\end{cases}

Substituting equation (4), we have

\frac{N(t)}{M_0} =
\begin{cases}
\dfrac{\lambda}{b + \lambda}\, e^{b (t - t_0)}, & t \le t_0 \\
1 - \dfrac{b}{b + \lambda}\, e^{-\lambda (t - t_0)}, & t > t_0
\end{cases}

Substituting this into equation (2), we finally have

P_m =
\begin{cases}
\dfrac{\lambda}{b + \lambda}\, e^{b \left(\frac{m}{r} - t_0\right)}, & m \le r t_0 \\
1 - \dfrac{b}{b + \lambda}\, e^{-\lambda \left(\frac{m}{r} - t_0\right)}, & m > r t_0
\end{cases}

The obtained Pm is shown in Figure 7.
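The closed form above is straightforward to evaluate numerically. The sketch below is a direct transcription of these equations (the code is ours, for illustration), using the trace-070502 parameter estimates from Table 1 further below:

```python
# Two-phase chunk propagation model: transition time t0 and the Pm curve.
import numpy as np

def model_pm(m, r, b, lam, M0):
    """Model occupancy Pm at chunk positions m, for playback rate r,
    average upload rate b, request rate lam and network size M0."""
    t = np.asarray(m, dtype=float) / r        # equation (2): t = m / r
    t0 = np.log(lam * M0 / (b + lam)) / b     # phase I -> phase II transition
    phase1 = lam / (b + lam) * np.exp(b * (t - t0))
    phase2 = 1.0 - b / (b + lam) * np.exp(-lam * (t - t0))
    return np.where(t <= t0, phase1, phase2), t0

pm, t0 = model_pm(m=np.arange(0, 300), r=10.0, b=1.0, lam=0.19, M0=10_000)
print(f"t0 = {t0:.1f} s; Pm first exceeds 0.95 at m = {int(np.argmax(pm >= 0.95))}")
```

With these values t0 ≈ 7.4 s, so phase I ends around chunk position m = r·t0 ≈ 74.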
Figure 7. Model Pm curve

From Figure 7 we can draw the following insights for the evaluation and design of P2P live streaming systems.

First, the chunk propagation time does not grow linearly with M0. For example, when M0 increases from 10K to 200K, the chunk propagation time only increases by about one third. This characteristic demonstrates the scalability of P2P live streaming systems.

Second, the decrease of the chunk propagation time is obvious when b increases. For example, when b increases from 0.5 to 1.5, the chunk propagation time decreases by about one third. In contrast, the improvement of the chunk propagation time brought by an increase of λ is limited compared with that brought by an increase of b. That is to say, the average upload rate b is a more dominant parameter for system performance than the request rate λ. Moreover, the difference in chunk propagation time for different b is most obvious when Pm < 0.02, i.e. while there are not yet many peers acting as sources. Therefore, increasing the media server's upload capacity in this beginning phase can considerably improve the average upload speed b of the whole network, and the chunk propagation time improves accordingly.

5.1. Model Verification

We verified our model against the measured Pm distributions in PPLive. We first measured PPLive's behavior to estimate ranges for the model parameters; the obtained parameter ranges are shown in the 2nd column of Table 1.

Table 1. Model parameters

Parameter | Estimated range | Trace 070502 | Trace 070604
r         | 10              | 10           | 10
M0        | 1K - 20K        | 10K          | 10K
b         | 1 - 1.25        | 1            | 1.15
λ         | 0.15 - 0.4      | 0.19         | 0.35

In detail, we measured PPLive's chunk request retry behavior by passively sniffing its client. We found that the retry probability is 10%-20% and the retry interval can be 2.5-6 s. This relatively long retry interval means a peer rarely requests the same chunk from multiple peers at the same time, which reduces duplicate chunk reception. Because λ is determined by the peers' chunk request strategy and the retry rate, and assuming that most peers already tried to request the chunk in phase I so that most requests in phase II are retries, we estimate that λ is close to the request retry rate. We estimated the value range of λ as 0.15-0.4.

The number of peers in a PPLive channel varies from channel to channel. It is usually in the range 1K-20K, although it can be very large at special events [12]. We use 10K here.

The peers' average upload rate b is difficult to measure by active crawling. We estimate it as follows. First, b should be equal to or larger than the average download rate, and when the system works in the stable status, the peers' average download rate equals the media playback rate. Second, we assume the P2P network provider drives the utilization of the peers' upload capacity to a relatively high value for better playback quality, so the playback rate should be close to the system's average upload speed. We finally estimate that the average upload rate is 1-1.2 times the playback rate.

The verification result shows that our model reflects the characteristics of the measured Pm curves in the real-world P2P live streaming system. Figure 8 shows the verification results on two traces; the model parameters used are listed in the 3rd and 4th columns of Table 1, respectively.
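As a rough sanity check on these estimates (our own back-of-the-envelope arithmetic, not a result reported from the traces): with the trace-070502 parameters b = 1, λ = 0.19 and M0 = 10K,

t_0 = \frac{1}{b} \ln\!\left(\frac{\lambda M_0}{b + \lambda}\right) = \ln\!\left(\frac{1900}{1.19}\right) \approx 7.4 \text{ s}

and in phase II the uncovered population decays as e^{−λ(t − t0)}, so 95% coverage is reached around t0 + ln(b / (0.05(b + λ)))/λ ≈ 7.4 + 14.9 ≈ 22 s, consistent with the 20-30 s propagation time observed in Figure 6.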
Figure 8. Verification of models

6. Conclusion

We proposed a P2P streaming network measurement method based on the buffer bitmap. We proved its feasibility by modeling and analyzing the dynamics of the buffer length, and we proved that the measured buffer occupancy probability reflects the chunk propagation process in the P2P network. With this insight, we established a P2P streaming chunk propagation model and verified it in a real-world P2P live streaming system with our measurement method. This verification also demonstrated the potential of our measurement method for analyzing and researching P2P streaming systems.

Why the parameters λ of the two traces differ is an interesting question. Moreover, the underlying mechanism that determines λ also has an important effect on the performance of a P2P streaming system, e.g. the quality of local media playback. We shall investigate this in more depth in the future.

7. References

[1] www.pplive.com
[2] G. Huang, "Experiences with PPLive", keynote, P2P-TV Workshop in conjunction with SIGCOMM 2007. Available: http://www.sigcomm.org/sigcomm2007/p2p-tv/Keynote-P2PTV-GALE_PPLIVE.ppt
[3] Y. Chen, C. Chen and C. Li, "A Measurement Study of Cache Rejection in P2P Live Streaming System", Workshop on Multimedia Network Systems and Applications in conjunction with ICDCS 2008, June 2008.
[4] Y. Zhou, D. M. Chiu and J. C. S. Lui, "A Simple Model for Analyzing P2P Streaming Protocols", 15th IEEE International Conference on Network Protocols (ICNP 2007), Beijing, China, Oct. 2007.
[5] R. Kumar, Y. Liu and K. W. Ross, "Stochastic Fluid Theory for P2P Streaming Systems", in Proceedings of IEEE INFOCOM, 2007.
[6] D. Lou, Y. Mao and T. H. Yeap, "The Production of Peer-to-Peer Video-Streaming Networks", P2P-TV Workshop in conjunction with SIGCOMM 2007.
[7] D. Xu, H. K. Chai, C. Rosenberg and S. Kulkarni, "Analysis of a Hybrid Architecture for Cost-Effective Streaming Media Distribution", SPIE/ACM Conference on Multimedia Computing and Networking (MMCN'03), San Jose, CA, Jan. 2003.
[8] Y. Tu, J. Sun and S. Prabhakar, "Performance Analysis of a Hybrid Media Streaming System", SPIE/ACM Conference on Multimedia Computing and Networking (MMCN'04), San Jose, CA, Jan. 2004.
[9] S. Ali, A. Mathur and H. Zhang, "Measurement of Commercial Peer-To-Peer Live Video Streaming", Workshop on Recent Advances in Peer-to-Peer Streaming, Aug. 2006.
[10] X. Hei, Y. Liu and K. W. Ross, "Inferring Network-Wide Quality in P2P Live Streaming Systems", 2007. Available: http://eeweb.poly.edu/faculty/yongliu/docs/index-buffermap.pdf
[11] L. Vu, I. Gupta, J. Liang and K. Nahrstedt, "Measurement of a Large-Scale Overlay for Multimedia Streaming", poster, High Performance Distributed Computing (HPDC 2007).
[12] X. Hei, C. Liang, J. Liang, Y. Liu and K. W. Ross, "A Measurement Study of a Large Scale P2P IPTV System", Nov. 2006. Available: http://cis.poly.edu/~ross/papers/P2PliveStreamingMeasurement.pdf
