This work was supported by China 973 2007CB307101, China NSFC 60672069 and China NSFC 60772043.

Therefore, we infer that the buffer occupancy probability distribution of a peer in fact reflects some general characteristics of the P2P network. We then establish the mapping relationship between a chunk's position in the buffer and the time elapsed since its upload at the media server. With this relationship, we establish the mapping relationship between the buffer occupancy probability distribution and a chunk's propagation process in the P2P network. We also verify this relationship in our PPLive measurement trace.

We then model the chunk propagation process in the P2P streaming network by splitting it into two phases, and we verify the obtained model against the measured buffer occupancy probability distribution in PPLive. The result shows the correctness of the model and the potential of our measurement method.

This paper is organized as follows. In Section 2 we introduce related work. In Section 3, we analyze the dynamic of the buffer length with a mathematical model and verify the result in our trace. In Section 4, we present our measurement of the buffer occupancy probability and prove its mapping relationship to the chunk propagation process in the network. Section 5 presents the chunk propagation model and its verification in PPLive's trace with our measurement method. Section 6 concludes the paper.

2. Related work

For P2P streaming system modeling, [4] presented a mathematical model based on the sliding peer buffer behavior, i.e. each peer maintains a buffer which acts as a sliding window into the stream of chunks distributed by the server. It also introduced the notion of buffer occupancy probability and established its model on this notion. With the established model, it discussed different data-driven downloading strategies. Our measurement finding of the stable buffer behavior validates their sliding-window assumption in a real-world P2P streaming system. [5] developed a stochastic fluid model and investigated the effect of peers' upload capacity and joining/departing rate on P2P streaming systems. [6] introduced the notion of production for P2P streaming systems and investigated the achievable system production under different protocols. [7] presented an in-depth quantitative analysis of the proposed CDN (content distribution network)-P2P mixed streaming network. It split the chunk propagation process in the P2P network into two phases and obtained a mathematical solution for the system handoff time. Inspired by this two-phase modeling method, we establish the P2P chunk propagation model and verify it in a real-world P2P streaming network. [8] derived an equation to describe the capacity growth of a P2P streaming system and obtained the server-peer transition time under a fixed request rate.

For P2P streaming system measurement, [9] measured the traffic characteristics of controlled PPLive peers in Oct 2005 and Dec 2005 and found a high rate of change of parents. [10] inferred the network-wide quality of PPLive by actively crawling peers' buffer maps. [11] investigated the overlay-based characteristics of PPLive by crawling peers' partner lists. [12] measured PPLive's user behavior, playback quality and connection characteristics.

3. Dynamic of PPLive buffer length

As a typical P2P live streaming system, the PPLive network consists of a media server and peers. After the media server uploads a media chunk into the P2P network, peers interact with each other to propagate the chunk to the whole network. The media server and peers frequently send their buffer status information to their partner peers so that the partners can request chunks accordingly. By continuously tracking the buffer status information from a peer or the media server, we can track the dynamic of its buffer, e.g. the sliding of the buffer head, the media server's chunk upload rate, etc.

We model the P2P network and a peer's local buffer together as a virtual buffer, as shown in Figure 1. Its end is at the media server and its head is at the head of the peer's local buffer. When a chunk is uploaded into the network by the media server, it is injected into the buffer at the end. It is then propagated in the network until it reaches the peer. After reaching the peer, it slides towards the buffer head and is finally rejected from the buffer. Therefore, this virtual buffer includes both the P2P network and the peer's local buffer.

Figure 1. Virtual buffer consists of P2P network and peer local buffer

We analyze the dynamic of the length of this virtual buffer as below.

Let s(t) be the media server's chunk upload service curve. When the server uploads a chunk with ID k to the network at time t, we say s(t) = k. Let f(t) be the peer's chunk rejection curve. When the peer rejects a chunk with ID k from its buffer at time t, we say f(t) = k. f(t) is usually called the offset of the buffer.

Let r(t) = ds(t)/dt be the chunk upload rate of the media server. Because the media server usually
uploads chunks at their playback rate, we also call r(t) the chunks' playback rate. Similarly, we use g(t) = df(t)/dt to denote the peer's buffer rejection rate.

Let q(t) be the buffer length at time t and let CH(t) and CT(t) be the IDs of the chunks at the buffer head and buffer end at time t, respectively. The buffer duration τ(t) is defined as the time needed to empty the buffer if there is no new chunk input after time t. In other words, the buffer duration at time t is the time interval τ(t) such that

CH(t+τ(t)) = CT(t)

A chunk rate based (CRB) buffer rejection strategy [3] rejects a chunk from the buffer at the rate at which it was injected into the buffer. In other words, every chunk leaves the buffer at the rate at which it comes.

We first prove the following theorem.

Theorem 1. A CRB buffer is a fixed-duration buffer, i.e. τ(t) is a constant.

Proof: Obviously, we have

dq(t)/dt = r(t) − g(t)

With r(t) = g(t+τ(t)) for a CRB buffer, we have

dq(t)/dt = g(t+τ(t)) − g(t)    (1)

Since the time interval for the media server to upload the chunks from CH(t) to CT(t) is the same as the time interval for a peer to drain the chunks from CH(t) to CT(t) out of its buffer under a CRB strategy, we have

q(t) = ∫[t, t+τ(t)] g(ν) dν

Taking the derivative of it, we have

dq(t)/dt = (1 + dτ/dt) g(t+τ) − g(t)

Substituting this into equation (1), we have

dτ(t)/dt = 0

Thus the theorem is proved. ■

We validated this fixed-duration buffer behavior in our measured PPLive trace. The detailed result was reported in [3].

We now study the dynamic of the buffer length when r(t) changes from a to a+b at t = t0, i.e. r(t) = a + bu(t−t0), where u(t) is the unit step input.

On the time domain, with r(t) = a + bu(t−t0), we have dqτ(t)/dt = bu(t−t0) − bu(t−t0−τ). The buffer length can be written as:

qτ(t) = qτ(t0)              for t = t0−
qτ(t) = qτ(t0) + b(t−t0)    for t0+ ≤ t ≤ (t0+τ)−
qτ(t) = qτ(t0) + bτ         for t ≥ (t0+τ)+

Therefore, we can obtain the following characteristics of the dynamic of the buffer length on the time domain.

1). All peers start changing their buffer length at the same time when r(t) changes, i.e. at t0.

2). The time required to finish the change is equal to the buffer duration τ. Because different peers have different buffer durations, they finish the change of the buffer length at different times. The longer the buffer length before the change, the later the change finishes, and hence the longer the time interval of the change.

3). When t = t0 + τ, all peers have finished the change of the buffer length. The finishing time is exactly the changing time of the buffer rejection rate.

We also study the dynamic of the buffer length on the offset domain. The "offset" here is the chunk ID of the buffer head, i.e. f(t).

Assume the media server starts uploading chunks at t = 0 and the ID of the first uploaded chunk is s0. When r(t) changes from a to a+b at t = t0, we have

s(t) = s0 + at + b(t−t0)u(t−t0)

Define the chunk with ID s0 + at0 as the jumping chunk of the service curve, and denote s* = s0 + at0.

Because f(t) = s(t−τ) = s0 + a(t−τ) + b(t−t0−τ)u(t−t0−τ), we have qτ(f) = s(t) − f(t) = [a + bu(t−t0−τ)]τ + b(t−t0)[u(t−t0) − u(t−t0−τ)], i.e. the change of the buffer length starts at t0 and ends at t = t0 + τ.

Denote f(t0) as f0; then f0 = s0 + a(t0−τ). Denote f(t0+τ) as f1; then f1 = s0 + at0 = s*. Between f0 and f1, we have f = s0 + a(t−τ) and t = (f−s0)/a + τ. With qτ(f) = aτ + b(t−t0), substituting t into qτ(f) gives qτ(f) = aτ + b[f − (s0 + a(t0−τ))]/a = aτ + bτ(f−f0)/(f1−f0), since f1 − f0 = aτ.

In summary, we have:

qτ(f) = aτ                         for f = f0−
qτ(f) = aτ + bτ(f−f0)/(f1−f0)      for f0+ ≤ f ≤ f1−
qτ(f) = (a+b)τ                     for f ≥ f1+

And f0 = s0 + a(t0−τ) = s0 + at0 − aτ = s* − aτ = s* − qτ(f0−).

Therefore we can obtain the following characteristics of the dynamic of the buffer length on the buffer offset.

1). All peers' buffer length changes end at the same buffer offset, i.e. at s*.

2). When a peer's buffer length starts to change, the buffer offset is smaller than the ID of the chunk that changes the playback rate.

3). The longer the buffer length before the change, the smaller the buffer offset when the change starts, and the larger the offset change range.

The above characteristics of the dynamic of the buffer length can be verified in the measured PPLive trace. The verification result is shown in Figure 2 and Figure 3. Figure 2 shows the dynamic of 3 peers' buffer lengths and their cache rejection rates on the time domain. Figure 3 shows the dynamic on the buffer offset. The media server's chunk upload rate curve is also shown as a reference. We can easily find the above characteristics in them.

Such stability makes the continuous observation and measurement of a peer's buffer bitmap meaningful. In a real-world P2P streaming system, r(t) doesn't change frequently. For example, the measurement result in Figure 2 shows that the duration of r(t) = 8 is about 20 min and the duration of r(t) = 10 is even longer. Therefore, the time during which the buffer length stays stable is long enough to capture enough bitmap samples to obtain their statistical characteristics. The detail of our measurement method is demonstrated below.

While the system works in the stable status, i.e. when r(t) doesn't change, we periodically request peers' buffer bitmaps. Because the buffer size is fixed during this time, the obtained bitmaps have the same length. We then align them at the buffer head and order them by sampling time. We finally obtain a bitmap matrix as shown in Figure 4.

Figure 4. Peer buffer bitmap matrix
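The bitmap-matrix step reduces to a simple per-position average: stack the aligned bitmaps and, for each chunk position, count the fraction of samples in which that position holds a chunk. The following sketch illustrates the idea; the data layout and function name are our own illustration, not PPLive's actual message format:

```python
# Sketch (not PPLive protocol code): estimate the per-position buffer
# occupancy probability from sampled buffer bitmaps. Each sample is a list
# of 0/1 flags, already aligned at the buffer head; all samples are taken
# while r(t) is stable, so they have the same length.

def occupancy_probability(bitmap_samples):
    """Return P[m]: fraction of samples in which position m holds a chunk."""
    n = len(bitmap_samples)
    width = len(bitmap_samples[0])
    assert all(len(s) == width for s in bitmap_samples), "unequal bitmap lengths"
    return [sum(s[m] for s in bitmap_samples) / n for m in range(width)]

# Toy bitmap matrix: 4 samples of a 6-position buffer (1 = chunk present).
matrix = [
    [1, 1, 1, 1, 0, 0],
    [1, 1, 1, 0, 1, 0],
    [1, 1, 1, 1, 1, 0],
    [1, 1, 0, 1, 0, 0],
]
print(occupancy_probability(matrix))  # [1.0, 1.0, 0.75, 0.75, 0.5, 0.0]
```

With enough samples per stable period, each column of the matrix yields an estimate of the buffer occupancy probability at that chunk position.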
Figure 5. Pr(N(m)=1) curves in PPLive

However, what network characteristics does Pm reflect? We re-inspect the virtual buffer abstraction.

5. Modeling

We model the chunk propagation process in the P2P streaming network as below.

In a P2P streaming system, peers periodically query the buffer status of their partner peers and the media server. When a new chunk is found, a peer requests this chunk and starts a timer to wait for it. If the chunk is received, the timer is cleared. If the chunk request fails, the timer times out and the peer requests again. The request failure may be induced by the loss of the request or the response in the network. It may also be induced by the overload or capacity constraint of the partner peer or the media server.

We make the following two assumptions for the model. The first assumption is that the media server acts just like a common peer during the chunk propagation process, i.e. its upload capacity for a chunk is just like that of a common peer and peers do not prefer to request chunks from the media server. This assumption is reasonable because decreasing the load on the media server brings better scalability, and it also conforms to the current typical system configuration in commercial P2P networks, e.g. PPLive [2].

The second assumption is that the network size M0 keeps constant during the propagation of a chunk. This is reasonable because it is found that only one thousandth of the peers depart or join in one second [12], and, as Figure 6 shows, the propagation of a chunk to the whole network is usually finished in 20-30 s or less. That is to say, during this time only 4%-6% or fewer of the peers change.

Denote the peers' average upload speed as b chunk/s and the peers' chunk request rate as λ request/s. Denote the number of peers who have obtained the chunk at time t as N(t) and the network's total upload capacity for this chunk at time t as C(t). We have

C(t) = bN(t)

The propagation of a chunk in the P2P network is split into two phases.

In phase I, most peers do not have the chunk yet and need to request it, but the network's total upload capacity for this chunk is unable to satisfy these requests. Therefore, the chunk propagation in this phase is limited by C(t) and some requests are rejected. We have

dN(t)/dt = C(t) = bN(t)

Assume the chunk is uploaded by the media server at t = 0. Regarding the media server as the first node in the network that has this chunk, we have

N(0) = 1

Therefore we have

N(t) = e^(bt)

As the chunk propagates in the network, more and more peers obtain the chunk and start to serve its download, so the chunk request rate decreases while C(t) increases. Finally, C(t) becomes able to satisfy all peers' requests. Denote this transition time as t0.

Let M(t) denote the number of peers in the network who do not have the chunk at time t. With λ as their average chunk request rate, the total chunk request rate in the network is λM(t).

Figure 6. Comparison between N(t)/M0 and Pm

When t = t0, we have

C(t0) = bN(t0) = λM(t0)

With N(t) + M(t) = M0, we have

N(t0) = λM0/(b+λ)
M(t0) = bM0/(b+λ)    (3)

Because N(t0) = e^(bt0), we have:

M0 = ((b+λ)/λ) e^(bt0)    (4)

And

t0 = (1/b) ln(λM0/(b+λ))

After the propagation of a chunk enters phase II, the propagation is only limited by the peers' chunk request rate λ. λ is decided by the peer's chunk request strategy and its retry rate when chunk requests fail. Because no request is rejected now, we have:

dM(t)/dt = −λM(t)

With equation (3), we have:

M(t) = M(t0) e^(−λ(t−t0)) = (bM0/(b+λ)) e^(−λ(t−t0))

And

N(t) = M0 − M(t) = M0 − (bM0/(b+λ)) e^(−λ(t−t0))

In summary, we have:

N(t) = e^(bt)                             for t ≤ t0
N(t) = M0 − (bM0/(b+λ)) e^(−λ(t−t0))     for t > t0

Dividing it by M0, we have

N(t)/M0 = (1/M0) e^(bt)                   for t ≤ t0
N(t)/M0 = 1 − (b/(b+λ)) e^(−λ(t−t0))     for t > t0

Replacing equation (4) in it, we have:

N(t)/M0 = (λ/(b+λ)) e^(b(t−t0))          for t ≤ t0
N(t)/M0 = 1 − (b/(b+λ)) e^(−λ(t−t0))     for t > t0

Replacing this into equation (2), we finally have:

Pm = (λ/(b+λ)) e^(b(m/r − t0))           for m ≤ rt0
Pm = 1 − (b/(b+λ)) e^(−λ(m/r − t0))      for m > rt0

The obtained Pm is shown in Figure 7. The improvement brought by the increase of λ is limited compared with that brought by the increase of b. That is to say, the average upload rate b is a more dominant parameter affecting the system's performance than the request rate λ. Moreover, the increase of the chunk propagation time for different b is obvious when Pm < 0.02, which means there are not many peers acting as sources during that time. Therefore, increasing the media server's upload capacity in this beginning phase can bring considerable improvement to the average upload speed b of the whole network, and the chunk propagation time is then improved accordingly.

5.1. Model Verification

We verified our model with the measured Pm distribution in PPLive.

We first measured PPLive's behavior to estimate ranges for the model parameters. The obtained parameter ranges are shown in the 2nd column of Table 1.

Table 1. Model Parameters
Para   Estimated Range   070502   070604
r      10                10       10
M0     1K - 20K          10K      10K
b      1 - 1.25          1        1.15
λ      0.15 - 0.4        0.19     0.35
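To illustrate how the Table 1 parameters plug into the two-phase model, the following sketch evaluates the closed-form t0, N(t)/M0 and Pm derived above, using values like the 070502 column (r = 10, M0 = 10K, b = 1, λ = 0.19); the function names are our own, not from the paper:

```python
import math

# Sketch: numerically evaluate the two-phase chunk propagation model.
# Parameters follow the 070502 column of Table 1; names are illustrative.

def transition_time(b, lam, M0):
    """t0 = (1/b) ln(lam*M0/(b+lam)): the phase I -> phase II handoff time."""
    return math.log(lam * M0 / (b + lam)) / b

def n_frac(t, b, lam, M0):
    """N(t)/M0: fraction of peers that hold the chunk at time t."""
    t0 = transition_time(b, lam, M0)
    if t <= t0:
        return (lam / (b + lam)) * math.exp(b * (t - t0))      # phase I
    return 1.0 - (b / (b + lam)) * math.exp(-lam * (t - t0))   # phase II

def p_m(m, r, b, lam, M0):
    """P_m at buffer position m: a chunk at position m is m/r seconds old."""
    return n_frac(m / r, b, lam, M0)

b, lam, M0, r = 1.0, 0.19, 10_000, 10
t0 = transition_time(b, lam, M0)
print(f"t0 = {t0:.2f} s")
print(f"N(t0)/M0 = {n_frac(t0, b, lam, M0):.4f}")   # = lam/(b+lam)
print(f"Pm at m = 200: {p_m(200, r, b, lam, M0):.4f}")
```

Both branches of N(t)/M0 agree at t = t0 (each gives λ/(b+λ)), a quick sanity check that the piecewise form is continuous at the handoff.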