This work was supported by China 973 Program 2007CB307101, China NSFC 60672069 and China NSFC 60772043.

2. Related work
Peer buffer size is a frequently mentioned parameter in the design of P2P live streaming systems. In Coolstreaming [3], 120s is used as the buffer size of all peers because experimental results show that the time lag between nodes is unlikely to be higher than 1 minute. In Anysee [4], different buffer sizes are used for peers in different layers, e.g. 20ms for peers in the 1st layer, 41.6ms for peers in the 2nd layer, etc. In [5], 1 minute is used as the buffer size. In [6], it is found that buffering can dramatically improve the performance of a P2P streaming system; different buffer sizes, e.g. 0, 30, 60 and 120 seconds, are simulated and evaluated, and it is observed that a 30-second lag is sufficient to obtain almost all potential gains from buffering. All these works focus on the size of the peer's local buffer. In this paper, what we discuss and model is a fixed-duration virtual buffer that includes both the P2P network and the peers' local buffers.
The traffic characteristics of controlled PPLive peers were measured in Oct 2005 and Dec 2005 [7]; a high rate of parent change was observed at that time. The problem of remotely monitoring network-wide quality was examined in [8] by analyzing crawled peer buffer maps. The overlay-based characteristics were studied in [9] by analyzing crawled peer partner lists. In [10], it was observed that the buffer size in PPLive varied from 7.8 MBytes to 17.1 MBytes; this was measured by downloading the media file from the local streaming server after physically disconnecting the PC from the network. The authors inferred that PPLive adaptively allocates the buffer size according to the streaming rate and the buffering time period specified by the media source, but they did not explore the buffer management algorithm. This paper provides the answer.
3. PPLive peer buffer structure are only cached for accessing of other peers. The other
In a P2P live streaming system, people watch programs the way they watch TV. TV watching has two characteristics: (1) audiences do not care about the content before their selected start point; (2) all audiences of one channel watch the same scene at the same time and keep the same forward pace. Therefore, the played content becomes out-of-date immediately and is not required by any audience any more. We call such a watching pattern the instant watching pattern. This pattern is also a requirement of a P2P live streaming system and should be satisfied as much as possible. That is to say, the content in a P2P live streaming system has only a limited lifetime and can be safely rejected from a peer's cache after some time.
The media data in a P2P live streaming system is usually organized, transmitted and cached in units called chunks. Chunks are uploaded to the network by the media server one by one, each with a sequence number. A peer uses this sequence number to assemble the received chunks and then sends them to the local media player for playback. The sequence number is usually incremented continuously and can be regarded as the chunk's offset from the start point of the program, so it is usually called the offset of the chunk. A chunk with a smaller offset has an earlier playback time than a chunk with a bigger offset.
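For concreteness, a chunk can be modeled as little more than a sequence number plus its media payload. The following Python sketch is illustrative only; the names Chunk, offset and payload are ours, not PPLive's actual wire format:

from dataclasses import dataclass, field

@dataclass(order=True)
class Chunk:
    offset: int      # sequence number, counted from the start point of the program
    payload: bytes = field(default=b"", compare=False)

# A chunk with a smaller offset has an earlier playback time:
assert Chunk(offset=7) < Chunk(offset=8)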
The peer's local buffer in a P2P live streaming system has two functionalities. One is buffering, i.e. it holds chunks for reordering and assembling. The other is caching, i.e. it caches data for access by partner peers. For simplicity, we simply call it the peer buffer.
Figure 1. Structure of peer's local buffer (the buffer window is split by the Media Assembled Point into a cache part and a buffer/cache part, with a '0/1' bitmap over the chunks; out-of-date chunks are rejected at the buffer head and newly arrived chunks extend the buffer end).

The structure of the peer's local buffer is shown in Figure 1. In this figure, the Media Assembled Point (MAP) points to the last chunk that has been sent to the media player. The MAP separates the peer's local buffer into two parts. One is the cache part, which includes the space allocated for chunks whose offsets are smaller than or equal to the MAP. The chunks in this part have already been assembled and sent to the media player; therefore, they are only cached for access by other peers. The other is the buffer/cache part, which includes the space allocated for chunks whose offsets are bigger than the MAP. Chunks in this part are waiting to be assembled, so they are buffered; at the same time, they can also be accessed by other peers, so they are also cached.

The '1's and '0's in Figure 1 indicate whether each chunk has been received by the peer. Since the chunks in the cache part have all been received and sent to the media player, they are always marked with '1'. In the buffer/cache part, the received chunks are still waiting for earlier chunks to finish assembling, so only the received chunks are marked with '1'. Such a '0/1' bit pattern is usually called the buffer bitmap.

When a new chunk is discovered and the peer prepares to request it, buffer space is allocated for it and the corresponding bit in the buffer bitmap is initialized to '0'. The buffer end then moves forward if required. When a new chunk is received, the corresponding '0' becomes '1'. When a continuous block of '1's beginning at the MAP appears and can be assembled, the chunks are assembled into a media block and sent to the media player for local playback, and the MAP moves forward.
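This bookkeeping can be made concrete with a small Python sketch. It is a minimal illustration of the mechanism described above, not PPLive's actual implementation, and the class and method names are our own:

class PeerBuffer:
    # bits: 0 = space allocated, chunk not yet received; 1 = chunk received.
    # map_offset is the Media Assembled Point (MAP), i.e. the last chunk
    # already sent to the media player.

    def __init__(self, start_offset):
        self.map_offset = start_offset - 1
        self.end = start_offset - 1          # buffer end (newest allocated offset)
        self.bits = {}                       # offset -> 0/1

    def request(self, offset):
        # A new chunk is discovered: allocate space, initialize its bit to
        # '0', and move the buffer end forward if required.
        self.bits.setdefault(offset, 0)
        self.end = max(self.end, offset)

    def receive(self, offset):
        # The chunk arrived: the corresponding '0' becomes '1'.
        self.bits[offset] = 1

    def assemble(self):
        # Send the continuous block of '1's beginning at the MAP to the
        # media player and move the MAP forward past it.
        sent = []
        while self.bits.get(self.map_offset + 1) == 1:
            self.map_offset += 1
            sent.append(self.map_offset)
        return sent

For example, after request(1), request(2) and receive(2), assemble() returns nothing because chunk 1 is still missing; once receive(1) arrives, assemble() returns [1, 2] and the MAP advances to offset 2.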
As a chunk becomes out-of-date, it can be rejected from the cache. By Belady's minimum cache rejection principle, i.e. that the most efficient caching algorithm always discards the information that will not be needed for the longest time in the future, the chunk at the head of the buffer should be rejected: it is the oldest chunk in the buffer and has the least possibility of being requested by other peers in the future. Therefore, it is rejected from the buffer and the buffer head moves forward.
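The reasoning step can be made explicit with a short sketch (the function names are ours). Under the instant watching pattern, all peers keep the same forward pace, so the chunk at the buffer head, having the smallest offset, will never be requested again and is therefore always the Belady victim:

def belady_victim(cached_offsets, future_requests):
    # Belady's principle: discard the cached item whose next request lies
    # farthest in the future; items never requested again are ideal victims.
    def next_use(offset):
        return (future_requests.index(offset)
                if offset in future_requests else float("inf"))
    return max(cached_offsets, key=next_use)

# The head chunk (offset 100) is already behind every audience's playback
# point and will never be requested again, so it is the one to reject:
assert belady_victim([100, 101, 102], [101, 102, 103]) == 100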
When a peer receives a partner peer's query for its available chunk list, it returns its buffer bitmap together with the offset of the chunk at the head of its buffer. With this information, the partner peer can schedule its requests for chunks. A peer can also query the tracker for the status of the media server's upload buffer. The tracker returns the media server's buffer window, i.e. the offsets of the buffer head and the buffer end; all chunks between these two offsets are available.
4.2. Basic observation

We first compared the basic trends of Rs and Ro. To calculate Rs at ti, we used the following equation, based on our observation in [12]:

Rs,i = (ei − hi) / 120 (chunk/s)    (1)

To obtain Ro, we used N samples {Pj, j ∈ [k − N/2, k + N/2 − 1]} to calculate Pj's slope at tk. We selected N = 12, and we applied a running average with a window size of 24 to smooth the curve. Figure 2 shows the Rs obtained from one tracker and the Ro of a randomly selected peer.
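For concreteness, the rate computations can be sketched as follows. Here hi and ei are taken to be the head and end offsets of the tracker-reported buffer window at ti, and the least-squares slope estimate is our assumption, since the fitting method is not spelled out above:

import numpy as np

def source_rate(head_offset, end_offset, buffer_seconds=120.0):
    # Equation (1): Rs,i = (ei - hi) / 120 (chunk/s).
    return (end_offset - head_offset) / buffer_seconds

def offset_rate(times, offsets, k, n=12):
    # Slope of the N samples {Pj, j in [k - N/2, k + N/2 - 1]} at tk,
    # estimated with a least-squares linear fit; assumes k >= n / 2.
    lo, hi = k - n // 2, k + n // 2
    slope, _intercept = np.polyfit(times[lo:hi], offsets[lo:hi], 1)
    return slope

def smooth(values, window=24):
    # Running average with window size 24, used to smooth the Ro curve.
    return np.convolve(values, np.ones(window) / window, mode="valid")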
7. Conclusion