
Seminar

Algorithms Of The Internet


Organized by Christian Schindelhauer

Elaboration on Topic 03

Web Caching
by Stefan Luecking (6014415)
stl(at)upb.de

University of Paderborn
Date: August 4th, 2004

1 An Overview On Web Caching

The popularity of the Internet has risen dramatically in the last few years. The number of users
is rising, as is the amount of data available in, and transferred through, the Internet. Although the
number of servers on the Internet has increased, as has the bandwidth between them, many servers
are not able to deal with the large number of data requests by the users, and many connections
are congested by the large amounts of transferred data.
Many of these problems can be addressed by web caching. Caching is a technique widely used
wherever data is accessed; consider for example the cache of a processor or the cache of a file
system. It works by buffering data from slower devices in a fast, but relatively small
buffer. Thus the access time to cached data is reduced significantly. By carefully choosing which
data to buffer, it is possible to achieve a tremendous speed-up when accessing data. When this
concept is used to buffer data from the Internet, it is referred to as web caching. In most cases web
caches work by intercepting the requests of the Internet users. When the data is already cached,
the cache satisfies the user request. Only in case the data is not stored in the cache is it requested
from the original site, stored in the cache, and afterwards sent to the user.
There are several benefits of web caching. When data is requested, it usually has to travel between
several computers before it arrives at the user's computer. When a cache is placed close to the
user, the data has to travel this way only once, as all further requests for it can be served by the
cache; the original site is not bothered anymore. This has several side effects. First, the load of
the server is distributed over several caches, thus reducing the response time for the user's requests.
It also reduces bandwidth usage, as the data does not have to be relayed from the originating server to
the cache. When the cache is placed near the user, the travel time of the signal is also reduced.
Therefore, a cache can help in reducing bandwidth usage, computation time, travel time, server
load and latency, especially as perceived by the user.
While there are several similarities between traditional caching and web caching, there are even
more differences, as web caching is more complex. Consider the example of a processor cache,
which is used to buffer data from the main memory. The buffered data is uniform in size and the
response time of the main memory is always constant. Data that is cached from the web differs
in size; just consider the difference between a plain HTML page and a video file. To make things
even more complex, the web cache has to cache data from different sources, with the transfer time
of the data varying dramatically, depending on bandwidth, overall traffic and server load.

2 Several approaches to web caching

There are several ways to realize web caches, each one having its individual advantages and
drawbacks. Which concept is used for realizing a web cache therefore highly depends on the needs
of the operator. In turn, the algorithms used are dependent on the chosen architecture. Here we
will present an overview of a few web caching architectures [BO].

2.1 Proxy Caching

Proxy caching is one of the most popular techniques used for web caching applications. It strongly
resembles the original caching concept. The concept is displayed in figure 1.
In order to use a proxy cache server, the user first has to configure his browser to use the
proxy. This is necessary so that the requests from the user are sent to the proxy cache server. If the
requested data is already cached, the user's request is served by the proxy cache. In case the data
is not cached, it is fetched from the address the user requested, cached, and afterwards returned
to the user.
Using proxy caching is fairly simple and requires only minimal effort. This is offset by
several constraints. First, every user has to configure his browser to use the proxy
cache server, although this might be of decreasing importance as work is in progress on self-configuring browsers. The second drawback is that the proxy cache is a single point of failure, as all
user requests are relayed through it. When the proxy fails, all clients using it are forced to reconfigure
their browsers. The third problem is that a proxy cache server does not scale with increasing
traffic. To cope with this, either a proxy with higher performance has to be used, or the traffic has to
be distributed over several proxies.

Figure 1: The concept of proxy caching (above without, below with proxy caching). Without the
proxy, users A, B and C each request documents directly from the document source; with the
proxy cache server placed between the users and the document source, repeated requests are
answered from the cache, resulting in bandwidth savings and traffic reduction.

2.2 Reverse Proxy Caching

In case of reverse proxy caching the proxy cache is placed in the vicinity of the content it is used to
cache. It can be used by content providers to reduce the load of web servers by diverting requests
sent to the web servers to the proxies caching the content.

2.3 Transparent Caching

Transparent caching, as the name suggests, is invisible to the user. It works by intercepting the
data requests of the user and redirecting them to web caches. An algorithm decides where the
requests should be redirected, thus distributing the requests, and therefore the load, over several
caches. Because this concept works by transparently intercepting HTTP requests, it has to be
implemented at the switch or router level.
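To illustrate one way such a redirection decision could be made, here is a minimal Python sketch
that partitions requests over a group of caches by hashing the requested URL, so that all requests
for the same document reach the same cache. The cache names and the hash-based policy are
illustrative assumptions, not part of the original text or of any particular product.

    import hashlib

    # Hypothetical cache pool; in practice these would be the addresses of the
    # web caches the switch or router can redirect requests to.
    CACHES = ["cache-1.example.net", "cache-2.example.net", "cache-3.example.net"]

    def choose_cache(url: str) -> str:
        """Map a requested URL to one of the caches.

        Hashing the URL keeps all requests for the same document on the same
        cache, so each document is stored only once within the cache group.
        """
        digest = hashlib.md5(url.encode("utf-8")).hexdigest()
        return CACHES[int(digest, 16) % len(CACHES)]

    # Example: both requests for the same URL end up at the same cache.
    print(choose_cache("http://www.upb.de/index.html"))
    print(choose_cache("http://www.upb.de/index.html"))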

2.4 Adaptive Web Caching

The adaptive web caching concept is built around several independent web caches, communicating
with, and querying each other. Those web caches join to form groups of web caches, with each web
cache possibly belonging to more than one group. The composition of the groups is, as the name
suggests, adaptive. It changes according to the user requests, therefore responding to altering
interests in web content.

2.5 Push Caching

Push caching is a concept that dynamically responds to varying user requests. When a server
using push caching detects a large number of user requests originating from the same source or
direction, it establishes a web cache close to the origin of these requests. Push caching requires the
server to be able to create web caches at different sites, possibly spread all over the country. To
establish a remote cache, the server needs access to remote computers. Therefore, push caching is
primarily targeted at larger content providers that operate several web servers at different places.

2.6 Active Caching

Active caching is a project that was initiated by the University of Wisconsin. An increasing
amount of Internet traffic is becoming personalized (for example by using cookies). Although it is
possible to cache personalized objects, it is not efficient. Consider a number of people accessing
a personalized object. While the objects are mostly the same, they are slightly modified by the
personalization for every user. This forces every object to be cached separately. Active caching
uses applets located in the cache to personalize documents. Thus, it is sufficient to store
the object once; the personalized aspect of the object is generated at request time. This results in
far better memory usage. As can be seen, active caching targets a special type of web content
and is not a general caching concept.
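As a minimal sketch of the idea (not the actual Active Cache interface), assume a hypothetical
greeting page: the cache stores a single shared template once, and a small piece of code executed
in the cache fills in the user-specific part at request time. The template, the function name and
the cookie-based user lookup are illustrative assumptions.

    # Shared, cacheable part of the document: stored once in the cache.
    TEMPLATE = "<html><body>Hello, {name}! Today's front page ...</body></html>"

    # Hypothetical "cache applet": runs inside the cache at request time and
    # produces the personalized variant of the cached object.
    def personalize(template: str, cookies: dict) -> str:
        name = cookies.get("username", "guest")
        return template.format(name=name)

    # Two users request the same cached object; only the personalization differs.
    print(personalize(TEMPLATE, {"username": "Alice"}))
    print(personalize(TEMPLATE, {"username": "Bob"}))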

3 Proxy Web Caching Algorithms

As already mentioned, proxy caching is very similar to traditional caching mechanisms previously
used in contexts other than web caching. We will first point out what a caching algorithm is
intended to do, and how this is actually done. Then we will introduce a number of algorithms and
show how they deal with the problems inherent to web caching.

3.1 General aspects of caching algorithms

The perfect caching algorithm would be one which could serve any request immediately, without having to reload the requested data. This means that all data would have to be cached before
it is requested, which could only be accomplished by two means: either by knowing the
future data requests, which is only possible in a minority of applications, or by storing all data
that could possibly be requested, which is not feasible due to the limited size of the cache memory.
Therefore, efficient usage of cache memory is important, and anticipating future data requests
would be an appreciated feature of every caching algorithm.
All traditional caching algorithms, as well as proxy caching algorithms, have a common mode
of operation. They intercept data requests and try to serve them from the cache. If the requested
data is not cached yet, it is fetched and cached first, and the request is served afterwards. In case
the requested data does not fit into the cache, the algorithm evicts data from the cache until the
newly requested data fits.
The most noticeable difference between caching algorithms lies in the strategy used to evict data.
This is known as the replacement policy or replacement scheme. The choice of which documents
should be removed is crucial to the efficiency of the cache, as the cache should minimize the
amount of data that has to be loaded. This means that ideally only data should be removed
that is not accessed anymore or that is easy to reload, while data that will be accessed in the near
future should be kept in the cache. In practice, this is done by assigning an evaluation value to all
cached data entities. This value is computed by an evaluation function, which is unique to every
caching algorithm. The evaluation function is closely related to the replacement policy, as the
data to evict is chosen on the basis of the evaluation values. In most cases, this is the data entity
having the smallest evaluation value. Therefore, most replacement policies are fairly simple. The
evaluation function can be more complex, but it usually does no more than condense a few values
into a single evaluation value, as we will see later.
A well known and widely used example of a caching algorithm is the Least Recently Used (LRU)
algorithm. It works by assigning a timestamp to every data entity whenever it is cached or
accessed. This can be considered the evaluation function. When data has to be evicted, the entity
with the oldest timestamp is chosen by the replacement policy. Another example is the Least
Frequently Used (LFU) algorithm. This algorithm assigns an access counter to every data entity.
Whenever the data is accessed, the respective counter is increased; this is the evaluation function
of the algorithm. The replacement policy always chooses the data entity with the lowest counter value,
removing the data that was accessed least often.
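To make the separation between evaluation function and replacement policy concrete, the following
Python sketch implements the common mode of operation described above, with LRU and LFU as
interchangeable evaluation functions. The class and helper names are chosen for illustration only.

    class SimpleCache:
        """Caching skeleton: serve from cache, otherwise fetch, evict, store."""

        def __init__(self, capacity, evaluate):
            self.capacity = capacity          # number of entities the cache can hold
            self.evaluate = evaluate          # evaluation function, e.g. LRU or LFU
            self.entries = {}                 # key -> (value, metadata used by evaluate)
            self.clock = 0                    # logical time used for LRU timestamps

        def request(self, key, fetch):
            self.clock += 1
            if key in self.entries:
                value, meta = self.entries[key]
            else:
                value, meta = fetch(key), {"count": 0}   # cache miss: load from origin
                if len(self.entries) >= self.capacity:
                    # Replacement policy: evict the entity with the smallest value.
                    victim = min(self.entries,
                                 key=lambda k: self.evaluate(self.entries[k][1]))
                    del self.entries[victim]
            meta["count"] += 1                # LFU bookkeeping
            meta["last_access"] = self.clock  # LRU bookkeeping
            self.entries[key] = (value, meta)
            return value

    # Evaluation functions: a smaller value means "evict first".
    lru = lambda meta: meta["last_access"]    # oldest timestamp is evicted first
    lfu = lambda meta: meta["count"]          # least frequently used is evicted first

    cache = SimpleCache(capacity=2, evaluate=lru)
    for url in ["/a", "/b", "/a", "/c", "/a"]:
        cache.request(url, fetch=lambda u: "document " + u)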

3.2 Necessities of Web Caching

Although the conventional replacement policies mentioned above perform well when applied to
processor caches, they are not equally successful when used for web caching. The reason is that they
do not account for the higher complexity of web caching.
For every web caching algorithm it is important to take into account the size of the cached documents,
as efficient memory usage is crucial to the performance of the cache. Consider an algorithm
evaluating the documents based on the time they need to load. Suppose there are two cached documents,
differing significantly in size, with the larger document having a slightly higher evaluation value
than the small document. Thus, when an eviction is necessary, the small document will be evicted,
freeing only a small amount of memory. As the values of the documents are nearly equal, it would
have been advantageous to evict the large document instead. It can be restored nearly as fast as the
small one, and would have freed a large amount of cache memory, which could improve the overall
performance significantly.
Depending on the evaluation function, it is possible that the evaluation values of all documents
have to be updated when a document is requested or cached (we will present an example of this
later). As a proxy cache usually has to serve a large number of requests and stores a large number
of documents, this can become computationally expensive. Also, whenever a document is evicted,
all documents have to be compared to each other in order to find the one that should be evicted.
This requires the data structure holding the documents to be traversed. Most algorithms require a
priority queue or multiple linked lists for this in order to reduce the seek time from O(k) to O(log(k)),
with k designating the size of the cache. Because of the high load of a proxy cache server, this is
considered a severe problem by proxy cache implementors.
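As an illustration of this data structure overhead, the following sketch keeps the evaluation values
in a binary heap (Python's heapq module), so that the eviction candidate is found in O(log k)
rather than by a linear scan; outdated heap entries are skipped lazily when a document's value
changes. How a real proxy organizes this is implementation specific; the sketch only shows the
principle.

    import heapq

    class EvictionQueue:
        """Priority queue of (evaluation value, document) pairs for O(log k) eviction."""

        def __init__(self):
            self.heap = []        # min-heap of (value, document key)
            self.current = {}     # document key -> current evaluation value

        def update(self, doc, value):
            # Instead of removing the old heap entry, push a new one and remember
            # the current value; outdated entries are filtered out when popping.
            self.current[doc] = value
            heapq.heappush(self.heap, (value, doc))

        def pop_victim(self):
            while self.heap:
                value, doc = heapq.heappop(self.heap)
                if self.current.get(doc) == value:   # entry is still up to date
                    del self.current[doc]
                    return doc
            return None

    q = EvictionQueue()
    q.update("/a", 3.0)
    q.update("/b", 1.5)
    q.update("/b", 4.0)          # /b was re-accessed, its value increased
    print(q.pop_victim())        # evicts /a, the document with the smallest value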
To address the above problems, special web caching algorithms were devised. Some of them,
and the way they deal with those problems, are presented below. In order to show how existing
caching techniques were gradually refined to accommodate web caching, we will introduce the
GreedyDual algorithm and its variants. Afterwards we will present a randomization technique for
web caching algorithms. Finally, we will present the Lowest Relative Value algorithm, which was
designed from scratch for web caching applications.

3.3 The GreedyDual Algorithm

The GreedyDual algorithm was proposed by Young [You94]. It can be regarded as a generalization
of the Least Recently Used algorithm, adapted to suit the needs of web caching. The algorithm
works by maintaining an evaluation value H(p) for every cached document p. When a document p is
cached, its value is set to the cost incurred by caching p, designated c(p). When a document needs
to be removed from the cache, the document with the smallest value H is chosen. This document
is evicted, and the value H of every document remaining in the cache is decreased by the value of
the evicted document. Whenever an already cached document p is requested, H(p) is reset to c(p).
The GreedyDual algorithm is interesting in a number of ways. First, documents are removed
in a fashion similar to LRU. Documents that were accessed a long time ago have their values
reduced, literally aging them as other documents are accessed. In contrast, documents that
are accessed are reset to their original value. The difference to LRU is that LRU considers only
the time of the last access, whereas with GreedyDual an eviction ultimately depends on the value
of the document and the values of the previously removed documents.
Another interesting aspect is that GreedyDual tries to minimize the overall cost incurred
by caching documents. Thus, by an appropriate definition of the cost function c, it is possible to
determine which aspect of web caching should be optimized. For example, c can be set to reflect
the download time of a document; GreedyDual then tries to minimize the overall download time.
By this means any aspect associated with caching documents can be minimized, although only
one aspect at a time.
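The following sketch implements the GreedyDual rule as described above: H(p) is set to c(p) when
a document is cached or requested again, the document with the smallest H is evicted, and the
remaining values are decreased by the evicted document's H. For simplicity the cache capacity is
counted in documents rather than bytes; the names are illustrative.

    class GreedyDual:
        """GreedyDual replacement: H(p) = c(p) on (re-)access, decrease all on eviction."""

        def __init__(self, capacity):
            self.capacity = capacity   # maximum number of cached documents (simplified)
            self.H = {}                # document -> current evaluation value H(p)

        def access(self, doc, cost):
            if doc not in self.H and len(self.H) >= self.capacity:
                # Evict the document with the smallest H ...
                victim = min(self.H, key=self.H.get)
                h_victim = self.H.pop(victim)
                # ... and age all remaining documents by that value.
                for d in self.H:
                    self.H[d] -= h_victim
            # Caching or re-accessing a document resets its value to its cost c(p).
            self.H[doc] = cost

    cache = GreedyDual(capacity=2)
    cache.access("/a", cost=5.0)
    cache.access("/b", cost=1.0)
    cache.access("/c", cost=3.0)   # evicts /b (smallest H), /a is aged to 4.0
    print(cache.H)                 # {'/a': 4.0, '/c': 3.0}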

3.4 The GreedyDual-Size Algorithm

The GreedyDual algorithm was modified by Cao and Irani [CI] to also account for the size of the
cached documents. The modified algorithm is known as GreedyDual-Size. The sole difference
between this variant and its ancestor is the composition of the evaluation value H. When a
document p is cached or reaccessed, H(p) is set to c(p)/s(p), with s(p) being the size of the document
p, thus taking the size of the document into account. The effect is that when cost-equivalent
documents are cached, the larger ones tend to be removed earlier than the smaller ones (of course
ultimately depending on the accesses to the documents), increasing the efficiency of memory use.

3.5 A Modified GreedyDual-Size Algorithm

Later, the GreedyDual-Size algorithm itself was modified by Cao and Irani [CI]. They intended
to eliminate a drawback of the original GreedyDual-Size and address a common problem of some
proxy web caching algorithms, which is the need to update the evaluation value of every stored
document when an eviction takes place. This need was eliminated by introducing the concept of the
inflation value, referred to as L.
At the beginning, L is initialized to 0. Whenever a document p is cached or reaccessed, its value
H(p) is set to L + c(p)/s(p). When an eviction occurs, the remaining documents are not re-evaluated;
instead, L is set to the value H of the evicted document. Although the computational overhead is
reduced significantly, the modified algorithm behaves exactly like the original variant.
To show the efficiency of the GreedyDual-Size algorithm, we will show in section 4 that it is
k-competitive.
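In a Python sketch, the modified GreedyDual-Size algorithm then looks as follows: on every caching
or reaccess H(p) is set to L + c(p)/s(p), and on eviction only the inflation value L is raised to the
evicted document's H, so no other entry has to be touched. Here the cache capacity is measured in
bytes; the class and parameter names are illustrative.

    class GreedyDualSize:
        """GreedyDual-Size with inflation value L (no re-evaluation on eviction)."""

        def __init__(self, capacity_bytes):
            self.capacity = capacity_bytes
            self.L = 0.0                    # inflation value, starts at 0
            self.docs = {}                  # document -> (H value, size, cost)
            self.used = 0                   # bytes currently occupied

        def access(self, doc, size, cost):
            if doc in self.docs:
                self.used -= self.docs[doc][1]
            # Evict documents with the smallest H until the new document fits.
            while self.used + size > self.capacity and self.docs:
                victim = min(self.docs, key=lambda d: self.docs[d][0])
                h, s, _ = self.docs.pop(victim)
                self.used -= s
                self.L = h                  # raise the inflation value instead of aging
            # H(p) = L + c(p) / s(p) on caching or reaccess.
            self.docs[doc] = (self.L + cost / size, size, cost)
            self.used += size

    cache = GreedyDualSize(capacity_bytes=100)
    cache.access("/big.jpg", size=80, cost=8.0)    # H = 0.1
    cache.access("/page.html", size=10, cost=2.0)  # H = 0.2
    cache.access("/video", size=60, cost=6.0)      # /big.jpg evicted, L becomes 0.1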

3.6 Randomizing Proxy Caching Algorithms

As we have seen, a common problem of web caching algorithms is their need for the computationally
expensive maintenance of data structures when documents are requested or evicted. A
replacement policy that works without a data structure that has to be serviced is the random
replacement scheme [PP01]. Whenever a document has to be evicted, the document is chosen
at random by the algorithm. The benefit of having no data structure to maintain is obvious. A
severe drawback inherent to the algorithm is that every document can be evicted, including the
useful ones. Although the eviction of documents is sped up, this drawback results in an overall
poor performance of the random replacement scheme.

Psounis and Prabhakar [PP01] had the idea to combine the benefits of the original random replacement scheme with the performance of other web caching algorithms. They proposed a random
replacement scheme that can replace the original replacement scheme of nearly any web caching
algorithm. Eviction is still based on the evaluation function of the original caching algorithm, but
a random element is added to improve performance. The replacement scheme works by maintaining a subset of a predefined size N of the cache. As long as no eviction occurs, the subset is
empty. When an eviction takes place, the subset is filled with samples of documents, drawn at
random from the cache. Now, instead of evicting the least useful of all documents, the least useful
of the documents in the subset is evicted. Afterwards, the subset is partially emptied, keeping
the M least useful documents in the subset. When another eviction occurs, the subset is filled up
by adding N-M samples and the process is repeated. The mode of operation is displayed in figure 2.

Figure 2: Mode of operation of the randomized replacement scheme. Documents are represented
by boxes, with the character designating the document name and the number the document value.
The cache initially holds A(2), B(4), C(5), D(1), E(8), F(6), G(9) and H(3), and a subset with
N=4 and M=2 is used. First eviction: the subset is filled with the samples B(4), C(5), F(6) and
H(3), and H, having the smallest value, is chosen for eviction. After the eviction, the M=2 least
useful documents, B(4) and C(5), are kept in the subset. Next eviction: N-M new samples, A(2)
and D(1), are added to the subset, and D will be evicted.
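The following sketch performs one eviction step of the randomized replacement scheme on the
document values from figure 2. The evaluation function is assumed to be given; here the stored
number is used directly, and a smaller value means a less useful document. The function and
variable names are illustrative.

    import random

    def evict_one(cache, kept_samples, N, M):
        """Evict one document using the randomized replacement scheme.

        cache:        dict mapping document name -> evaluation value
        kept_samples: the M least useful documents kept from the previous eviction
        Returns the evicted document and the samples kept for the next eviction.
        """
        # Fill the subset up to N documents with random samples from the cache.
        candidates = set(kept_samples) & set(cache)
        pool = [d for d in cache if d not in candidates]
        candidates |= set(random.sample(pool, N - len(candidates)))
        # Evict the least useful document of the subset (not of the whole cache).
        victim = min(candidates, key=cache.get)
        del cache[victim]
        candidates.discard(victim)
        # Keep the M least useful remaining documents for the next eviction.
        kept = sorted(candidates, key=cache.get)[:M]
        return victim, kept

    cache = {"A": 2, "B": 4, "C": 5, "D": 1, "E": 8, "F": 6, "G": 9, "H": 3}
    kept = []
    victim, kept = evict_one(cache, kept, N=4, M=2)
    print(victim, kept)   # e.g. H ['B', 'C'] if B, C, F and H happened to be sampled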
The benefit of the proposed replacement scheme is that it eliminates the need to traverse the
data structure when an eviction takes place. Only a small number of documents has to be examined
to determine the document to evict, reducing the seek time. On the downside, the evaluation function
remains the same. Therefore, the possible need to constantly re-evaluate the documents also
persists. Another drawback of this technique is that the eviction of documents is not optimal
with respect to the evaluation values of all documents. As only the least useful of the sampled
documents is evicted, there might be a document in the cache that is even less useful. This might
lead to less efficient cache memory usage, compared to the original algorithm before it was randomized.
Therefore, the use of this random replacement scheme has to be evaluated by weighing the possible
speed-up from reduced computational overhead against the less efficient memory usage.

3.7 The Lowest Relative Value Algorithm (LRV)

The LRV algorithm was developed specifically for web caching purposes [RV98]. It is of special
interest because it is not a refinement or evolution of a previously known caching strategy. Like
many other caching algorithms, it uses an evaluation function generating a single value for each
document, which is the basis for the replacement policy, which always evicts the document with
the lowest value. The novelty of the algorithm is the evaluation function itself. It is a relatively
complex function compared to other web caching algorithms. To exploit document access patterns
and determine the factors that are crucial for caching documents, the authors performed a statistical
analysis of web traces. The conclusions drawn from this analysis were the foundation of the LRV
algorithm.
In this paper we can only scratch the surface of the mechanism behind the Lowest Relative
Value algorithm. Therefore, we will concentrate on presenting the evaluation function and giving
an idea of its complexity.
The designers concluded that evicting a document has a benefit, as well as generating cost.
They also found that the probability of a document being reaccessed is crucial to its value
for the cache. Therefore, their evaluation function assigns a value V to every document, with
V = (C/B) * Pr, where C and B represent the cost and the benefit of evicting the document,
respectively, and Pr reflects the document's probability of being reaccessed. Therefore, the higher
the cost or the probability of a reaccess, the more valuable the document is. On the other hand,
the higher the benefit of evicting the document, the lower the document's value.
Several different metrics can be used to calculate the cost C of a document, e.g. bandwidth or
loading time. The benefit B is usually related to the object's size. As expected, the computation
of Pr, the probability of reaccessing a document, turned out to be more complex, as computing Pr
effectively means anticipating future requests.
In order to find out which factors contribute to Pr, the authors performed a number of statistical
analyses on web traces. By examining the document access patterns and the user behavior,
they could make the observations below.

- Only a small fraction of all cached documents will be accessed again later.
- About 50 percent of the documents accessed more than once have a reaccess time larger than one day.
- Less than 10 percent of all documents are reaccessed after more than 2-3 weeks.
- The probability of reaccessing a document depends highly on the number of previous accesses.
- In some cases, the reaccess pattern of a document is influenced by its size, type and source.

With the help of the above observations, the authors were able to identify several parameters that
influence Pr, which are presented here.

- The time between consecutive requests to the same document, known as the interaccess time
- The number of previous accesses
- The document size
- Several other parameters of lower significance, like the source and type of a document, as well as the client requesting the document
As the authors wanted to take into account the most important parameters, the computation of
Pr was based on the parameters interaccess time (designated t), the number of previous accesses
to the document (designated i) and the document's size (designated s). The computation is done
by the following formula, its parameters being described below.

    Pr(i, t, s) = P1(s) * (1 - D(t))    if i = 1
    Pr(i, t, s) = P(i)  * (1 - D(t))    otherwise

From the statistical data, the authors were able to establish a distribution function of the
interaccess time, designated D(t). This function can be used to express the dependency of Pr on
time. As this function is computationally expensive, it is approximated in practice. Through D(t),
the interaccess time contributes to Pr(i, t, s).
The number of previous accesses to a document is taken into account by P(i). This is the probability
of a document being reaccessed after having been accessed i times.
The size of a document enters the computation of Pr only for documents accessed once so far,
through P1(s), the probability that a document of size s which has been accessed exactly once will
be accessed again. For a complete and in-depth discussion of the algorithm, the statistical model
and its computation, please see [RV98].
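To summarize how the pieces fit together, here is a small sketch of the LRV evaluation. The
concrete shapes of D(t), P(i) and P1(s) below are placeholders chosen only to make the sketch
runnable; the real functions are fitted to the web trace statistics described in [RV98].

    import math

    # Placeholder statistical functions; in LRV these are fitted to web traces.
    def D(t):
        """Approximate distribution of interaccess times (probability that the
        next access happens within t seconds)."""
        return 1.0 - math.exp(-t / 86400.0)          # toy assumption: one-day scale

    def P(i):
        """Probability of a further access after i previous accesses (i > 1)."""
        return 1.0 - 1.0 / (i + 1)                   # toy assumption

    def P1(s):
        """Probability of a second access for a document of size s bytes."""
        return 0.3 * math.exp(-s / 100000.0)         # toy assumption

    def Pr(i, t, s):
        """Probability that the document will be reaccessed."""
        if i == 1:
            return P1(s) * (1.0 - D(t))
        return P(i) * (1.0 - D(t))

    def value(cost, benefit, i, t, s):
        """LRV document value V = (C / B) * Pr."""
        return cost / benefit * Pr(i, t, s)

    # Example: a 50 KB document fetched at cost 2.0, benefit 50000 (its size),
    # accessed 3 times, last accessed one hour ago.
    print(value(cost=2.0, benefit=50000.0, i=3, t=3600.0, s=50000.0))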

4 Proving the Online Optimality of GreedyDual-Size

The GreedyDual-Size algorithm is k-competitive [CI]. We will prove this by comparing the
GreedyDual-Size algorithm to an optimal caching algorithm. Both algorithms are compared by
processing the same document request sequence.

Definitions
- s_cache designates the size of the cache
- s_min designates the size of the smallest document
- L_final designates the value of L at the end of the algorithm

Theorem 1
The GreedyDual-Size algorithm is k-competitive, where k = s_cache / s_min.

Idea of proof
We will show that the following holds for any fixed but arbitrary document request sequence:
- The cost of the optimal algorithm is at least s_min · L_final
- The cost of the GreedyDual-Size algorithm is at most s_cache · L_final
Combining both bounds, the cost of GreedyDual-Size is at most s_cache · L_final = (s_cache / s_min) · s_min · L_final, which is at most k times the cost of the optimal algorithm. This yields Theorem 1.
Cost measurement for the proof
The cost of a caching algorithm is usually considered to be the cost of all cache misses. For bounding
the cost of the algorithms we will use an alternative cost measurement. Instead of charging an
algorithm with the cost of a document when it is cached, we will charge the algorithm with the
cost of a document whenever it is evicted. Because every evicted document has to be cached before,
the cost for those documents is the same under both cost measurements. The only difference between
the cost measurements lies in those documents that are cached, but never evicted. Fortunately, the
cost of those documents sums up to an additive constant, therefore the proof using the alternative
cost measurement also holds for the standard cost measurement.
The comparison model
We will compare the two algorithms by letting them serve a fixed but arbitrary document request
sequence. When a document is requested, this request is first served by the optimal algorithm.
Afterwards the request is served by the GreedyDual-Size algorithm. Then a new document request
happens; this repeats until the document request sequence is concluded. The model is illustrated
in figure 3.

Figure 3: The comparison model of the proof. A document request is first served by the optimal
algorithm and then by the GreedyDual-Size algorithm; the document requests are processed one
after another along the time axis.

Proof of the cost of the optimal algorithm
We will begin the proof with two important observations concerning the inflation value L.
- L only increases when a document is evicted from the cache of the GreedyDual-Size algorithm.
- L never decreases.
We will examine a timeframe during which the GreedyDual-Size algorithm has a document p in its
cache that the optimal algorithm does not. Such a timeframe can only begin with the eviction of p
by the optimal algorithm. The value of L at the beginning of the timeframe will be designated
L_start, and at the end of the timeframe L_end. The timeframe can end in two ways: either by
bringing p back into the cache of the optimal algorithm, or by removing p from the cache of the
GreedyDual-Size algorithm as well, see figure 4. We will consider both possibilities and show that
L increases by at most c(p)/s(p) during such a timeframe. This is a property we will need later in
the proof.

Figure 4: The two possibilities for ending the timeframe (case 1: the optimal algorithm caches p
again; case 2: the GreedyDual-Size algorithm evicts p as well).

Consider the first case: the timeframe ends immediately after the optimal algorithm caches p
again. Obviously p has to have been cached by the GreedyDual-Size algorithm before the timeframe
begins.
- L ≤ L_start at the moment p was cached, as L never decreases with time.
- H(p) ≤ L_start + c(p)/s(p), because of the above observation.
The document p can only be requested at the last request of the timeframe, as the timeframe
ends immediately after p is cached again. Therefore, the timeframe ends just before H(p) is
modified, which means that the value H(p) remains the same during the whole timeframe. Also,
the document p is never removed from the cache of the GreedyDual-Size algorithm, as this would
end the timeframe by complete removal of p, which is covered by case two. Nevertheless, it is
possible that another document is removed. For that case, we can make the observations below.
- L = H(a) when an arbitrary document a is evicted.
- H(a) ≤ H(p), as p is never evicted and always the document with the smallest value H is removed.
- L ≤ H(p) ≤ L_start + c(p)/s(p) during the whole timeframe, because of the above observations.
Therefore, L increases by at most c(p)/s(p) in this case.

Consider the second case: the timeframe ends with the eviction of p from the cache of the
GreedyDual-Size algorithm.
- H(p) ≤ L_start + c(p)/s(p), as p was cached before the timeframe begins.
- H(p) does not change during the timeframe, as p is not requested (otherwise p would be recached
  by the optimal algorithm, which is covered by case one).
- L_end = H(p) ≤ L_start + c(p)/s(p), as p is evicted, by the above observations.
Therefore, L increases by at most c(p)/s(p) in this case.

Conclusion
We have shown that whenever L increases, it increases by at most c(p)/s(p). We can also make
the observations below.

1. s_min ≤ s(p), by definition.
2. c(p)/s(p) ≤ c(p)/s_min, by observation (1).
3. Σ c(p)/s(p) ≤ Σ c(p)/s_min, which is observation (2) summed over all evicted documents p.
4. L_final ≤ Σ c(p)/s(p), as L is increased by at most c(p)/s(p) for every evicted document p.
5. L_final ≤ Σ c(p)/s_min, which follows from observations (3) and (4).

From the last observation we can draw the conclusion that s_min · L_final ≤ Σ c(p). This shows
that the summed cost of all documents evicted by the optimal algorithm, which is the cost of the
optimal algorithm, is at least s_min · L_final. This concludes the proof of the optimal algorithm's cost.
Proof of the cost of the GreedyDual-Size algorithm
Consider a document p that is evicted by the GreedyDual-Size algorithm. We will examine a
timeframe which begins when p is cached and ends when p is evicted. Let L_start denote the value
of L when p is cached and L_end the value of L when p is evicted. We can make the observations
below.

1. H(p) = L_start + c(p)/s(p) when p is cached at the beginning of the timeframe.
2. H(p) ≥ L_start + c(p)/s(p) during the whole timeframe, as p might be requested again; each such
   request resets H(p) to the current value of L plus c(p)/s(p), and L never decreases.
3. L_end = H(p) ≥ L_start + c(p)/s(p), as the timeframe ends with the eviction of p.
4. L_end − L_start ≥ c(p)/s(p), which follows from the above observations.

For every document p and its timeframe we draw an interval on the real line from L_start to L_end;
the length of the interval is therefore L_end − L_start. The interval is closed on the left and open on
the right side, and is assigned a weight of s(p). By observation four, the cost produced for this
interval by evicting p is bounded by (L_end − L_start) · s(p), which is its length times its weight. As
we draw an interval for every evicted document, the cost of the GreedyDual-Size algorithm is
bounded by the sum, over all intervals, of their length times their weight.
Consider an arbitrary point on the real line. When an interval covers this point, the corresponding
document is in the cache while L passes this value. This means that for any point the summed
weight of all intervals covering this point is at most s_cache, as an interval's weight is its document's
size, and the summed size of all documents in the cache cannot exceed s_cache. As all intervals lie
in the range from 0 to L_final, the overall cost of the GreedyDual-Size algorithm is bounded by
s_cache · L_final.
This concludes the proof of the online optimality of GreedyDual-Size.


References

[BO]    Greg Barish and Katia Obraczka. World wide web caching: Trends and techniques. USC
        Information Sciences Institute.

[CI]    Pei Cao and Sandy Irani. Cost-aware WWW proxy caching algorithms. University of
        Wisconsin-Madison and University of California-Irvine.

[PP01]  Konstantinos Psounis and Balaji Prabhakar. A randomized web-cache replacement
        scheme. In IEEE INFOCOM, 2001.

[RV98]  Luigi Rizzo and Lorenzo Vicisano. Replacement policies for a proxy cache. UCL-CS
        Research Note, 1998.

[You94] Neal Young. The k-server dual and loose competitiveness for paging. Algorithmica, 11,
        June 1994. Rewritten version of "On-line caching as cache size varies", in The 2nd Annual
        ACM-SIAM Symposium on Discrete Algorithms, 241-250, 1991.

