
Probabilistic Approximation of Metric Spaces and its Algorithmic Applications

Yair Bartal
Presented by Edva Roditty

Metric spaces

A set of points and a distance function d(x,y) such that:
d(x,y) = 0 if and only if x = y
d(x,y) ≤ d(x,z) + d(z,y) (triangle inequality)
d(x,y) = d(y,x) (symmetry)
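As a concrete illustration (our own sketch, not from the slides), a brute-force check of these axioms over a finite point set could look as follows; the function name and representation are hypothetical:

```python
import itertools

def is_metric(points, d):
    """Brute-force check of the metric axioms on a finite set of points;
    `d` is any candidate distance function (hypothetical names)."""
    for x, y in itertools.product(points, repeat=2):
        if (d(x, y) == 0) != (x == y):      # d(x,y) = 0 iff x = y
            return False
        if d(x, y) != d(y, x):              # symmetry
            return False
    for x, y, z in itertools.product(points, repeat=3):
        if d(x, y) > d(x, z) + d(z, y):     # triangle inequality
            return False
    return True
```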

Metric Spaces

Many optimization problems can be defined in terms of some metric space. Our goal is to bound the worst-case performance ratio of an algorithm compared to that of an optimal algorithm.

Motivation

Probabilistic approximation of metric spaces is important in the case of on-line problems. Many problems defined in terms of metric spaces are almost entirely open in terms of their randomized competitive ratio. It is sufficient to give an algorithm for some class of simple metric spaces in order to obtain upper bounds for any metric space.

Why do we need to approximate metric spaces?

It is easier to get good bounds on the performance ratio for some class of simple metric spaces.

From metric spaces to graphs:

Any finite metric space can be represented by a weighted connected graph. The graph G = (V, E, w) represents a metric space M (over the set V), where w(e) is the weight of the edge e, and d_M(u,v) = d_G(u,v). Here d_G(u,v) is the sum of edge weights over a shortest path between u and v.
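As a sketch of this correspondence (assuming vertices labeled 0..n−1 and an undirected edge list; names are our own), d_G can be computed with Floyd–Warshall:

```python
import math

def graph_metric(n, weighted_edges):
    """All-pairs shortest-path distances d_G of an undirected weighted
    graph, via Floyd-Warshall. `weighted_edges` holds (u, v, w) triples."""
    d = [[0 if i == j else math.inf for j in range(n)] for i in range(n)]
    for u, v, w in weighted_edges:
        d[u][v] = min(d[u][v], w)
        d[v][u] = min(d[v][u], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    return d
```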

Example (Migration problem):

Let a network be represented as a weighted graph G. A set of files resides in different nodes. Each node is a processor.

The cost of an access to file F initiated by a processor v is the distance between v and the processor holding F. A file may be migrated from one processor to another; the cost is D times the distance between the two processors. The goal is to minimize the total cost.
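To make the cost model concrete, here is a small tally of access and migration costs (our own illustration; the event format is hypothetical):

```python
def total_cost(d, location, events, D):
    """Total cost of a run of the migration problem. `d[u][v]` is the
    distance between processors, `location[f]` the processor holding file f,
    and `events` contains ('access', v, f) or ('migrate', f, target)."""
    cost = 0
    for event in events:
        if event[0] == 'access':
            _, v, f = event
            cost += d[v][location[f]]            # distance to the file's holder
        else:
            _, f, target = event
            cost += D * d[location[f]][target]   # D times the migration distance
            location[f] = target                 # the file moves
    return cost
```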

Migration problem:

[Figure: a network of processors with edge weights 2, 3, 3, 4, 5 and files F1–F15 distributed among the nodes.]

Probabilistic approximation

Given a metric space M over a finite set V of n points, we denote the distance between u and v by d_M(u,v).

We say that a metric space N over V dominates a metric space M over V if for every u, v ∈ V, d_N(u,v) ≥ d_M(u,v).

A set of metric spaces S over V α-probabilistically-approximates a metric space M over V if every metric space in S dominates M and there exists a probability distribution over metric spaces N in S such that for every u, v ∈ V, E(d_N(u,v)) ≤ α·d_M(u,v).
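For intuition, a finite distribution over dominating metrics can be checked directly (a sketch with our own names; `distribution` is a list of (probability, d_N) pairs):

```python
import itertools

def approx_factor(V, d_M, distribution):
    """Smallest alpha with E[d_N(u,v)] <= alpha * d_M(u,v) for all u != v,
    after verifying that every d_N in the support dominates d_M."""
    alpha = 1.0
    for u, v in itertools.combinations(V, 2):
        assert all(d_N(u, v) >= d_M(u, v) for _, d_N in distribution)  # domination
        expected = sum(p * d_N(u, v) for p, d_N in distribution)
        alpha = max(alpha, expected / d_M(u, v))
    return alpha
```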

Motivation for α-probabilistically approximating M by S:

If every metric space M is α-probabilistically-approximated by S, and the performance ratio of randomized algorithms for S is at most ρ, then the performance ratio of randomized algorithms for any metric space is at most αρ.

Proof:

Define algorithm A for M: the algorithm chooses at random a metric space N ∈ S and runs an algorithm A_N on N, where A_N has performance ratio ρ.

Define algorithm B for M, and define algorithm B_N for N that behaves like B. Since S probabilistically approximates M (and each N dominates M), for any request sequence σ: E(B_N(σ)) ≤ α·B(σ) and E(A(σ)) ≤ E(A_N(σ)).

Since A_N has performance ratio ρ: E(A(σ)) ≤ E(A_N(σ)) ≤ ρ·E(B_N(σ)) ≤ ρα·B(σ).

The probabilistic metric approximation we will talk about: a set of tree metrics that provides a polylogarithmic probabilistic approximation for any metric space.

We will view a metric space M as a uniform metric space.

The two properties above are combined in a class of tree-metric spaces. We call it k-hierarchically well-separated trees (k-HST).

What is a k-HST?

k-HST:

A k-HST is a rooted weighted tree in which:
the edge weights from any node to each of its children are the same;
the edge weights along any path from the root to a leaf decrease by a factor of at least k.
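A small validator for these two conditions (our own sketch; the tree representation `children[v] = [(child, edge_weight), ...]` is hypothetical):

```python
def is_k_hst(children, root, k):
    """Check the k-HST conditions on a rooted weighted tree."""
    stack = [(root, None)]                      # (node, weight of edge from parent)
    while stack:
        v, parent_w = stack.pop()
        if children.get(v):
            weights = {w for _, w in children[v]}
            if len(weights) > 1:                # edges to children must be equal
                return False
            w = weights.pop()
            if parent_w is not None and parent_w < k * w:
                return False                    # must decrease by a factor >= k
            stack.extend((c, w) for c, _ in children[v])
    return True
```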

Example:

[Figure: a k-HST with edge labels k³, k³, and k², each level decreasing by a factor of k.]

What do we have so far?

Many optimization problems can be defined in terms of some metric space. It is easier to get good bounds on the performance ratio for some class of simple metric spaces. Any finite metric space can be represented by a weighted connected graph. We want to α-probabilistically approximate this weighted connected graph by a k-HST.

A few definitions:
An l-partition of G = (V, E, w) is a collection of subsets of vertices P = {V_1, V_2, …, V_s} such that:
For all 1 ≤ i ≤ s, V_i ⊆ V, and ∪ V_i = V.
For all 1 ≤ i, j ≤ s with i ≠ j, V_i ∩ V_j = ∅.
Let C(V_i) be the subgraph of G induced by V_i (called a cluster). For every 1 ≤ i ≤ s, diam(C(V_i)) ≤ l·diam(G), where diam is the maximum distance between a pair of vertices.
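A direct verifier for these conditions, reusing the graph_metric sketch above (names are our own; vertices are 0..n−1):

```python
def diameter(n, edges, subset):
    """Diameter of the subgraph induced by `subset` (infinite if disconnected)."""
    idx = {v: i for i, v in enumerate(sorted(subset))}
    induced = [(idx[u], idx[v], w) for u, v, w in edges if u in subset and v in subset]
    d = graph_metric(len(subset), induced)
    return max(max(row) for row in d)

def is_l_partition(n, edges, clusters, l):
    """Check: clusters are disjoint, cover V, and each induced cluster
    C(V_i) has diameter at most l * diam(G)."""
    union = set().union(*clusters)
    disjoint = sum(len(c) for c in clusters) == len(union)
    covers = union == set(range(n))
    g_diam = diameter(n, edges, set(range(n)))
    small = all(diameter(n, edges, set(c)) <= l * g_diam for c in clusters)
    return disjoint and covers and small
```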

An example: a 6-partition.

[Figure: a weighted graph with edge weights 1, 3, 4, 5 over a vertex set V, and the clusters of the 6-partition.]

An (r, p, β)-probabilistic partition of G is a probability distribution D over the set of (rp)-partitions P of G (0 < r ≤ diam(G)) such that max{x(u,v)·r/d(u,v)} ≤ β, where x(u,v) is the probability under D that u and v belong to different clusters.

If P is δ-forcing then for every u, v ∈ V: if d(u,v)/r ≤ δ then x(u,v) = 0.
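To make both conditions tangible, here is an empirical checker (our own sketch): it estimates x(u,v) as a frequency over sampled partitions and tests the β bound together with δ-forcing.

```python
import itertools

def check_partition_distribution(n, d, r, samples, beta, delta):
    """`samples` is a list of partitions (lists of vertex sets) drawn from D;
    `d[u][v]` are the graph distances. Estimates x(u,v) by frequency."""
    for u, v in itertools.combinations(range(n), 2):
        cut = sum(1 for P in samples
                  if not any(u in C and v in C for C in P))
        x_uv = cut / len(samples)
        if d[u][v] / r <= delta and x_uv > 0:   # delta-forcing violated
            return False
        if x_uv * r / d[u][v] > beta:           # beta bound violated
            return False
    return True
```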

We will prove the following theorems:

1. Let G be a weighted graph. For any 1 ≤ r ≤ diam(G) there exists a 1/n-forcing (r, 2 ln n + 1, 2)-probabilistic partition of G.
2. If there exists an (r, p, β)-probabilistic partition of every subgraph of G, then G is α-probabilistically-approximated by the set of k-HSTs, with diameter diam(T) = O(diam(G)) and α = O(pk·log_k(diam(G))).

Constructing probabilistic partitions:

How can we construct a 1/n-forcing probabilistic partition?

We join the endpoints of all edges of weight w(e) ≤ r/n into a single vertex. We construct a probabilistic partition of the resulting graph with clusters of diameter at most 2r ln n. We then insert the edges back.
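The joining step is an edge contraction, sketched here with union-find (our own names):

```python
def contract_light_edges(n, edges, threshold):
    """Join endpoints of all edges of weight <= threshold (here r/n) into
    single vertices. Returns the vertex -> supervertex map and the edges
    that survive between distinct supervertices."""
    parent = list(range(n))
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]       # path halving
            v = parent[v]
        return v
    for u, v, w in edges:
        if w <= threshold:
            parent[find(u)] = find(v)
    remaining = [(find(u), find(v), w) for u, v, w in edges if find(u) != find(v)]
    return [find(v) for v in range(n)], remaining
```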

[Figures: the graph, the partition, and the result after adding the edges back.]

About the algorithm that constructs the probabilistic partition: the construction proceeds in stages (at each stage one cluster is carved out). The t-th stage works on the subgraph H_t of G induced by a vertex set U_t ⊆ V. Initialization: U_0 = V, H_0 = G.

The algorithm: repeat until U_t = ∅:

Choose an arbitrary node v_t from U_t. If the connected component of H_t containing v_t has diameter ≤ 2r ln n, that component is defined to be the next cluster C_t. Otherwise, choose a radius z at random (according to some probability distribution) from [0, r ln n], and define the subgraph induced by B_z^{H_t}(v_t) = {u ∈ V(H_t) | d(u, v_t) ≤ z} to be the next cluster.

Set U_{t+1} = U_t \ V(C_t), let H_{t+1} be the subgraph of G induced by U_{t+1}, and proceed.
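A runnable sketch of this cluster-peeling construction, reusing the graph_metric sketch from earlier (our own names; the uniform radius stands in for the distribution, which the slides leave unspecified):

```python
import math
import random

def probabilistic_partition(n, edges, r):
    """Peel off clusters as described above: small components are taken
    whole; otherwise a random-radius ball around an arbitrary center."""
    remaining = set(range(n))
    clusters = []
    while remaining:
        sub = sorted(remaining)
        idx = {v: i for i, v in enumerate(sub)}
        induced = [(idx[u], idx[v], w) for u, v, w in edges
                   if u in remaining and v in remaining]
        d = graph_metric(len(sub), induced)
        v = sub[0]                                  # arbitrary center v_t
        comp = {u for u in sub if d[idx[v]][idx[u]] < math.inf}
        comp_diam = max(d[idx[a]][idx[b]] for a in comp for b in comp)
        if comp_diam <= 2 * r * math.log(n):        # small component: take it whole
            cluster = comp
        else:
            z = random.uniform(0, r * math.log(n))  # random radius z
            cluster = {u for u in comp if d[idx[v]][idx[u]] <= z}
        clusters.append(cluster)                    # C_t
        remaining -= cluster                        # U_{t+1} = U_t \ V(C_t)
    return clusters
```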

Analysis:

We showed that the partition is 1/n-forcing

The bound on the radius (p) of the partition is obvious from the description.

We need to bound x(u,w).

Bounding x(u,w):

Fix the stage t. Let v = v_t be the vertex chosen and let z be the random radius. Assume d(u,v) ≤ d(v,w), and let d̄(x,y) = min{d(x,y), r ln n}.

Define the events:
A_t: the event that u, w are in U_t.
M_t^X: the event that d(v,u) ≤ z < d(v,w).
M_t^N: the event that z < d(v,u).
X_t: the event that (u,w) is contained in none of the clusters C_j for j ≥ t.

All the events below are conditional on A_t.

[Figure: under M_t^X the cluster C_t around v contains u but not w; under M_t^N it contains neither u nor w.]

Pr(X_0) = x(u,w).

Pr(X_t) = Pr(M_t^X) + Pr(M_t^N)·Pr(X_{t+1}). By induction we get Pr(X_t) ≤ (2 − t/(n−1))·(d̄(u,w)/r), and thus Pr(X_0) ≤ 2·(d̄(u,w)/r).

Probabilistic approximation by HSTs: the k-HST construction is done recursively.

Define G_1 = G. With every stage i of the recursion we associate a graph G_i and a parameter r_i = diam(G_i)/(pk), for 1 ≤ i ≤ t, where t is the depth of the recursion.

The algorithm:

At the i-th stage we compute an (r_i, p, β)-probabilistic partition of the graph G_i. Let the clusters of this partition be C_1, C_2, …, C_s. Recursively compute the k-HSTs for the clusters, setting G_{i+1} = C_j for j = 1, 2, …, s.

[Figure: the graph G_i partitioned by the (r_i, p, β)-probabilistic partition into clusters C_1, C_2, C_3, C_4.]

Let the computed trees be T_1^{i+1}, T_2^{i+1}, …, T_s^{i+1}, with corresponding roots q_1^{i+1}, …, q_s^{i+1}. We now construct the k-HST T_i for G_i by adding a node q_i as the root of the tree, attaching all the above trees to q_i and letting their roots be its children.

[Figure: the recursively built trees attached to the new root q_i by edges of equal weight.]

The weight of the edges from q_i to each of its children is set to diam(G_i).

The last step of the recursion is at depth t, when G_t consists of a single vertex.
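Putting the recursion together (a sketch reusing graph_metric and probabilistic_partition from above; the tree representation is our own, and a connected graph with positive weights is assumed):

```python
def build_hst(vertices, edges, p, k):
    """Recursive k-HST construction: partition G_i with radius
    r_i = diam(G_i)/(pk), recurse on the clusters, and hang the subtrees
    under a new root q_i. Returns ('leaf', v) or ('node', w, subtrees),
    where w is the weight of the edges from q_i to its children."""
    if len(vertices) == 1:
        return ('leaf', vertices[0])
    idx = {v: i for i, v in enumerate(vertices)}
    relabeled = [(idx[u], idx[v], w) for u, v, w in edges]
    d = graph_metric(len(vertices), relabeled)
    diam = max(max(row) for row in d)               # diam(G_i)
    r = diam / (p * k)                              # r_i = diam(G_i)/(pk)
    clusters = probabilistic_partition(len(vertices), relabeled, r)
    subtrees = []
    for C in clusters:                              # recurse on each cluster C_j
        sub_v = [vertices[i] for i in sorted(C)]
        sub_e = [(u, v, w) for u, v, w in edges if idx[u] in C and idx[v] in C]
        subtrees.append(build_hst(sub_v, sub_e, p, k))
    return ('node', diam, subtrees)                 # edges from q_i get weight diam(G_i)
```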

Why is it a k-HST?

diam(G_{i+1}) ≤ p·r_i ≤ p·(diam(G_i)/(pk)) = (1/k)·diam(G_i), and therefore it is a k-HST!

In addition we get diam(G_i) ≤ diam(G)/k^{i−1}, and that is why the depth of the recursion is t ≤ log_k diam(G).

Bounding the diameter of the tree we created:

Λ(T_i): the maximum length of a path from the root to a leaf in the k-HST constructed at the i-th stage. We prove: Λ(T_i) ≤ (1 + 1/(k−1))·diam(G_i). The proof is by induction: at the last stage the claim is trivial; we assume it for i+1 and prove it for i.

Λ(T_i) = max_{1≤j≤s} Λ(T_j^{i+1}) + diam(G_i) ≤ max_{1≤j≤s} (1 + 1/(k−1))·diam(C_j^{i+1}) + diam(G_i) ≤ (1 + 1/(k−1))·(1/k)·diam(G_i) + diam(G_i) = (1 + 1/(k−1))·diam(G_i), using (1 + 1/(k−1))·(1/k) = 1/(k−1). Thus, diam(T_i) ≤ 2·Λ(T_i) ≤ 2·(1 + 1/(k−1))·diam(G_i).

Reminder: a set of metric spaces S over V α-probabilistically-approximates a metric space M over V if every metric space in S dominates M and there exists a probability distribution over metric spaces N in S such that for every u, v in V, E(d_N(u,v)) ≤ α·d_M(u,v).

Domination: d_Ti(u,v) ≥ d_Gi(u,v) for any u, v in G_i. Proof by induction:

The case of a single vertex is trivial.

Assume the claim for i+1 and prove it for i:

Case 1: there exists 1 ≤ j ≤ s such that u and v belong to the same cluster C_j^{i+1}. Then

d_Ti(u,v) = d_{T_j^{i+1}}(u,v) ≥ d_{C_j^{i+1}}(u,v) ≥ d_Gi(u,v).

Case 2: u and v belong to different clusters. Then the u–v path in T_i passes through the root q_i, so

d_Ti(u,v) ≥ 2·diam(G_i) ≥ diam(G_i) ≥ d_Gi(u,v).

Upper bounds on E(d_Ti(u,v))/d_Gi(u,v):

We prove by induction that for some h ≥ t, E(d_Ti(u,v)) ≤ pk·(1 + 1/(k−1))·(h−i)·d_Gi(u,v). Assume the bound for i+1 and prove it for i. If the shortest path between u and v is contained in some single cluster C_j^{i+1}, we get from the induction hypothesis:

E(d_{T_j^{i+1}}(u,v)) ≤ pk·(1 + 1/(k−1))·(h−i−1)·d_{C_j^{i+1}}(u,v) = pk·(1 + 1/(k−1))·(h−i−1)·d_Gi(u,v).

In general,

E(d_Ti(u,v)) ≤ (1 − x_i(u,v))·pk·(1 + 1/(k−1))·(h−i−1)·d_Gi(u,v) + x_i(u,v)·diam(T_i),

where x_i(u,v) is the probability that the shortest path between u and v is not contained in any single cluster.

After calculations:

E(d_Ti(u,v)) ≤ pk·(1 + 1/(k−1))·(h−i)·d_Gi(u,v). With h = t ≤ log_k diam(G), the bound is α = O(pk·log_k diam(G)); plugging in p = 2 ln n + 1 from the first theorem (with constant k) gives α = O(log n · log diam(G)).

The probabilistic partitions are δ-forcing. Reminder: if P is δ-forcing then for every u, v in V, if d(u,v)/r ≤ δ then x(u,v) = 0.

Let l be the smallest i such that d_Gl(u,v)/r_l > δ. If i < l then x_i(u,v) = 0. This means that the distance between u and v in T_i is equal to their distance in T_{i+1}.

By induction we can obtain E(d_Ti(u,v)) ≤ pk·(1 + 1/(k−1))·(h−l)·d_Gi(u,v). Consider the largest value i > l such that u and v both belong to G_i:

diam(G_i) ≥ d_Gi(u,v) ≥ d_Gl(u,v), so diam(G_i)/diam(G_l) ≥ d_Gl(u,v)/(pk·r_l) > δ/(pk).

On the other hand, diam(G_i) ≤ diam(G_l)/k^{i−l}.

Combining the two bounds: i − l − 1 ≤ log_k min{p/δ, diam(G)}.

By setting an appropriate h we obtain the bound O(pk·log_k min{p/δ, diam(G)}).

Applications:

Migration problem: Let a network be represented as a weighted graph G. A set of files resides in different nodes. Each node is a processor.

The cost of an access to file F initiated by a processor v is the distance between v and the processor holding F. A file may be migrated from one processor to another; the cost is D times the distance between the two processors. The goal is to minimize the total cost.

Applications:

Migration problem:

[Figure: a network of processors with edge weights 2, 3, 3, 4, 5 and files F1–F15 distributed among the nodes.]

Processor v can accommodate up to K_v files in its local memory. Let m = Σ_{v∈V} K_v be the total memory in the network. Given a network represented by a k-HST (k ≥ 2), there exists a randomized O(log m)-competitive algorithm for this problem. Given a general network G, there exists a randomized O(log m · log n · log(min{n, diam G}))-competitive algorithm for this problem.
