
Divide and Conquer Strategy

This strategy can be applied when the problem can be divided into sub-problems whose solutions can be combined to construct the solution of the whole problem. The steps are:
1) Divide : Obtain the solution directly if possible; otherwise divide the problem into sub-problems.
2) Conquer : Solve the sub-problems recursively.
3) Combine : Build the required solution from the solutions of the sub-problems.
Recursion continues until a sub-problem becomes small enough for direct solution. The recursive algorithm can be implemented non-recursively for greater efficiency. When the sub-problems are of equal size, a recurrence equation can be set up for the time spent on computation. A theorem called the Master theorem can be used to solve such equations, and thereby we obtain asymptotic bounds for the problem.

Master theorem : Let T(n) = aT(n/b) + f(n), where n/b means either floor(n/b) or ceil(n/b). Let α = log_b(a). Then:
1) If f(n) is O(n^(α-ε)) for some ε > 0, then T(n) is Θ(n^α).
2) If f(n) is Θ(n^α), then T(n) is Θ(n^α log(n)).
3) If f(n) is Ω(n^(α+ε)) for some ε > 0 and a·f(n/b) <= c·f(n) for some c < 1 and all n >= n0, then T(n) is Θ(f(n)).

Exercise : Prove that in 3) above the condition "f(n) is Ω(n^(α+ε))" is superfluous, i.e. a·f(n/b) <= c·f(n) for some c < 1 implies that f(n) is Ω(n^(α+ε)) for some ε > 0.
Exercise : Prove the Master theorem.
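The case analysis above can be sketched as a small classifier. This is a hypothetical helper (not from the notes), restricted to f(n) = Θ(n^k) with integer a, b, k, so the comparison of k against α = log_b(a) can be done exactly via b^k versus a, and the regularity condition of case 3 holds automatically.

```python
def master(a, b, k):
    """Classify T(n) = a T(n/b) + Theta(n^k) by the Master theorem.

    Assumes f(n) is a plain power n^k, so comparing k with
    alpha = log_b(a) reduces to the exact integer test b**k vs a.
    """
    if b ** k < a:                       # case 1: f polynomially smaller than n^alpha
        return f"Theta(n^log_{b}({a}))"
    if b ** k == a:                      # case 2: f matches n^alpha
        return f"Theta(n^{k} log n)"
    return f"Theta(n^{k})"               # case 3: f dominates
```

For example, merge sort (a = 2, b = 2, k = 1) falls into case 2.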

Binary Search : Search for x in a sorted array a[i..j].

binsearch(a[i..j], x)
{ if (j < i) return not found;
  k = floor((i+j)/2);
  if (x = a[k]) return k;
  if (x < a[k]) return binsearch(a[i..k-1], x);
  return binsearch(a[k+1..j], x);
}

T(n) = T(n/2) + f(n) where f(n) is Θ(1). Here a = 1, b = 2 and hence α = 0. Therefore f(n) is Θ(n^α). Hence T(n) is Θ(log(n)).
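A minimal executable sketch of the same recursive procedure in Python (returning -1 for "not found", an assumption since the pseudocode leaves the sentinel unspecified):

```python
def binsearch(a, x, i=0, j=None):
    """Recursive binary search for x in sorted list a[i..j]; index or -1."""
    if j is None:
        j = len(a) - 1
    if j < i:
        return -1                      # not found
    k = (i + j) // 2                   # midpoint, floor((i+j)/2)
    if a[k] == x:
        return k
    if x < a[k]:
        return binsearch(a, x, i, k - 1)
    return binsearch(a, x, k + 1, j)
```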

Merge Sort : This is a divide and conquer method for sorting an array. The array is divided into two parts differing in length by at most 1. Both parts are sorted recursively and then merged to give the sorted array.

merge(a[], l, m, r)   // merges a[l..m] and a[m+1..r]
{ b = scratch array of length r - l + 1;
  k = 1; i = l; j = m+1;
  while (i <= m and j <= r)
    if (a[i] < a[j]) b[k++] = a[i++]; else b[k++] = a[j++];
  while (i <= m) b[k++] = a[i++];
  while (j <= r) b[k++] = a[j++];
  copy b[1..r-l+1] to a[l..r];
  delete b;
}

merge takes Θ(n) time, where n = r - l + 1.

Then

msort(a[l..r])
{ if (r <= l) return;
  m = floor((l+r)/2);
  msort(a[m+1..r]);
  msort(a[l..m]);
  merge(a, l, m, r);
}

The two recursive calls cost T(n/2) each and the merge costs Θ(n).

T(n) = 2T(n/2) + f(n) where f(n) is Θ(n). Here a = 2, b = 2, therefore α = 1 and f(n) is Θ(n^α). Therefore T(n) is Θ(n^α log(n)) i.e. Θ(n log(n)).

Example : Merge sort of 21 34 14 17 12 7 9 22 6 27 3 30 43 24 16 50 18 32 12

Splitting:
21 34 14 17 12 7 9 22 6 27 | 3 30 43 24 16 50 18 32 12
21 34 14 17 12 | 7 9 22 6 27 | 3 30 43 24 16 | 50 18 32 12
21 34 14 | 17 12 | 7 9 | 22 6 27 | 3 30 43 | 24 16 | 50 18 | 32 12
21 | 34 14 | 17 | 12 | 7 | 9 | 22 | 6 27 | 3 | 30 43 | 24 | 16 | 50 | 18 | 32 | 12
and the remaining two-element pieces split once more: 34 | 14, 6 | 27, 30 | 43, ...

Merging back:
21 | 14 34 | 17 | 12 | 7 | 9 | 22 | 6 27 | 3 | 30 43 | 24 | 16 | 50 | 18 | 32 | 12
14 21 34 | 12 17 | 7 9 | 6 22 27 | 3 30 43 | 16 24 | 18 50 | 12 32
12 14 17 21 34 | 6 7 9 22 27 | 3 16 24 30 43 | 12 18 32 50
6 7 9 12 14 17 21 22 27 34 | 3 12 16 18 24 30 32 43 50
3 6 7 9 12 12 14 16 17 18 21 22 24 27 30 32 34 43 50
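As a runnable sketch of msort and merge (returning a new list rather than sorting in place, which simplifies the Python):

```python
def msort(a):
    """Merge sort: split into halves differing by at most 1, sort, merge."""
    if len(a) <= 1:
        return a[:]
    m = len(a) // 2
    left, right = msort(a[:m]), msort(a[m:])
    # merge the two sorted halves into scratch list b
    b, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            b.append(left[i]); i += 1
        else:
            b.append(right[j]); j += 1
    b.extend(left[i:])   # copy whichever half remains
    b.extend(right[j:])
    return b
```

Running it on the example input above reproduces the final line of the trace.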

A divide and conquer method for matrix multiplication : Let n be even and the two matrices to be multiplied be n x n. We divide each matrix into n/2 x n/2 blocks:

( a b ) ( e f )   ( r s )
( c d ) ( g h ) = ( t u )

Then r = ae + bg, s = af + bh, t = ce + dg, u = cf + dh. Thus we can get r, s, t, u and hence the product using 8 products and 4 sums of n/2 x n/2 matrices. Since addition of matrices is Θ(n^2), for n equal to a power of 2,

T(n) = 8T(n/2) + f(n) where f(n) is Θ(n^2).

Here a = 8, b = 2, therefore α = 3, and f(n) is Θ(n^2) i.e. O(n^(α-1)). Therefore T(n) is Θ(n^α) i.e. Θ(n^3) : no improvement over the ordinary method.

Strassen's method : Strassen considered the n/2 x n/2 matrices

p1 = a(f - h)
p2 = (a + b)h
p3 = (c + d)e
p4 = d(g - e)
p5 = (a + d)(e + h)
p6 = (b - d)(g + h)
p7 = (a - c)(e + f)

Then it can be easily verified that

r = p5 + p4 - p2 + p6
s = p1 + p2
t = p3 + p4
u = p5 + p1 - p3 - p7

Thus r, s, t, u and hence the product can be obtained with 7 multiplications and 18 additions/subtractions of n/2 x n/2 matrices. Thus for n a power of 2,

T(n) = 7T(n/2) + f(n) where f(n) is Θ(n^2).

Here a = 7, b = 2, hence α = log2(7), with 2 < α < 2.81. f(n) is Θ(n^2) i.e. O(n^(α-ε)) with ε = α - 2 > 0. Therefore T(n) is Θ(n^log2(7)), i.e. O(n^2.81).

Exercise 3.1 : How do you handle the case of an arbitrary m x n matrix? Derive the complexity in terms of p = max(m, n). (Note that the above is for a square n x n matrix, and that too only when n is a power of 2.)
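Strassen's identities can be checked directly. In the sketch below, scalars stand in for the n/2 x n/2 blocks (the identities are the same, since they use only ring operations); the returned (r, s, t, u) should equal (ae+bg, af+bh, ce+dg, cf+dh).

```python
def strassen_2x2(a, b, c, d, e, f, g, h):
    """Strassen's 7 products for the block case (scalars stand in for blocks)."""
    p1 = a * (f - h)
    p2 = (a + b) * h
    p3 = (c + d) * e
    p4 = d * (g - e)
    p5 = (a + d) * (e + h)
    p6 = (b - d) * (g + h)
    p7 = (a - c) * (e + f)
    r = p5 + p4 - p2 + p6
    s = p1 + p2
    t = p3 + p4
    u = p5 + p1 - p3 - p7
    return r, s, t, u
```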

Polynomials : A polynomial in an unknown x is an expression of the form p(x) = c0 + c1·x + c2·x^2 + ... + cn·x^n. If cn is not zero and ck for k > n are zero, then n is called the degree of the polynomial. The ci are called the coefficients of the polynomial, and thus the polynomial p can be stored as an array c[0..n].

Evaluation of a polynomial : Given a value for x, the polynomial p(x) can be evaluated by Horner's rule:

eval(p, n, x)
{ v = 0;
  for i = n downto 0
    v = p[i] + v*x;
  return v;
}

This takes Θ(n) time.
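The same evaluation in Python, iterating over the coefficient array from high degree down exactly as in eval:

```python
def eval_poly(c, x):
    """Horner's rule: evaluate c[0] + c[1] x + ... + c[n] x^n in Theta(n)."""
    v = 0
    for coeff in reversed(c):   # i = n downto 0
        v = coeff + v * x
    return v
```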

Multiplication of polynomials : A simple algorithm for the product of two polynomials a(x), b(x) of degree m and n respectively:

mul(a[0..m], b[0..n], c[0..m+n])
{ c[0..m+n] = 0;
  for i = 0 to m
    for j = 0 to n
      c[i+j] += a[i]*b[j];
}

This is Θ(mn), i.e. Θ(n^2) if m is Θ(n).
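A direct Python rendering of mul, with coefficient lists in place of the arrays a[0..m], b[0..n]:

```python
def mul(a, b):
    """Schoolbook polynomial product: Theta(mn) coefficient operations."""
    c = [0] * (len(a) + len(b) - 1)   # result has degree m + n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c
```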

A Divide and Conquer algorithm : If a(x) and b(x) are polynomials of degree n then we can write

a(x) = p(x) + x^m q(x)
b(x) = r(x) + x^m s(x)

where m = ceil((n+1)/2) and p, q, r, s have degree <= n/2. For example:

4 + 3x + 7x^2 + 6x^3 + 8x^4 = 4 + 3x + 7x^2 + x^3(6 + 8x)
2 + 4x + 8x^2 + 7x^3 + 9x^4 + 3x^5 = 2 + 4x + 8x^2 + x^3(7 + 9x + 3x^2)

Then

a(x)b(x) = p(x)r(x) + x^m (r(x)q(x) + p(x)s(x)) + x^(2m) q(x)s(x) = c1(x) + x^m c2(x) + x^(2m) c3(x)

c1, c2, c3 can be computed with 4 multiplications and one addition. The shift operations x^m c2(x) and x^(2m) c3(x) can be computed in Θ(n) time, and two more additions are needed to compute a(x)b(x). Since addition of polynomials is O(n),

T(n) = 4T(n/2) + f(n) where f(n) is O(n).

Here a = 4, b = 2, α = log2(4) = 2, and f(n) is O(n) i.e. O(n^(α-1)). Hence T(n) is Θ(n^α) i.e. Θ(n^2) : no better than the simple algorithm.

However, c1(x), c2(x) and c3(x) can be computed with 3 multiplications, three additions and two shift operations. Then

T(n) = 3T(n/2) + f(n) where f(n) is O(n).

Here a = 3, b = 2 and α = log2(3) > 1. Therefore f(n) is O(n) i.e. O(n^(α-ε)) with ε = α - 1 > 0. Therefore T(n) is Θ(n^α) i.e. Θ(n^log2(3)), which is better than Θ(n^2).
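A sketch of the three-multiplication scheme (the well-known Karatsuba identity c2 = (p+q)(r+s) - c1 - c3). The helper names poly_add, poly_sub and kmul are my own, not from the notes:

```python
def poly_add(a, b):
    """Coefficient-wise sum of two polynomials (lists may differ in length)."""
    if len(a) < len(b):
        a, b = b, a
    return [x + (b[i] if i < len(b) else 0) for i, x in enumerate(a)]

def poly_sub(a, b):
    return poly_add(a, [-x for x in b])

def kmul(a, b):
    """Polynomial product with 3 half-size multiplications:
    c1 = p r, c3 = q s, c2 = (p+q)(r+s) - c1 - c3."""
    if not a or not b:
        return []
    n = max(len(a), len(b))
    if n == 1:
        return [a[0] * b[0]]
    m = (n + 1) // 2                  # split point: a = p + x^m q
    p, q = a[:m], a[m:]
    r, s = b[:m], b[m:]
    c1 = kmul(p, r)
    c3 = kmul(q, s)
    c2 = poly_sub(kmul(poly_add(p, q), poly_add(r, s)), poly_add(c1, c3))
    while c2 and c2[-1] == 0:         # drop padding zeros before shifting
        c2.pop()
    out = [0] * (len(a) + len(b) - 1)
    for i, v in enumerate(c1):        # c1 contributes at x^0
        out[i] += v
    for i, v in enumerate(c2):        # x^m c2: the first shift
        out[i + m] += v
    for i, v in enumerate(c3):        # x^(2m) c3: the second shift
        out[i + 2 * m] += v
    return out
```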

Exercise 3.2 : Show how c1, c2, c3 can be computed with 3 multiplications, 3 additions and two shifts of polynomials of degree n/2.

Substitution method : In some cases, especially when the sizes of the sub-problems are not the same, the substitution method can be a good method for solving the recurrences and determining the complexity. This method has essentially two steps:
1) Guess the form of the solution.
2) Use mathematical induction to determine the constants and show that the solution works.
The method is powerful but can be applied only when the solution can be guessed easily.

A simple example : Pre-, in-, or post-order traversal of a binary tree.

We guess that the worst case complexity T(n) is Θ(n), where n is the number of nodes, which is somewhat obvious. We can prove this precisely using the substitution method. For the upper bound, guess T(n) <= cn; assuming T(l) <= cl for all l < n, we must determine c so that the induction goes through. If the left subtree has l nodes, the right subtree has n - l - 1 nodes, and the work at the root is bounded by a constant k, so we get the recurrence

T(n) <= max_{0 <= l <= n-1} ( T(l) + T(n-l-1) ) + k

Substituting for T(l) and T(n-l-1) we get:

T(n) <= max_{0 <= l <= n-1} ( cl + c(n-l-1) ) + k = cn - c + k

If we choose c >= k then T(n) <= cn as required. Thus T(n) is O(n). Similarly we can prove that T(n) is Ω(n). Hence T(n) is Θ(n).
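The recurrence T(n) = T(l) + T(n-l-1) + O(1) can be seen concretely by counting visit steps in a traversal; a minimal sketch (the Node class is a stand-in, not from the notes):

```python
class Node:
    def __init__(self, left=None, right=None):
        self.left, self.right = left, right

def inorder_count(t):
    """Number of node visits in an in-order traversal:
    T(n) = T(l) + T(n-l-1) + 1, which solves to exactly n."""
    if t is None:
        return 0
    return inorder_count(t.left) + 1 + inorder_count(t.right)
```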

Quick sort : This is an algorithm for sorting an array by a divide and conquer approach. An element x in the array is chosen (the pivot), and a partition procedure divides the array into two parts, the left part <= x and the right part >= x; each part is at least one element shorter than the whole. The two parts are then recursively sorted to get the sorted array.

The partition procedure is Θ(n), but the Master theorem cannot be applied since the two parts may not be of equal length. In the extreme case the length of the bigger part decreases only by one, there are n calls to the partition procedure, and the worst case complexity is O(n^2). For some methods of choosing the pivot element, a sequence of longer and longer arrays can be constructed on which the time taken is >= Kn^2, so the worst case complexity in such a case is Ω(n^2) and hence Θ(n^2). The average case complexity can be shown to be Θ(n log(n)).

Again we can show precisely that the worst case complexity is Θ(n^2). Our guess for the upper bound is cn^2, and with partition cost kn we get the recurrence

T(n) <= max_{1 <= l <= n-1} ( T(l) + T(n-l) ) + kn
     <= max_{1 <= l <= n-1} ( cl^2 + c(n-l)^2 ) + kn
     = cn^2 + kn - 2c · min_{1 <= l <= n-1} l(n-l)
     = cn^2 + kn - 2c(n-1)

Setting c >= k, T(n) <= cn^2 + kn - 2kn + 2k = cn^2 - k(n-2) <= cn^2 for n >= 2. Hence T(n) is O(n^2).

Our guess for the lower bound is c1·n^2. Then the recurrence for the lower bound of the worst case complexity is:

T(n) >= max_{1 <= l <= n-1} c1( l^2 + (n-l)^2 ) + kn
     = c1·n^2 + kn - 2c1 · min_{1 <= l <= n-1} l(n-l)
     = c1·n^2 + kn - 2c1(n-1)

Setting c1 <= k/2, T(n) >= c1·n^2. Hence T(n) is Ω(n^2) and therefore T(n) is Θ(n^2).
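A runnable quicksort sketch. The notes do not fix a partition scheme, so this assumes the common last-element pivot with Lomuto partition; on an already sorted array that choice produces the extreme case where the bigger part shrinks by only one element per call.

```python
def quicksort(a, lo=0, hi=None):
    """In-place quicksort with last-element pivot (Lomuto partition)."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return a
    p = a[hi]                          # pivot
    i = lo
    for j in range(lo, hi):            # partition: left part <= p, right part >= p
        if a[j] <= p:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]          # pivot into final position
    quicksort(a, lo, i - 1)
    quicksort(a, i + 1, hi)
    return a
```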

DFT (Discrete Fourier Transform) : For a complex vector a = (a0, a1, ..., a_{n-1}) of length n, the DFT A = (A0, A1, ..., A_{n-1}) is defined by:

A_j = sum_{k=0..n-1} w_n^{jk} a_k, where w_n = e^{2·pi·i/n} = (cos(2·pi/n), sin(2·pi/n))

It is very useful in many scientific applications, e.g. signal processing, speech recognition etc. Let p_a(x) be the polynomial given by the coefficients a[0..n-1]. Then A_j = p_a(w_n^j), so using eval we can compute the DFT in Θ(n^2) time.

Inverse DFT : If we have A we can recover a by:

a_j = (1/n) sum_{k=0..n-1} w_n^{-jk} A_k

again in Θ(n^2) time.

Fast Fourier Transform (FFT) : A divide and conquer algorithm for computing the DFT in Θ(n log(n)) time. It can be proved that the following recursive algorithm computes the DFT correctly for n a power of 2.

FFT(a[], A[], n)
{ if (n = 1) { A[0] = a[0]; return; }
  g[0..n/2-1] = a[0, 2, 4, .., n-2];     // even-indexed elements
  h[0..n/2-1] = a[1, 3, 5, .., n-1];     // odd-indexed elements
  FFT(g, G, n/2);                        // two recursive calls, T(n/2) each
  FFT(h, H, n/2);
  w_n = e^{2·pi·i/n}; w = 1;
  for k = 0 to n/2 - 1                   // Θ(n) combine step
  { A[k] = G[k] + w·H[k];
    A[k + n/2] = G[k] - w·H[k];
    w = w·w_n;
  }
}

T(n) = 2T(n/2) + f(n) where f(n) is Θ(n); a = 2, b = 2, α = 1, f(n) is Θ(n^α). Therefore T(n) is Θ(n^α log(n)) i.e. Θ(n log(n)). Similarly we can have IFFT for the inverse Fourier transform, which is also Θ(n log(n)).

Convolution of vectors u = (u0, u1, ..., u_{n-1}) and v = (v0, v1, ..., v_{n-1}) is defined to be the vector u x v given by

(u x v)_i = sum_{j=0..n-1} u_j v_{(i-j) mod n} = sum_{j=0..i} u_j v_{i-j} + sum_{j=i+1..n-1} u_j v_{n-(j-i)}

Thus for n = 4:

(u x v)_0 = u0·v0 + u1·v3 + u2·v2 + u3·v1
(u x v)_1 = u0·v1 + u1·v0 + u2·v3 + u3·v2   etc.
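The recursive FFT translates almost line for line into Python using the standard cmath module (returning the transform rather than writing into an output array):

```python
import cmath

def fft(a):
    """Recursive FFT; len(a) must be a power of 2.
    Computes A_j = sum_k w_n^(jk) a_k with w_n = e^(2*pi*i/n)."""
    n = len(a)
    if n == 1:
        return list(a)
    g = fft(a[0::2])                      # even-indexed elements
    h = fft(a[1::2])                      # odd-indexed elements
    w_n = cmath.exp(2j * cmath.pi / n)
    w = 1
    out = [0] * n
    for k in range(n // 2):               # Theta(n) combine step
        out[k] = g[k] + w * h[k]
        out[k + n // 2] = g[k] - w * h[k]
        w *= w_n
    return out
```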

We have the following useful theorems. We omit the proofs.

Theorem : The DFT of the convolution is the product of the DFTs.

Theorem : Let u = (u0, u1, ..., u_{d1}) and v = (v0, v1, ..., v_{d2}) be the coefficients of two polynomials of degree d1, d2 respectively. Let n >= 2·max(d1+1, d2+1), and extend u and v to length n by padding zeros on the right. Then (u x v)_i for i = 0 to d1+d2 gives the coefficients of the product polynomial.

A Θ(n log(n)) method for multiplication of two complex polynomials g[], h[] of degrees d1 and d2 :
1) Take n to be the smallest power of 2 such that n >= 2·max(d1+1, d2+1).
2) Pad g[], h[] with zeros on the right up to index n-1.
3) FFT(g, G, n); FFT(h, H, n);   — Θ(n log(n))
4) for i = 0 to n-1: P[i] = G[i]·H[i];   — Θ(n)
5) IFFT(P, a, n);   — Θ(n log(n))

Total : Θ(n log(n)).

a[0..d1+d2] gives the coefficients of the product polynomial.
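The whole pipeline can be sketched end to end. This is a self-contained version (the combined forward/inverse fft via a sign flag is my packaging choice); for integer inputs the final rounding recovers exact coefficients:

```python
import cmath

def fft(a, inverse=False):
    """Recursive (I)FFT; len(a) must be a power of 2.
    Forward uses w_n = e^(2*pi*i/n); inverse negates the exponent
    (the 1/n factor is applied by the caller)."""
    n = len(a)
    if n == 1:
        return list(a)
    sign = -1 if inverse else 1
    g = fft(a[0::2], inverse)
    h = fft(a[1::2], inverse)
    w_n = cmath.exp(sign * 2j * cmath.pi / n)
    w = 1
    out = [0] * n
    for k in range(n // 2):
        out[k] = g[k] + w * h[k]
        out[k + n // 2] = g[k] - w * h[k]
        w *= w_n
    return out

def poly_mul_fft(g, h):
    """Product of two coefficient lists in Theta(n log n) via the DFT."""
    d1, d2 = len(g) - 1, len(h) - 1
    n = 1
    while n < 2 * max(d1 + 1, d2 + 1):   # smallest adequate power of 2
        n *= 2
    ga = fft(g + [0] * (n - len(g)))     # pad with zeros, transform
    ha = fft(h + [0] * (n - len(h)))
    prod = [x * y for x, y in zip(ga, ha)]
    c = fft(prod, inverse=True)
    # divide by n and round back to (near-)integer coefficients
    return [round((v / n).real) for v in c[:d1 + d2 + 1]]
```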

