
Advanced Algorithms, Fall 2012

Prof. Bernard Moret

Homework Assignment #1
due Sunday night, Sept. 30

Group members
Afroze Baqapuri, Nicolas Stucki, Thaddée Tyl

Question 1. Implement a queue using two ordinary stacks and analyze the amortized cost of each Insert-Queue and Delete-Queue operation. (Assume the cost of each Pop or Push operation is 1.) Now implement a double-ended queue with two ordinary stacks and give a counterexample to show that the amortized cost of its four operations (Insert-Back, Insert-Front, Delete-Back, and Delete-Front) is no longer O(1). A counterexample is a sequence of operations that takes time growing faster than the number of operations in the sequence. Indicate which single operation(s) are such that their removal makes the amortized cost of the remaining three operations O(1).

Solution 1. Simple queue. To implement the queue with two stacks, we use the first stack as an enqueueing stack and the second as a dequeueing stack. Every element is enqueued by pushing it onto the enqueueing stack and dequeued by popping it off the dequeueing stack. If the dequeueing stack is empty when we want to delete the next element, we perform an additional step first: we pop each element of the enqueueing stack and push it onto the dequeueing stack.

The Insert-Queue operation is clearly O(1), since it consists of a single Push. The Delete-Queue operation, on the other hand, is O(1) if the dequeueing stack is not empty, but O(n) if all n elements must first be transferred from the enqueueing stack to the dequeueing stack before the deletion. Since Insert-Queue is O(1) in every case, it is trivially O(1) in an amortized analysis. To prove that Delete-Queue is also O(1) amortized, we show that n calls to it cost only O(n) in total, rather than the O(n^2) suggested by the worst-case analysis. To do so, we count the operations from the perspective of each element during its life in the queue. Every element in the queue was inserted by an Insert-Queue operation, so it is or has been on the enqueueing stack; every element leaves by being popped off the dequeueing stack, so each deleted element is transferred from the enqueueing stack to the dequeueing stack, and this happens exactly once. Each of the n elements therefore causes at most two Push and two Pop operations, for at most 4n operations in total. Hence Delete-Queue has an O(1) amortized cost.
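The following Python sketch illustrates this construction, using lists as stacks (append as Push, pop as Pop); the class and method names are our own choice, not part of the assignment.

    class TwoStackQueue:
        def __init__(self):
            self.enqueueing = []  # elements are pushed here on insertion
            self.dequeueing = []  # elements are popped off here on deletion

        def insert_queue(self, x):
            # a single Push: O(1) in every case
            self.enqueueing.append(x)

        def delete_queue(self):
            # if the dequeueing stack is empty, transfer everything over;
            # each element is transferred at most once in its lifetime,
            # which is what makes the amortized cost O(1)
            if not self.dequeueing:
                while self.enqueueing:
                    self.dequeueing.append(self.enqueueing.pop())
            return self.dequeueing.pop()

For example, after insert_queue(1) and insert_queue(2), the first delete_queue() transfers both elements and returns 1; the second pops 2 directly at cost 1.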

Solution 1. Double-ended queue. The implementation of the double-ended queue is similar to that of the simple queue. We implement the same functions as for the simple queue, renaming the stacks to front (instead of enqueueing) and back (instead of dequeueing), and we add a second, mirrored version of each operation with the roles of the two stacks exchanged. The costs are the same as those of the equivalent operations on the single queue: Insert-Back and Insert-Front are O(1); Delete-Back and Delete-Front are O(n) (or O(1) if the stack at the deleted end is not empty).

To show that these operations cannot be amortized to O(1), we show that n calls are not necessarily O(n) in total: we exhibit one case that costs O(n^2), using only Delete-Front and Delete-Back operations. Assume that all elements of the queue lie on one of the two stacks, so that the other stack is completely empty. The problem arises if we call Delete-Front and then Delete-Back repeatedly (in a cyclic manner) until the queue is empty: each time a delete operation is called, all the elements are on the other stack, so every call must transfer all remaining elements from one stack to the other. Every delete operation is therefore O(n), and the total time for the n calls is O(n^2). This is a sequence of operations for which the individual operations cannot be amortized to O(1); the sketch below demonstrates it.

To make the operations O(1) in an amortized analysis, we must remove one of the two delete operations. This leaves us with the same situation as the simple queue, plus one operation that pushes directly onto the dequeueing stack. This new operation only simplifies matters: an inserted element now causes either at most 4 stack operations (as in the simple-queue analysis) or at most 2 (a Push onto, and a Pop off, the stack whose end still supports deletion). The analysis of the simple queue therefore still holds, and all three operations of the semi-double-ended queue are amortized to O(1).
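Here is a minimal Python sketch of the counterexample: a two-stack deque instrumented with an operation counter (the counter is our own addition, not part of the assignment). Only Insert-Front and the two delete operations are shown; Insert-Back is symmetric.

    class TwoStackDeque:
        def __init__(self):
            self.front, self.back = [], []
            self.ops = 0  # number of elementary Push/Pop operations

        def _push(self, stack, x):
            self.ops += 1
            stack.append(x)

        def _pop(self, stack):
            self.ops += 1
            return stack.pop()

        def insert_front(self, x):
            self._push(self.front, x)

        def delete_front(self):
            if not self.front:  # transfer all elements to this side
                while self.back:
                    self._push(self.front, self._pop(self.back))
            return self._pop(self.front)

        def delete_back(self):
            if not self.back:
                while self.front:
                    self._push(self.back, self._pop(self.front))
            return self._pop(self.back)

    d = TwoStackDeque()
    n = 1000
    for i in range(n):
        d.insert_front(i)
    d.ops = 0
    for i in range(n):  # alternate the two delete operations
        if i % 2 == 0:
            d.delete_front()
        else:
            d.delete_back()
    print(d.ops)  # grows roughly like n^2, not linearly in n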
Question 2. Suppose that, instead of using sums of powers of two, we represent integers as sums of Fibonacci numbers. In other words, instead of an array of 0/1 bits, we keep an array of 0/1 fits, where the ith least significant fit indicates whether the sum includes the ith Fibonacci number Fi. For example, the fitstring 101110F represents the number F6 + F4 + F3 + F2 = 8 + 3 + 2 + 1 = 14. Verify that a number need not have a unique fitstring representation (in contrast to bitstrings) and describe an algorithm to implement increase and decrease operations on a single fitstring in constant amortized time.

Solution 2. An example of a number with more than one fitstring representation is 2, which can be written either as 11F or as 100F. Write the fitstring representation of a number as ...f_i...f_3 f_2 f_1, where f_i ∈ {0, 1} indicates whether the ith Fibonacci number F_i is part of the sum; note that F_1 = F_2 = 1, so flipping either of the two least significant fits changes the number by exactly 1.

Consider first the increase operation. If f_1 or f_2 is 0, we flip it to 1 and the increase operation is done. Otherwise both f_1 and f_2 are 1, so the fitstring ends in a run of consecutive ones, say at positions 1 through j, with f_{j+1} = 0. Starting from the top of that run, we repeatedly apply the identity F_k + F_{k+1} = F_{k+2}: first flip f_{j-1} and f_j to 0 and f_{j+1} to 1, then flip f_{j-3} and f_{j-2} to 0 and f_{j-1} to 1, and so on down the run. For instance, 01011111F becomes 01101001F. Using the definition of the Fibonacci numbers, we notice that this transformation does not change the number. Then we start the procedure over (tail-call recursion). It is important to note that every iteration decreases the number of consecutive ones on the right, so the procedure eventually ends with f_1 or f_2 equal to 0, at which point that fit is flipped to 1.

To prove the amortization, let the potential Φ(i) be the number of ones in the fitstring representation of i. At worst, an increase goes from, say, 0111111F to 1010101F. The real cost is k/2 + 3, where k is the number of consecutive ones on the right: roughly half of the fits get flipped, plus a constant. The change in potential is ΔΦ = -(k/2 - 1), so the amortized running time is k/2 + 3 - (k/2 - 1) = 4. Therefore the amortized cost of an increase is O(1) over n increases.

The algorithm to decrease by one is very similar. If f_1 or f_2 is 1, we flip it to 0 and the decrease operation is done. Otherwise both f_1 and f_2 are 0, so the fitstring ends in a run of consecutive zeros, say at positions 1 through j, with f_{j+1} = 1. We repeatedly apply the same identity in the other direction, F_{k+2} = F_k + F_{k+1}: flip f_{j+1} to 0 and flip f_j and f_{j-1} to 1, then do the same with the new lowest one, and so on. Using the definition of the Fibonacci numbers, this again does not change the number, and we start the procedure over (tail recursion yet again). It is important to note that every iteration decreases the number of consecutive zeros on the right, so the procedure eventually ends with f_1 or f_2 equal to 1, at which point that fit is flipped to 0. To prove the amortization, let the potential Φ(i) be the number of zeros in the fitstring of i. The number of operations taken to decrease by one is k/2 + 3, where k is the number of consecutive zeros on the right. The change in potential is ΔΦ = -(k/2 - 1), so the amortized running time is k/2 + 3 - (k/2 - 1) = 4. Therefore the amortized cost of a decrease is O(1) over n decreases.
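A minimal Python sketch of the increase operation, under an assumed encoding of a fitstring as a list of 0/1 values, least significant first (fits[0] corresponds to F_1); the helper names are ours. The decrease operation is symmetric and omitted.

    def fib(i):
        # F_1 = F_2 = 1, F_3 = 2, F_4 = 3, ...
        a, b = 1, 1
        for _ in range(i - 1):
            a, b = b, a + b
        return a

    def value(fits):
        # fits[0] is the least significant fit (coefficient of F_1)
        return sum(fib(i + 1) for i, f in enumerate(fits) if f)

    def increase(fits):
        fits = fits + [0, 0]  # headroom for the carries
        # while both lowest fits are 1, shrink the trailing run of ones
        # using F_k + F_{k+1} = F_{k+2}; the value is unchanged
        while fits[0] == 1 and fits[1] == 1:
            j = 0
            while fits[j] == 1:  # j = length of the trailing run of ones
                j += 1
            k = j - 1  # collapse pairs from the top of the run downward
            while k >= 1:
                fits[k], fits[k - 1], fits[k + 1] = 0, 0, 1
                k -= 2
        # now f_1 or f_2 is 0; flipping it adds exactly 1
        fits[0 if fits[0] == 0 else 1] = 1
        return fits

    # the worked example: 01011111F becomes 01101011F after the increase
    fits = [1, 1, 1, 1, 1, 0, 1, 0]  # 01011111F, least significant first
    assert value(increase(fits)) == value(fits) + 1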

Question 3. Using arrays, as in a simple queue or a binary heap, creates a problem: when the needs grow beyond the allocated array size, there is no support for simply increasing the array. Therefore consider the following solution. When the next insertion encounters a full array, the array is copied into one twice its size (and the old array returned to free storage) and the insertion then proceeds. When the next deletion reduces the number of elements below one quarter of the allocated size, the array is copied into one half its size (and the old array returned to free storage) and the deletion then proceeds. Assume that returning an array to free storage takes constant time, but that creating a new array takes time proportional to its size. Prove that, in the context of a data structure that grows or shrinks one element at a time, the amortized cost per operation of array doubling and array halving is constant. Why not halve the array as soon as the number of elements falls below one half of the allocated size?

Solution 3. Let n be the number of elements in the array and m its capacity (or size); obviously m ≥ n. Inserting an element costs a constant time k_1, except when m = n, in which case the cost is actually 2·k_2·n + k_1, since we combine the allocation of an array of double the capacity with an insertion. Let us use the cost-accounting analysis: we claim that every insertion can be charged a constant cost of 2k, with k := k_1 + 2·k_2. In order to insert the element that requires reallocating an array of size n, we had to insert at least n/2 elements beforehand (half of the previous capacity, i.e., since the point where the array last had to grow). Inserting those n/2 elements, plus the element that requires the reallocation, has a real total cost of k_1·n/2 + 2·k_2·n + k_1, whereas the amortized total is 2k·(n/2 + 1) = (k_1 + 2·k_2)·n + 2·(k_1 + 2·k_2). The amortized total is at least the real total, regardless of the value of n. As a result, the amortized cost per insertion is indeed 2k, which is Θ(1).

Similarly, we consider deletions. At worst, we have just inserted the element that made the array double in size to capacity 4n, and we now remove n + 1 elements, forcing the array to be halved. The real cost of each removal is k_1 (constant time), except for the last one, which costs 2·k_2·n + k_1 because of the halving: upon reaching a quarter of the capacity (n elements), the array is halved to capacity 2n. Yet again using the cost-accounting analysis, we claim that every deletion can be charged a constant cost of 2k, with k := k_1 + k_2. Removing those n + 1 elements has a real cost of k_1·n + 2·k_2·n + k_1, while the amortized total is 2k·n + 2k = 2·(k_1 + k_2)·n + 2·(k_1 + k_2), which is clearly larger. As a result, the amortized cost per deletion is again 2k, i.e., Θ(1) (constant time).

It is important to note that after each growth or shrink, the array is exactly half full (and half empty). This guarantees that arbitrary combinations of the two operations work as well: after a reallocation of an array of capacity m, at least m/4 insertions or deletions are needed to trigger the next reallocation, so consecutive reallocations are always separated by the necessary (linear) number of cheap operations.

Finally, the array is not halved as soon as the number of elements falls below one half of the allocated size because that policy does not amortize to constant time. The problem is that, right after such a halving, the new array is essentially full: adding two elements forces it to double again, which takes O(n), and removing those two elements forces it to halve once more. If we repeatedly add two elements and then remove them, a sequence of 2n such operations takes O(n^2) time.
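A small Python sketch of the doubling/halving policy, instrumented with a cost counter under the stated model (creating an array of size s costs s units; each insertion or deletion itself costs 1). The class name and the counter are our own additions, meant only to illustrate the bound.

    class DynamicArray:
        def __init__(self):
            self.capacity, self.n = 1, 0
            self.data = [None]
            self.cost = 0  # total cost under the stated model

        def _reallocate(self, new_capacity):
            self.cost += new_capacity  # creating an array costs its size
            new_data = [None] * new_capacity
            new_data[:self.n] = self.data[:self.n]
            self.data, self.capacity = new_data, new_capacity

        def insert(self, x):
            if self.n == self.capacity:  # full: double first
                self._reallocate(2 * self.capacity)
            self.data[self.n] = x
            self.n += 1
            self.cost += 1

        def delete(self):
            # halve when the deletion would drop below one quarter full
            if self.capacity > 1 and self.n - 1 < self.capacity // 4:
                self._reallocate(self.capacity // 2)
            self.n -= 1
            x, self.data[self.n] = self.data[self.n], None
            self.cost += 1
            return x

    a = DynamicArray()
    for i in range(1024):
        a.insert(i)
    for _ in range(1024):
        a.delete()
    print(a.cost / 2048)  # cost per operation stays bounded by a constant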
