
Course Code              : MCS-031
Course Title             : Design and Analysis of Algorithms
Assignment Number        : MCA(3)/031/Assign/2012
Maximum Marks            : 100
Weightage                : 25%
Last Date for Submission : 15th April, 2012

There are four questions in this assignment, which carry 80 marks. The remaining 20 marks are for viva-voce. Answer all the questions. You may use illustrations and diagrams to enhance the explanations. Please go through the guidelines regarding assignments given in the MCA Programme Guide for the format of presentation. The examples, whenever asked to be given, should be different from those that are discussed in the course material.
Question 1: (a) What is Randomized Quicksort? Analyse the expected running time of Randomized Quicksort, with the help of a suitable example. (5 Marks)
Ans: Randomized Quicksort:
In traditional Quicksort, we always pick the first element as the pivot for partitioning. So the worst-case runtime is O(n²), while the expected runtime is O(n log n) over the set of all inputs. In Randomized Quicksort, we pick an element at random as the pivot for partitioning, to avoid the worst-case scenario of O(n²). So the expected runtime on any input is O(n log n). Randomized Quicksort is a probabilistic algorithm.

Algorithm concept:
RandomQuickSort
Input: a set of numbers S = {S1, S2, ..., Sn}
Begin
  If |S| > 1 Then
    Select an element X from S at random
    Partition S into 3 parts: L (elements less than X), X, R (elements greater than X)
    RandomQuickSort(L)
    RandomQuickSort(R)
End

Complete algorithm:
RandomizedQuickSort
Input: a number array numbers[], left: a lower bound, right: an upper bound
Begin
  if (left >= right) then return;
  exchange numbers[left] with numbers[left + rand() % (right - left + 1)];
  pivot = numbers[left];      // the randomly chosen pivot, leaving a hole at position left
  leftIndex = left;
  rightIndex = right;
  while (left < right) do
    while ((numbers[right] >= pivot) and (left < right)) do
      right--;
    done
    if (left != right) then begin
      numbers[left] = numbers[right];
      left++;
    end if;
    while ((numbers[left] <= pivot) and (left < right)) do
      left++;
    done
    if (left != right) then begin
      numbers[right] = numbers[left];
      right--;
    end if;
  done
  numbers[left] = pivot;
  pivot = left;               // final index of the pivot
  left = leftIndex;
  right = rightIndex;
  if (left < pivot) then RandomizedQuickSort(numbers, left, pivot - 1);
  if (right > pivot) then RandomizedQuickSort(numbers, pivot + 1, right);
end;

Analysis of the Expected Running Time of Randomized Quicksort:
Consider: since the pivot is chosen uniformly at random, each of the n possible partitions (q : n−q−1, for q = 0, 1, ..., n−1) occurs with probability 1/n. Hence

  T(n) = (1/n) · Σ_{q=0}^{n−1} [ T(q) + T(n−q−1) ] + Θ(n)
       = (2/n) · Σ_{q=0}^{n−1} T(q) + Θ(n)

with the base case T(0) = T(1) = Θ(1). Solving this recurrence by substitution (guessing T(q) ≤ c·q·log q) gives the expected running time

  T(n) = O(n log n)

For example, on the input {5, 9, 2, 8, 1}, a randomly chosen pivot such as 5 splits the input into L = {2, 1} and R = {9, 8}. On average the two sides are of comparable size, so the recursion depth is O(log n) and each level does Θ(n) partitioning work.
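To make the algorithm above concrete, here is a runnable Python sketch (our own illustration, using the Lomuto partition scheme rather than the hole-filling partition shown in the pseudocode; the function names are assumptions):

```python
import random

def randomized_quicksort(a, left=0, right=None):
    """Sort list a in place; the pivot is chosen uniformly at random."""
    if right is None:
        right = len(a) - 1
    if left >= right:
        return
    # Pick a random pivot and move it to the end of the range.
    p = random.randint(left, right)
    a[p], a[right] = a[right], a[p]
    pivot = a[right]
    i = left - 1
    for j in range(left, right):
        if a[j] <= pivot:          # grow the "<= pivot" region
            i += 1
            a[i], a[j] = a[j], a[i]
    a[i + 1], a[right] = a[right], a[i + 1]
    q = i + 1                      # final pivot position
    randomized_quicksort(a, left, q - 1)
    randomized_quicksort(a, q + 1, right)

data = [9, 3, 7, 1, 8, 2, 5]
randomized_quicksort(data)
print(data)  # [1, 2, 3, 5, 7, 8, 9]
```

Whatever the input order, the random pivot choice makes every partition equally likely, which is exactly the assumption used in the expected running time analysis above.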

(b) Explain the Greedy Structure algorithm. Give an example in which the Greedy technique fails to deliver an optimal solution. (5 Marks)
Ans:
Greedy algorithms are simple and straightforward. They are short-sighted in their approach, in the sense that they take decisions on the basis of the information at hand, without worrying about the effect these decisions may have in the future. A greedy algorithm works by making the decision that seems most promising at any moment; it never reconsiders this decision, whatever situation may arise later. Unlike Dynamic Programming, which solves the subproblems bottom-up, a greedy strategy usually progresses in a top-down fashion, making one greedy choice after another and reducing each problem to a smaller one.

Example where the Greedy technique fails to deliver an optimal solution: the Making Change problem (find the minimum number of coins needed to give change for an amount).
Example: Consider three denominations (Rs.1, Rs.4 and Rs.6) for giving change, and suppose change of Rs.8 is required with the minimum number of coins.
Using the Greedy approach: take the largest coin, Rs.6; change of two rupees remains; take two coins of Rs.1. Total 3 coins (6 + 1 + 1) for change of Rs.8.
But this solution is not optimal: the optimal solution is two coins of Rs.4.
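The failure can be demonstrated in a few lines of Python (a sketch of the greedy rule described above; the function name is our own):

```python
def greedy_change(amount, denominations):
    """Greedy rule: repeatedly take the largest coin that still fits."""
    coins = []
    for d in sorted(denominations, reverse=True):
        while amount >= d:
            coins.append(d)
            amount -= d
    return coins

print(greedy_change(8, [1, 4, 6]))  # [6, 1, 1] -- three coins, but [4, 4] is optimal
```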

(c) Describe the two properties that characterise a good dynamic programming problem. (5 Marks)
Ans:

Characteristics of a good dynamic programming problem:
1. The problem can be divided into a number of stages, with a decision required at each stage.
2. Each stage has a number of associated states. (The states for the capital budgeting problem correspond to the amount spent at that point in time; the states for the shortest path problem are the nodes reached.)
3. The decision at one stage describes the transition to the next stage. (The decision of how much to spend gives the total amount spent for the next stage; the decision of where to go next defines where you arrive in the next stage.)
4. Given the current stage, the optimal subsequent decisions must not depend on previously chosen decisions or states. (In the budgeting problem it is not necessary to know how the money was spent in previous stages, only how much was spent; in the path problem it is not necessary to know how you got to a node, only that you did.)
5. A recursion relates the costs/rewards at successive stages: f_t(i) = min_j { c_ij + f_{t+1}(j) }.
The two important properties that characterise a good dynamic programming problem are:
Optimal substructure: an optimal solution contains optimal solutions to its subproblems, regardless of the initial decision.
Overlapping subproblems: solutions to subproblems can be stored and reused in a bottom-up fashion.
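Both properties can be seen in a bottom-up solution to the same Making Change problem where greedy failed in part (b) (a sketch; the function name is our own):

```python
def min_coins(amount, denominations):
    """Bottom-up DP: best[a] = fewest coins that make amount a.

    Optimal substructure: best[a] = 1 + min over coins d of best[a - d].
    Overlapping subproblems: each best[a] is computed once, then reused
    by every larger amount that can reach it.
    """
    INF = float("inf")
    best = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for d in denominations:
            if d <= a and best[a - d] + 1 < best[a]:
                best[a] = best[a - d] + 1
    return best[amount]

print(min_coins(8, [1, 4, 6]))  # 2 (two Rs.4 coins), where greedy needed 3
```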
(d) State the Travelling Salesperson problem. Comment on the nature of the solution to the problem. (5 Marks)
Ans:
The time complexity of a problem is the number of steps it takes to solve an instance of the problem, as a function of the size of the input (usually measured in bits), using the most efficient algorithm. To understand this intuitively, consider an instance that is n bits long and can be solved in n² steps; we say such a problem has time complexity n². Of course, the exact number of steps depends on exactly what machine or language is being used. To avoid that problem, the Big O notation is generally used. If a problem has time complexity O(n²) on one typical computer, then it will also have complexity O(n²) on most other computers, so this notation allows us to generalise away from the details of a particular computer.
The Travelling Salesperson Problem (TSP) is an NP-complete optimisation problem: given a weighted digraph G, find a cycle that goes through every vertex in G exactly once while having minimum total cost. There are variants of the problem which ask for a path that visits every vertex but starts and ends at given points, but they are conceptually very similar. The TSP can be solved in O(2^n · n²) time and O(2^n · n) space using Dynamic Programming, which is better in time than the O(n!) brute-force try-all-paths algorithm, but much worse in space. The solution to the path variant of the problem is described here; the algorithm can be tweaked slightly to answer the cycle variant as well. To solve the problem, use memoisation, storing the minimum cost of visiting every subset of vertices with a given ending vertex. For example, one such entry would be the minimum cost of visiting the set of vertices {2, 4, 7} ending at vertex 4; another distinct entry is the set {2, 4, 7} ending at vertex 2. The answer to the TSP then corresponds to the entry that has visited every vertex in G, starting at the source vertex and ending at the destination vertex. Let C(v, v') be the cost of taking the edge from v to v' in G. To find the minimum cost M(S, v) for a given set S and ending vertex v, take the minimum of M(S − {v}, v') + C(v', v) over each v' in S − {v}. To enforce the particular choice of starting location, force M({v}, v) = ∞ for each non-start vertex v. Essentially, just try each edge between the visited set and the ending vertex.
The intuition on why memorization works on the problem is as follows. Consider taking some partial path through vertices starting at a, through b and c, and ending at d with the final destination being z. The order in which b and c were visited, either a, b, c, d or a, c, b, d does not matter to the subsequent portion of the path after d, since any path through only remaining vertices reaching z will not be affected by the order of visiting the first four. Thus, it's clear that only the minimum cost for the set a, b, c, d ending at d matters. Since the number of subsets of size n grows strictly slower than the number of permutations of size n, the memorization algorithm is asymptotically faster than the brute force algorithm.
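A sketch of the memoised algorithm for the cycle variant follows (our own illustration, with the tour forced to start at vertex 0; the names M, dist and tsp_held_karp are assumptions, and the cost matrix is invented test data):

```python
from itertools import combinations

def tsp_held_karp(dist):
    """Dynamic programming TSP cycle: O(2^n * n^2) time, O(2^n * n) space.

    dist is an n x n cost matrix. M[(S, v)] holds the minimum cost of a
    path that starts at vertex 0, visits exactly the vertex set S, and
    ends at v (with v in S and 0 in S).
    """
    n = len(dist)
    M = {(frozenset([0]), 0): 0}          # base case: stand at the start
    for size in range(2, n + 1):
        for subset in combinations(range(1, n), size - 1):
            S = frozenset(subset) | {0}
            for v in subset:
                # Try every edge from a previously visited vertex into v.
                M[(S, v)] = min(
                    M[(S - {v}, u)] + dist[u][v]
                    for u in S - {v}
                    if (S - {v}, u) in M
                )
    full = frozenset(range(n))
    # Close the cycle back to the start vertex 0.
    return min(M[(full, v)] + dist[v][0] for v in range(1, n))

dist = [
    [0, 2, 9, 10],
    [1, 0, 6, 4],
    [15, 7, 0, 8],
    [6, 3, 12, 0],
]
print(tsp_held_karp(dist))  # 21 (tour 0 -> 2 -> 3 -> 1 -> 0)
```

Note the exponential number of dictionary entries: the space cost O(2^n · n) mentioned above shows up directly as the size of M.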

Question 2: Give an analysis of each one of the following with examples: (20 Marks)
(i) Insertion Sort

Ans: Analysis of Insertion Sort:
Algorithm InsertionSort(A, n)
  for i = 2 to n do
    v ← A[i]
    j ← i − 1
    while j >= 1 and A[j] > v do
      A[j+1] ← A[j]
      j ← j − 1
    A[j+1] ← v

Total number of operations can be expressed as:

  T(n) = Σ_{i=2}^{n} (number of comparisons and shifts to insert A[i]) ≤ Σ_{i=2}^{n} i = n(n+1)/2 − 1

Thus, the worst-case complexity of the insertion sort algorithm is O(n²).
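The pseudocode above translates directly into runnable Python (a sketch with 0-based indexing; the function name and test data are our own):

```python
def insertion_sort(a):
    """Insert each element into the already-sorted prefix a[0..i-1]."""
    for i in range(1, len(a)):
        v = a[i]
        j = i - 1
        while j >= 0 and a[j] > v:
            a[j + 1] = a[j]   # shift larger elements one slot to the right
            j -= 1
        a[j + 1] = v
    return a

print(insertion_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]
```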

(ii) Selection Sort
Ans: Analysis of Selection Sort:
Algorithm SelectionSort(A, n)
  for i = 1 to n−1 do
    minPosition ← i
    for j = i+1 to n do
      if A[j] < A[minPosition] then
        minPosition ← j
    exchange A[i] ↔ A[minPosition]
Total number of operations can be expressed as:

  T(n) = Σ_{i=1}^{n−1} Σ_{j=i+1}^{n} 1 = Σ_{i=1}^{n−1} (n − i) = (n − 1) + (n − 2) + ... + 1 = n(n − 1)/2

Thus, the complexity of the selection sort algorithm is O(n²).
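A runnable Python version of the pseudocode (a sketch with 0-based indexing; names and test data are our own):

```python
def selection_sort(a):
    """Repeatedly move the minimum of the unsorted suffix to its front."""
    n = len(a)
    for i in range(n - 1):
        min_pos = i
        for j in range(i + 1, n):
            if a[j] < a[min_pos]:
                min_pos = j
        a[i], a[min_pos] = a[min_pos], a[i]
    return a

print(selection_sort([64, 25, 12, 22, 11]))  # [11, 12, 22, 25, 64]
```

Note that the double loop runs n(n−1)/2 comparisons regardless of the input order, matching the summation above.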

(iii) Heap Sort
Ans: Analysis of Heap Sort:
Algorithm HeapCreate(A, i, n)   (assume n = length(A))
  l ← Left(i)
  r ← Right(i)
  if l <= n and A[l] > A[i] then largest ← l else largest ← i
  if r <= n and A[r] > A[largest] then largest ← r
  if largest != i then
    exchange A[i] ↔ A[largest]
    HeapCreate(A, largest, n)
Each line costs a constant c1, ..., c9, and the recursive call descends to a subtree of at most half the size, so the total cost of HeapCreate is T(n) = c + T(n/2).
As per the master recurrence method, for T(n) = c1·n^c + a·T(n/b):
  Case 1: If c < log_b a, then T(n) = Θ(n^(log_b a))
  Case 2: If c = log_b a, then T(n) = Θ(n^c log n)
  Case 3: If c > log_b a, then T(n) = Θ(n^c)
Here a = 1, b = 2, c = 0, and c = log_b a (because log_2 1 = 0 = c), so Case 2 applies:
  T(HeapCreate) = Θ(n^0 log n) = Θ(log n)
Time complexity of Build_Max_Heap: there are at most ⌈n / 2^(h+1)⌉ nodes at height h in an n-node heap, and the heights of nodes in a heap with n nodes range between 0 and ⌊log n⌋. The cost of HeapCreate on a subtree at height h is O(h), so the time for Build_Max_Heap is bounded by:
  T(n) = Σ_{h=0}^{⌊log n⌋} ⌈n / 2^(h+1)⌉ · O(h) = O( n · Σ_{h=0}^{∞} h / 2^h ) = O(2n) = O(n)
(using the identity Σ_{h=0}^{∞} h/2^h = 2).
Time complexity of HeapSort:
Algorithm Steps for HeapSort(A)   (assume n = length(A))        Cost
  Build_Max_Heap(A)                                             O(n)
  for i ← length(A) down to 2 do                                n − 1 iterations
    exchange A[1] ↔ A[i]                                        O(1)
    heap_size(A) ← heap_size(A) − 1                             O(1)
    Max_Heapify(A, 1)                                           O(log n)
Total Cost = O(n) + (n − 1) · O(log n) = max(O(n), O(n log n)) = O(n log n)
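Putting HeapCreate/Max_Heapify, Build_Max_Heap and the extraction loop together (a Python sketch with 0-based indexing; names and test data are our own):

```python
def max_heapify(a, i, n):
    """Sift a[i] down so the subtree rooted at i is a max-heap (0-based)."""
    largest = i
    l, r = 2 * i + 1, 2 * i + 2
    if l < n and a[l] > a[largest]:
        largest = l
    if r < n and a[r] > a[largest]:
        largest = r
    if largest != i:
        a[i], a[largest] = a[largest], a[i]
        max_heapify(a, largest, n)   # descend one level: O(log n) total

def heap_sort(a):
    n = len(a)
    # Build_Max_Heap: O(n) overall, heapifying bottom-up.
    for i in range(n // 2 - 1, -1, -1):
        max_heapify(a, i, n)
    # n - 1 extractions of the maximum, each O(log n).
    for end in range(n - 1, 0, -1):
        a[0], a[end] = a[end], a[0]
        max_heapify(a, 0, end)
    return a

print(heap_sort([4, 10, 3, 5, 1]))  # [1, 3, 4, 5, 10]
```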

(iv) Merge Sort
Ans: Algorithm Steps for MergeSort(A, p, r)
Input: p = lower bound, r = upper bound
Begin
  If p < r Then
    q ← ⌊(p + r)/2⌋
    MergeSort(A, p, q)
    MergeSort(A, q+1, r)
    Merge(A, p, q, r)
End
Analysis of Merge Sort:
  T(n) = Θ(1), for n = 1
  T(n) = 2T(n/2) + Θ(1) + Θ(n) = 2T(n/2) + Θ(n), for n > 1
Using the master recurrence method (a = 2, b = 2, c = 1, so c = log_b a):
  T(n) = Θ(n log n)
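A runnable sketch of the same scheme in Python (this version returns a new list rather than merging in place, which keeps the merge step short; names and test data are our own):

```python
def merge_sort(a):
    """T(n) = 2T(n/2) + Theta(n), hence Theta(n log n)."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])
    right = merge_sort(a[mid:])
    # Merge the two sorted halves in linear time.
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    return out + left[i:] + right[j:]

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]
```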

(v) Quick Sort
Ans: Analysis of Quick Sort:
Algorithm Steps for QuickSort(A, p, r)              Cost
  if p < r Then                                     1
    q ← Partition(A, p, r)                          Θ(n)
    QuickSort(A, p, q − 1)                          T(q − p)
    QuickSort(A, q + 1, r)                          T(r − q)
Worst case of Quick Sort (when one of the partitions is empty):
  T(n) = T(n − 1) + T(0) + Θ(n)
       = T(n − 1) + Θ(n)
       = Θ(n²)
Best case of Quick Sort (when the partition splits the sub-array evenly, so each sub-array has n/2 elements):
  T(n) = 2T(n/2) + Θ(n)
Using the master theorem with a = 2, b = 2, c = 1: since c = log_b a,
  T(n) = Θ(n log n)
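A runnable sketch of deterministic QuickSort with the Lomuto partition (our own illustration; names and test data are assumptions):

```python
def partition(a, p, r):
    """Partition a[p..r] around pivot a[r]; return the pivot's final index."""
    x = a[r]
    i = p - 1
    for j in range(p, r):
        if a[j] <= x:
            i += 1
            a[i], a[j] = a[j], a[i]
    a[i + 1], a[r] = a[r], a[i + 1]
    return i + 1

def quick_sort(a, p=0, r=None):
    if r is None:
        r = len(a) - 1
    if p < r:
        q = partition(a, p, r)     # Theta(n) work per call
        quick_sort(a, p, q - 1)
        quick_sort(a, q + 1, r)
    return a

print(quick_sort([7, 2, 9, 4, 1, 6]))  # [1, 2, 4, 6, 7, 9]
```

On an already-sorted input, every partition is empty on one side, which is exactly the Θ(n²) worst case analysed above.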



Algorithm for Partition(A, p, r)   (assume n = size of sub-array A[p..r])        Cost
  x ← A[r]                                                                       1
  i ← p − 1                                                                      1
  for j ← p to r − 1 do                                                          r − p + 1 = n
    if A[j] <= x then                                                            n − 1
      i ← i + 1                                                                  at most n − 1
      exchange A[i] ↔ A[j]                                                       at most n − 1
  exchange A[i + 1] ↔ A[r]                                                       1
  return i + 1                                                                   1
Time complexity of Partition = c + c1·n = Θ(n)

Question 3: (a) Consider the Best first search technique and the Breadth first search technique. Answer the following with respect to these techniques. Give justification for your answer in each case. (10 Marks)
(i) Which algorithm has some knowledge of the problem space?
Ans:



A problem space is a set of states and a set of operators; the operators map from one state to another. There will be one or more states that can be called initial states, one or more states which we need to reach, known as goal states, and states in between initial and goal states, known as intermediate states. Breadth first search (BFS), as the name implies, searches from the initial state breadth-wise: it searches all the states in the tree level by level, and only after exploring all the states in one level does it move to the next level. Once the solution is found the search stops, and breadth first search is guaranteed to find the solution if one exists. BFS, however, uses no knowledge of the problem space beyond the states and operators themselves. It is Best first search that has knowledge of the problem space, since it orders its choices with a heuristic evaluation function built from knowledge of the problem domain.
(ii) Which algorithm has the property that if a wrong path is chosen, it can be corrected afterwards?
Ans:

The Best first search has this property: even if a wrong path is selected first, the mistake can be corrected afterwards, since unpromising paths are merely set aside, not discarded, and the search keeps alternative paths alive until the target is found. Searching takes place as follows: Best-first search is a search algorithm which explores a graph by expanding the most promising node, chosen according to a specified rule. It estimates the promise of a node n by a heuristic evaluation function f(n), which in general may depend on the description of n, the description of the goal, the information gathered by the search up to that point, and, most important, on any extra knowledge about the problem domain. The heuristic attempts to predict how close the end of a path is to a solution, so that paths which are judged to be closer to a solution are extended first; if an extended path stops looking promising, the search returns to a previously shelved path, thereby correcting the earlier choice.
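A minimal sketch of greedy best-first search using a priority queue (the graph and heuristic values below are invented for illustration; the function name is our own):

```python
import heapq

def best_first_search(graph, h, start, goal):
    """Expand the frontier node with the smallest heuristic value h(n).

    graph maps node -> list of neighbours; h maps node -> estimated
    distance to the goal. Shelved paths stay in the frontier, so a
    wrong first choice can be corrected later.
    """
    frontier = [(h[start], start, [start])]
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nb in graph[node]:
            if nb not in visited:
                heapq.heappush(frontier, (h[nb], nb, path + [nb]))
    return None

graph = {'S': ['A', 'B'], 'A': ['G'], 'B': ['G'], 'G': []}
h = {'S': 3, 'A': 1, 'B': 2, 'G': 0}
print(best_first_search(graph, h, 'S', 'G'))  # ['S', 'A', 'G']
```

Replacing the priority `h[nb]` with a plain FIFO queue turns this into breadth-first search, which uses no domain knowledge at all.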

(b) Describe the difference between a Deterministic Finite Automaton and a Non-Deterministic Finite Automaton. In general, which one is expected to have a smaller number of states? (5 Marks)
Ans:
Transition function:
  DFA: more restrictive.
  NFA: less restrictive than a DFA's.
Number of possible transitions for a given state:
  DFA: exactly one and only one next state for a given state and input symbol combination.
  NFA: several transitions may be specified from a given state on the same input symbol.
Build process:
  DFA: more complicated to build than an NFA in some examples.
  NFA: easier to build.
Use of the ε symbol:
  DFA: cannot use ε.
  NFA: uses ε to start a new branch without reading any input symbol.
Space:
  DFA: O(2^|r|) states, where r is the regular expression; consumes more space.
  NFA: O(|r|) states; consumes less memory space than the DFA.
Time (for an input string w):
  DFA: O(|w|).
  NFA: O(|r| · |w|).
Number of states:
  DFA: up to 2^n.
  NFA: n.
In general, the NFA is expected to have the smaller number of states: by the subset construction, an NFA with n states may require up to 2^n states in the equivalent DFA.
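To see why the NFA can be exponentially smaller, consider the language of binary strings whose 3rd symbol from the end is 1: an NFA needs only 4 states, while the equivalent DFA needs 2³ = 8. The sketch below (the transition structure is our own illustration) simulates the NFA by tracking its set of active states, which is exactly the subset construction done on the fly:

```python
def nfa_accepts(s, k=3):
    """NFA with k+1 states for: the k-th symbol from the end is '1'."""
    states = {0}                     # state 0 is the start state
    for c in s:
        nxt = set()
        for q in states:
            if q == 0:
                nxt.add(0)           # keep waiting in the start state
                if c == '1':
                    nxt.add(1)       # nondeterministic guess: this '1' is k-th from the end
            elif q < k:
                nxt.add(q + 1)       # count the symbols after the guessed '1'
        states = nxt
    return k in states               # state k is the only accepting state

print(nfa_accepts('0100'))  # True  (the '1' is 3rd from the end)
print(nfa_accepts('0010'))  # False
```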

(c) Explain the "Turing Thesis" in detail. (5 Marks)
Ans:

Every effectively calculable function (effectively decidable predicate) is general recursive. "Since a precise mathematical definition of the term effectively calculable (effectively decidable) has been wanting, we can take this thesis as a definition of it." In other words, everything calculable can be computed by a Turing Machine (TM). Points derived from the Church-Turing thesis:
- Any mechanical computation can be performed by a Turing Machine.
- There is a TM corresponding to every computable problem.
- We can model any mechanical computer with a TM.
- The set of languages that can be decided by a TM is identical to the set of languages that can be decided by any mechanical computing machine.
- If there is no TM that decides problem P, there is no algorithm that solves problem P.

Example: the Lambda Calculus is equivalent in computational power to a Turing Machine.

Question 4: (a) Write a randomized algorithm (Randomized Select) to find the ith order statistic in a set of n elements. (8 Marks)
Ans: Order Statistics

The ith order statistic in a set of n elements is the ith smallest element. The minimum is thus the 1st order statistic, and the maximum is the nth order statistic. The median is the ⌈n/2⌉th order statistic; if n is even, there are two medians. Order statistics also tell us how many comparisons are required to find the minimum element in a set, and they help to find the minimum and maximum together with less than twice the cost of finding one of them.

The Selection Problem is to find order statistics; in other words, it asks for the ith smallest element of a set. Randomized Select is a randomized algorithm for the selection problem.
Algorithm: RandomizedSelect(A, p, r, i)
  if (p == r) then return A[p];
  q ← RandomizedPartition(A, p, r);   // partition around a pivot chosen at random
  k ← q − p + 1;                      // rank of the pivot within A[p..r]
  if (i == k) then return A[q];
  if (i < k) then return RandomizedSelect(A, p, q − 1, i);
  else return RandomizedSelect(A, q + 1, r, i − k);

Analyzing RandomizedSelect:
Worst case: the partition is always 0 : n−1, so
  T(n) = T(n − 1) + O(n) = O(n²)
Best case: for a 9 : 1 partition,
  T(n) = T(9n/10) + O(n) = O(n)
which is better than sorting first.
Average case: for an upper bound, assume the ith element always falls in the larger partition:
  T(n) ≤ (2/n) · Σ_{k=⌊n/2⌋}^{n−1} T(k) + O(n)
which gives T(n) = O(n) by substitution.
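A runnable sketch of RandomizedSelect (our own illustration, with the randomized partition inlined and a 1-based rank i; names and test data are assumptions):

```python
import random

def randomized_select(a, p, r, i):
    """Return the i-th smallest (1-based) element of a[p..r]; expected O(n)."""
    if p == r:
        return a[p]
    # Randomized partition: move a random pivot to position r, then Lomuto.
    s = random.randint(p, r)
    a[s], a[r] = a[r], a[s]
    x = a[r]
    lo = p - 1
    for j in range(p, r):
        if a[j] <= x:
            lo += 1
            a[lo], a[j] = a[j], a[lo]
    a[lo + 1], a[r] = a[r], a[lo + 1]
    q = lo + 1
    k = q - p + 1                       # rank of the pivot within a[p..r]
    if i == k:
        return a[q]
    if i < k:
        return randomized_select(a, p, q - 1, i)
    return randomized_select(a, q + 1, r, i - k)

data = [12, 3, 5, 7, 4, 19, 26]
print(randomized_select(data, 0, len(data) - 1, 3))  # 5 (the 3rd smallest)
```

Unlike Randomized Quicksort, only one side of the partition is recursed into, which is why the expected cost drops from O(n log n) to O(n).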

(b) Write a recursive procedure to compute the factorial of a number. (5 Marks)
Ans:
Factorial(n: integer): integer
begin
  if n <= 1 then
    return 1
  else
    return n * Factorial(n − 1)
end

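The recursive procedure translates directly into Python (a sketch; the function name is our own):

```python
def factorial(n):
    """Recursive factorial: n! = n * (n-1)!, with base case 0! = 1! = 1."""
    if n <= 1:
        return 1
    return n * factorial(n - 1)

print(factorial(5))  # 120
```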

(c) Design a Turing Machine that increments a binary number which is stored on the input tape. (7 Marks)
Ans: The number is written on the tape before starting the program. Our machine will use four states (0, 1, 2, 3) and a final stable state F.
State 0: We look for the number, moving the head to the right until encountering a 1 or a 0. When done we switch to state 1. It takes three instructions:

Present State | Read  | Write | Move | Next State | Remarks
0             | blank | blank | R    | 0          | Keep searching till input is found
0             | 0     | 0     | R    | 1          | If input is 0, move right to state 1
0             | 1     | 1     | R    | 1          | If input is 1, move right to state 1

State 1: We have found the number; we continue until the first space. Then we move the head backwards one step, to put it in front of the rightmost (least significant) binary digit.

Present State | Read  | Write | Move | Next State | Remarks
1             | blank | blank | L    | 2          | First space found: move left to state 2 (the adding state)
1             | 0     | 0     | R    | 1          | Keep searching to the right till a space is found
1             | 1     | 1     | R    | 1          | Keep searching to the right till a space is found

State 2: We want to increment the binary digit. If it is a 0, we overwrite it with a 1 and switch to the next state. If it is a 1, we overwrite it with a 0, then move the head to the left (onto the next higher digit) and repeat state 2. If we find a space (the most significant digit was a 1), we overwrite it with a 1 and switch to the next state.

Present State | Read  | Write | Move | Next State | Remarks
2             | blank | 1     | R    | 3          | Space found while carrying: write 1 and go to state 3
2             | 0     | 1     | R    | 3          | If we read 0, write 1, move right, go to state 3
2             | 1     | 0     | L    | 2          | If we read 1, write 0 (carry) and continue searching left

State 3: We move the head along the tape to the right of the number, for aesthetic reasons during the final display of the tape. Then we switch to the final state F.

Present State | Read  | Write | Move | Next State | Remarks
3             | blank | blank | R    | F          | Space found: input bits are finished, so halt by jumping to the final state
3             | 0     | 0     | R    | 3          | Keep the input unchanged and keep scanning until a space is found
3             | 1     | 1     | R    | 3          | Keep the input unchanged and keep scanning until a space is found

Full Script
The initial tape contains the decimal number 151, i.e. 10010111 in binary. In each rule below the five fields are: present state, read symbol, write symbol, move, next state; a blank cell is written as a space.

# This script allows a Turing machine to increment a binary number.
# The initial tape with the number:
10010111
# State 0
0  R0
000R1
011R1
# State 1
1  L2
100R1
111R1
# State 2
2 1R3
201R3
210L2
# State 3
3  RF
300R3
311R3

Here is a sample run of the script. We can see the state of the machine in parentheses on the left, and the content of the tape with the current cell surrounded by two | characters.

$ ./turing.sed inc.tm
(0) ||10010111
(0) |1|0010111
(1) 1|0|010111
(1) 10|0|10111
(1) 100|1|0111
(1) 1001|0|111
(1) 10010|1|11
(1) 100101|1|1
(1) 1001011|1|
(1) 10010111||
(2) 1001011|1|
(2) 100101|1|0
(2) 10010|1|00
(2) 1001|0|000
(3) 10011|0|00
(3) 100110|0|0
(3) 1001100|0|
(3) 10011000||
(F) 10011000||
Final state F reached: end of processing.
$
Binary 10011000 = 152 in decimal.
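The machine can also be checked with a small simulator (a Python stand-in for the sed script above; the rule table encodes the four state tables, and the function name is our own):

```python
def run_increment_tm(tape):
    """Simulate the four-state increment machine on a binary number.

    tape is a string of '0'/'1'; ' ' is the blank symbol.
    """
    # (state, read) -> (write, head move, next state)
    rules = {
        ('0', ' '): (' ', +1, '0'), ('0', '0'): ('0', +1, '1'), ('0', '1'): ('1', +1, '1'),
        ('1', ' '): (' ', -1, '2'), ('1', '0'): ('0', +1, '1'), ('1', '1'): ('1', +1, '1'),
        ('2', ' '): ('1', +1, '3'), ('2', '0'): ('1', +1, '3'), ('2', '1'): ('0', -1, '2'),
        ('3', ' '): (' ', +1, 'F'), ('3', '0'): ('0', +1, '3'), ('3', '1'): ('1', +1, '3'),
    }
    cells = [' '] + list(tape) + [' ']   # blank padding on both sides
    head, state = 0, '0'
    while state != 'F':
        if head < 0:                     # grow the tape on demand
            cells.insert(0, ' ')
            head = 0
        if head >= len(cells):
            cells.append(' ')
        write, move, state = rules[(state, cells[head])]
        cells[head] = write
        head += move
    return ''.join(cells).strip()

print(run_increment_tm('10010111'))  # 10011000 (151 + 1 = 152)
```

The carry case is worth checking too: incrementing 111 yields 1000, exercising the state 2 rule that writes a new most significant 1 over the blank.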
