
B.E. DEGREE EXAMINATION, NOVEMBER / DECEMBER 2006

COMPUTER SCIENCE AND ENGINEERING
CS 1201 - DESIGN AND ANALYSIS OF ALGORITHMS
ANSWER KEY
PART-A

1. What is an algorithm design technique?


It is a general approach to solving problems algorithmically, applicable to a variety of
problems from different areas of computing.
General design techniques are:
Brute force
Divide and Conquer
Decrease and Conquer
Transform and Conquer
Greedy Technique
Dynamic Programming
Backtracking
Branch and Bound
2. Compare the order of growth of n! and 2^n.
lim(n→∞) n!/2^n = ∞, so n! has a larger order of growth than 2^n.
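A short justification (a sketch, not part of the original key): write the ratio as a product of n factors, all of which except the first are at least 1,
n!/2^n = (1/2)(2/2)(3/2)…(n/2) ≥ (1/2)(n/2) = n/4 for all n ≥ 2,
and n/4 → ∞ as n → ∞, hence lim(n→∞) n!/2^n = ∞.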
3. What is the tool for analyzing the time efficiency of a non-recursive algorithm?
* Set up a sum expressing the number of times the algorithm’s basic operation is executed.
* Using standard formulas and rules of sum manipulation, find either a closed-form formula for the count or at least
its order of growth.

4.What is algorithm animation?


It is dynamic algorithm visualization. It shows a continuous, movie-like presentation of an algorithm’s
operations.

5. Compare DFS and BFS


DEPTH-FIRST SEARCH (DFS) vs. BREADTH-FIRST SEARCH (BFS)

1. Depth-first search starts its visiting at an arbitrary vertex and goes as deep as possible along each
branch; breadth-first search visits the vertices in a concentric (level-by-level) manner.
2. It is convenient to use a stack to trace the operation of depth-first search, and a queue to trace the
operation of breadth-first search.
3. In both traversals, when an unvisited vertex is reached for the first time it is attached as a child to the
vertex from which it is being reached; such an edge is called a tree edge.
4. In DFS, an edge leading to a previously visited vertex is called a back edge; in BFS, such an edge is
called a cross edge.
5. ALGORITHM DFS(G)
   // Input: Graph G = (V, E)
   // Output: Graph G with its vertices marked with consecutive integers
   count ← 0
   for each vertex v in V do
       if v is marked with 0
           dfs(v)

   dfs(v)
   // marks all unvisited vertices connected to v via the global variable count
   count ← count + 1; mark v with count
   for each vertex w in V adjacent to v do
       if w is marked with 0
           dfs(w)

   ALGORITHM BFS(G)
   // Input: Graph G = (V, E)
   // Output: Graph G with its vertices marked with consecutive integers
   count ← 0
   for each vertex v in V do
       if v is marked with 0
           bfs(v)

   bfs(v)
   // marks all unvisited vertices connected to v via the global variable count, using a queue
   count ← count + 1; mark v with count and initialize a queue with v
   while the queue is not empty do
       for each vertex w adjacent to the front vertex do
           if w is marked with 0
               count ← count + 1; mark w with count; add w to the queue
       remove the front vertex from the queue
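Not part of the original key: a minimal runnable Python sketch of both traversals, assuming a dictionary-based adjacency list (the example graph g and function names are illustrative).

from collections import deque

def dfs(graph, start, visited=None):
    # Recursive depth-first traversal; the implicit stack is the call stack.
    if visited is None:
        visited = []
    visited.append(start)
    for w in graph[start]:
        if w not in visited:
            dfs(graph, w, visited)
    return visited

def bfs(graph, start):
    # Breadth-first traversal using an explicit queue.
    visited = [start]
    queue = deque([start])
    while queue:
        v = queue.popleft()
        for w in graph[v]:
            if w not in visited:
                visited.append(w)
                queue.append(w)
    return visited

g = {'a': ['b', 'c'], 'b': ['a', 'd'], 'c': ['a', 'd'], 'd': ['b', 'c']}
print(dfs(g, 'a'))   # ['a', 'b', 'd', 'c']
print(bfs(g, 'a'))   # ['a', 'b', 'c', 'd']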

6. Find the number of comparisons made by the sequential search in the worst case and the best
case.
Input size: number of elements, n.
Basic operation: key comparison A[i] = K.
In the worst case, the key is compared with every element (or is not in the array at all): Cworst(n) = n.
In the best case, the first element is the search key: Cbest(n) = 1.
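As an illustration (not part of the original answer), a small Python sketch that returns the number of key comparisons made:

def sequential_search(a, key):
    # Returns (index of key, number of key comparisons); index is -1 if key is absent.
    comparisons = 0
    for i, item in enumerate(a):
        comparisons += 1
        if item == key:
            return i, comparisons
    return -1, comparisons

print(sequential_search([7, 3, 9, 5], 7))   # best case: (0, 1)
print(sequential_search([7, 3, 9, 5], 4))   # worst case: (-1, 4), i.e. n comparisons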

7. How efficient is prim’s algorithm?


If prism’s algorithm is implemented using adjacency matrix,list then the time complexity is O(|
V2|),O(E log2 V) where V is the total number of vertices in a graph and E is the total number of edges.the
same can be implemented with queue representation also.

8. What do you mean by Huffman code?


It is an optimal prefix-free variable-length encoding scheme that assigns
bit strings to characters based on their frequencies in a given text. This is accomplished by a greedy
construction of a binary tree whose leaves represent the alphabet characters and whose edges are
labeled with 0’s and 1’s.
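A minimal greedy construction in Python using a heap (an illustrative sketch, not from the answer key; the frequency table is an assumption):

import heapq

def huffman_codes(freq):
    # freq: dict mapping character -> frequency.
    # Heap entry layout: [weight, tie_breaker, [char, code], [char, code], ...]
    heap = [[w, i, [ch, ""]] for i, (ch, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    i = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)          # least frequent subtree
        hi = heapq.heappop(heap)          # second least frequent subtree
        for pair in lo[2:]:
            pair[1] = '0' + pair[1]       # edges into the first subtree labeled 0
        for pair in hi[2:]:
            pair[1] = '1' + pair[1]       # edges into the second subtree labeled 1
        heapq.heappush(heap, [lo[0] + hi[0], i] + lo[2:] + hi[2:])
        i += 1
    return dict((ch, code) for ch, code in heap[0][2:])

print(huffman_codes({'A': 0.35, 'B': 0.1, 'C': 0.2, 'D': 0.2, '_': 0.15}))
# e.g. {'C': '00', 'D': '01', 'B': '100', '_': '101', 'A': '11'}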

9. What is a state space tree?


A state space tree is a rooted tree whose nodes represent partially constructed solutions to the problem in question.
A node is terminated as soon as it can be guaranteed that no solution to the problem can be obtained
by considering choices that correspond to the node’s descendants.

10. What are the additional items required for branch and bound compared to the backtracking
technique?
Backtracking:
(i) Traced using depth first search.
(ii) Decision problems can be solved.
(iii) If dead end is reached during solution then backtracking and try another solution.
(iv) Example:
 Knapsack
 Sum of subset

Branch and Bound:


(i) Traced using both DFS and BFS.
(ii) Optimization problems can be solved.
(iii) The search is based on bounding values (an upper bound or a lower bound is computed additionally at each node).
(iv) Example
 Job sequencing
 TSP

PART-B

11.(a).(i) Discuss briefly the sequence of steps in designing and analyzing an algorithm.(10)

 Understand the problem


 Decide on: Computational means, exact vs. approximate solving,
data structures, algorithm design technique
 Design an algorithm
 Prove correctness
 Analyze the algorithm
 Code the algorithm

Understanding the problem:

 The first important step in the problem-solving approach is to understand the problem.
 That is, while defining the problem statement we must initially concentrate on what must be
done rather than on how to do it.
 An input to an algorithm specifies an instance of the problem the algorithm solves.
 We need to specify the range of instances the algorithm needs to handle.
Ascertaining the capability of computational device:
 Once we completely understand the problem, we need to ascertain the capabilities
of the computational device the algorithm is intended for.
 The majority of algorithms are destined to be programmed for computers closely
resembling the von Neumann machine.
 It can be assumed as a generic one-processor, random-access-machine (RAM).

Choosing between exact and approximate problem solving:

An algorithm that can solve the problem exactly is called an exact algorithm.
The algorithm that can solve the problem approximately is called an approximation
algorithm.
The problems that can be solved only approximately are
1. extracting square roots
2. solving non-linear equations
3. evaluating definite integrals
An approximation algorithm may also be chosen when an available algorithm for solving the problem
exactly is unacceptably slow because of the problem’s intrinsic complexity.

Deciding on appropriate data structures:

 Algorithm + Data structures= Program.


 In object oriented programming, data structures remain crucially important
for both the design and analysis of algorithms.

Algorithm design technique:

 An algorithm design technique is a general approach to solving problems


algorithmically that is applicable to a variety of problems from different areas of
computing.

Methods of specifying an algorithm:


 We typically describe algorithms as programs written in pseudocode.
 An algorithm is independent of any language or machine whereas a
program is dependent on a language and machine.
 Pseudocode is a way to represent the step-by-step method of finding the
solution to the given problem.

11.(a) (ii) .Explain some of the problem types used in the design of algorithm. (6)
The important problem types are:

a. Sorting
b. Searching
c. String processing
d. Graph problems
e. Combinatorial problems
f. Geometric problems
g. Numerical problems

Sorting

 In sorting problem, we arrange the items of a given list in ascending order.


 There must be a relation of total ordering.
 We usually sort a list of numbers, characters from an alphabet, character
strings and records.
 To sort a list of numbers or records, we need a piece of information, called
a key, to guide the sorting.
 The sorted list helps in searching.
Searching

 The searching problem deals with finding a given value, called a search
key in the given set.
 In most applications, searching has to be considered in conjunction with
other operations: the addition of an item to and the deletion of an item from the data set.
 In such situations, data structures and algorithms should be chosen to
strike a balance among the requirements of each operation.
String processing

 A string is a sequence of characters from an alphabet.


 Strings of particular interest are text strings, which comprise letters,
numbers, and special characters; bit strings, which comprise zeros and ones; and
gene sequences, which can be modeled by strings of characters from the four-
character alphabet.
 One particular problem – that of searching for a given word in a text – has
attracted special attention from researchers. They call it string matching.

Graph problem

 A graph can be thought of as a collection of points called vertices, some of


which are connected by line segments called edges.
 The most widely known graph problems of this type are probably the
traveling salesman problem and the graph-coloring problem.
 The graph coloring problem asks us to assign the smallest number of colors
to the vertices of a graph so that no two adjacent vertices are of the same color.

Combinatorial problem

 Combinatorial problems are the most difficult problems in computing


from both the theoretical and the practical standpoints.
 First, the number of combinatorial objects typically grows extremely fast
with a problem’s size, reaching unimaginable magnitudes even for moderate-sized
instances. Second, there are no known algorithms for solving most such problems
exactly in an acceptable amount of time.

Geometric problem

 Geometric algorithms deal with geometric objects such as points, lines,


and polygons.
 Ancient Greeks were very much interested in developing procedures for
solving a variety of geometric problems, including problems of constructing
simple geometric shapes-triangles, circles, and so on - with an unmarked ruler and
a compass.
 The closest-pair problem is self-explanatory: given n points in the
plane, find the closest pair among them.
 The convex hull problem asks to find the smallest convex polygon
that would include all the points of a given set.

Numerical problem

 Numerical problems, another large special area of applications, are


problems that involve mathematical objects of continuous nature: solving
equations and systems of equations, computing definite integrals, evaluating
functions, and so on.
 Such problems typically require manipulating real numbers, which can be
represented in a computer only approximately.

(b).(i). Explain the general frame work for analyzing the efficiency of algorithms. (8)
Analyzing frame work
Time efficiency indicates how fast an algorithm in question runs; space efficiency deals
with the extra space the algorithm requires.
Measuring an input’s size
An algorithm’s efficiency as a function of some parameter n indicating the algorithm’s
input size.
In most cases, selecting such a parameter is quite straightforward.
Units for measuring running time
 Standard time units are not used for measuring running time because of the dependence on the speed
of a particular computer, on the quality of the program implementing the algorithm and of the compiler used in
generating the machine code, and on the difficulty of clocking the actual running time of the program.
 Instead, we count the number of times the algorithm’s basic operation is executed.
 The basic operation of an algorithm is usually the most time-consuming operation in
the algorithm’s innermost loop.
Orders of growth
 A difference in running times on small inputs is not what really distinguishes
efficient algorithms from inefficient ones.
 Algorithms that require an exponential number of operations are practical for
solving only problems of very small sizes.
Worst-case, best-case, and average-case efficiencies
 The worst-case efficiency of an algorithm is its efficiency for the worst-case
input of size n, which is an input of size n for which the algorithm runs the longest among
all possible inputs of that size.
 The best-case efficiency of an algorithm is its efficiency for the best-case input of
size n, which is an input of size n for which the algorithm runs the fastest among all
possible inputs of that size.
 Neither the worst-case analysis nor its best-case counterpart yields the necessary
information about an algorithm’s behavior on a “typical” or “random” input. This is the
information that the average-case efficiency seeks to provide.
 There is also another type of efficiency, called amortized efficiency. It applies not to a single
run of an algorithm but rather to a sequence of operations performed on the same data
structure.

(b). (ii) Explain the various asymptotic efficiencies of an algorithm. (8)


To compare and rank orders of growth we use three notations : O(big oh) , Ω(big omega), Ө(big theta).

Informal introduction:
O(g(n)) is the set of all functions with a smaller or the same order of growth as g(n).
E.g. n ∈ O(n²), 100n+5 ∈ O(n²), ½n(n−1) ∈ O(n²).
The first two functions are linear and hence have a smaller order of growth than g(n)=n²; the last is quadratic
and hence has the same order of growth as n².
The second notation, Ω(g(n)), stands for the set of all functions with a larger or the same order of growth as g(n).
E.g. n³ ∈ Ω(n²), ½n(n−1) ∈ Ω(n²),
but 100n+5 does not belong to Ω(n²).
Finally Ө(g(n)) is the set of all functions that have the same order of growth as g(n).
O-notation:
A function t(n) is said to be in O(g(n)), denoted t(n) ε O(g(n)), if t(n) is bounded above by some constant
multiple of g(n) for all large n, i.e., if there exist some positive constant c and some nonnegative integer n0 such
that
t(n)≤cg(n) for all n ≥ n0
for e.g. 100n+5≤100n+n(for all n≥5)=101n≤101n².
c=101 n0=5.
for e.g. 100n+5≤100n+5n(for all n≥1)=105n.
c=105 n0=1.
Graph:

Ω-notation:
A function t(n) is said to be in Ω (g(n)), denoted t(n) ε(g(n)),if t(n) is bounded below by some positive constant
multiple of g(n) for all large n, i.e., if there exist some positive constant c and some nonnegative integer n0 such
that
T(n) ≥ cg(n)for all n ≥ n0.
e.g. n³≥n² for all n≥0,
c=1 and n0=0.
Graph:

Ө-notation:
A function t(n) is said to be in Ө(g(n)), denoted t(n)εӨ(g(n)), if t(n) is bounded both above and below by
some positive constant multiplies of g(n) for all large n, i.e., if there exist some positive constant c1,c2 and
some nonnegative integer n0 such that
c2g(n) ≤t(n) ≤c1g(n) for all n≥n0.
For e.g.
½n(n-1)εӨ(n²).
R.H.S
1/2n(n-1)=1/2n²-1/2n≤1/2n² for all n≥0.
L.H.S
1/2n(n-1)=1/2n²-1/2n≥1/2n²-1/2n1/2n (for all n≥2)=1/4 n² .
Hence c2=1/4 , c1=1/2 , n0=2.
Graph:
12.(a) (i). Design a recursive algorithm to compute the factorial function F(n)=n! For an
arbitrary non negative integer n and also derive the recurrence relation.(10)

ALGORITHM F(n)
// Computes n! recursively
// Input: A nonnegative integer n
// Output: The value of n!
if n = 0 return 1
else return F(n-1) * n


Analysis

 Input size: n
 Basic operation: multiplication
 No additional assumptions are needed.
 The recurrence relation for the number of multiplications M(n) is
M(n) = M(n-1) + 1 for n > 0, with the initial condition M(0) = 0   ………..(1)
Solving (1) by backward substitution:
M(n) = M(n-1) + 1
     = [M(n-2) + 1] + 1 = M(n-2) + 2
     = [M(n-3) + 1] + 2 = M(n-3) + 3
     …
     = M(n-i) + i, at the i-th substitution, 1 ≤ i ≤ n.
Using the initial condition, n - i = 0, i.e. i = n:
M(n) = M(0) + n = 0 + n = n.
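A hedged Python sketch (not part of the original answer) that returns both n! and the number of multiplications, confirming M(n) = n:

def factorial(n):
    # Returns (n!, number of multiplications); mirrors M(n) = M(n-1) + 1, M(0) = 0.
    if n == 0:
        return 1, 0
    value, mults = factorial(n - 1)
    return value * n, mults + 1

print(factorial(5))   # (120, 5): 5! = 120 computed with exactly n = 5 multiplications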

12.(a) (ii). Discuss the features of animation of an algorithm.(6)


Features of an animation

 Be consistent
 Be Interactive
 Be clear and concise
 Be forgiving to the user
 Adapt to the knowledge level of the user
 Emphasize the visual component
 Keep the user interested
 Incorporate both symbolic and iconic representations
 Include algorithm analysis and comparisons with other algorithms for the same
problem
 Include execution history

12. (b) (i) Design a non recursive algorithm for computing the product of two n *
n matrices and find the time efficiency of the algorithm.(10)

Example 1: Consider the problem of finding the value of the largest element in a list of
n numbers.

Algorithm: Max Element (A [0….n-1])


// Determines the value of the largest element in a given array
// Input: An array A [0….n-1] of real numbers
// output: The value of the largest element in A
maxval <- A[0]
for i<-1 to n-1 do
if A[i]>maxval
maxval<-A[i]
return maxval

Example 2: Consider the element uniqueness problem: check whether all the elements in a
given array are distinct.
Algorithm: Unique Elements (A[0…n-1])
// Determines whether all the elements in a given array are distinct
// Input: An array A[0….n-1] of real numbers
// Output: Returns “true” if all the elements in A are distinct
// and “false” otherwise
For i<-0 to n-2 do
For j<-i+1 to n-1 do
If A[i]=A[ j] return false
Return true

Example 3: Given two n- by- n matrices A and B, find the time efficiency of the definition-based
algorithm for computing their product C=AB.
Algorithm: Matrix Multiplication (A[0…n-1, 0….n-1],B[0….n-1,0….n-1])

// Multiplies two n-by-n matrices by the definition-based algorithm


// Input: Two n-by-n matrices A and B
//Output: Matrix C=AB
For i<- 0 to n-1 do
For j<-0 to n-1 do
C[i,j]<- 0.0
For k<-0 to n-1 do
C[i,j]<-C[i,j]+A[i,k]*B[k,j]
Return C

The basic operation is the multiplication in the innermost loop. The total number of multiplications is
M(n) = Σ(i=0 to n-1) Σ(j=0 to n-1) Σ(k=0 to n-1) 1 = n · n · n = n³,
so the time efficiency of the definition-based algorithm is Θ(n³).
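As an illustration (not from the answer key), a direct Python transcription of the definition-based algorithm using plain nested lists:

def matrix_multiply(A, B):
    # Definition-based multiplication of two n-by-n matrices: Theta(n^3) multiplications.
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matrix_multiply(A, B))   # [[19.0, 22.0], [43.0, 50.0]]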
12.(b). (ii).Write short note on algorithm visualization and its applications.(6)

Algorithm visualization is defined as the use of images to convey some useful


information about algorithms.
1. It can show an algorithm’s operation on different kinds of inputs.
2. It can show the same input for different algorithms, in order to compare their execution speed.

Two principle variations of algorithm visualization:

1. Static algorithm visualization – shows an algorithm’s progress


through a series of still images.
2. Dynamic algorithm visualization (algorithm animation) – shows a continuous,
movie-like presentation of an algorithm’s operations.

Features of an animation

 Be consistent
 Be Interactive
 Be clear and concise
 Be forgiving to the user
 Adapt to the knowledge level of the user
 Emphasize the visual component
 Keep the user interested
 Incorporate both symbolic and iconic representations
 Include algorithm analysis and comparisons with other algorithms for the same
problem
 Include execution history

Applications

1. Education – Seeks to help students learning algorithms.


2. Research – Helps to uncover some unknown features of algorithms.

13.(a) . Set up and solve a recurrence relation for the number of key comparisons made by
the above pseudo code

Assume n is a power of 2; the recurrence relation for the number of key comparisons C(n) is:
C(n) = C(n/2) + C(n/2) + Cmerge(n) for n > 1, with C(1) = 0,
where Cmerge(n) is the number of key comparisons done in merging.
In the worst case, exactly one comparison is made at each step of the merge, so Cmerge(n) = n-1 and
Cworst(n) = 2Cworst(n/2) + n-1 for n > 1, Cworst(1) = 0.
Using Master’s theorem:
T(n)=a T(n/b) +f(n)
Here a=2, b=2 and to find the value of d;
d=1
Here a=b
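The merge sort pseudocode referred to by the question is not reproduced in this key; purely as an illustration, here is a hedged Python sketch that counts the key comparisons made while merging, so the formula above can be checked numerically (function names and the sample input are assumptions):

def merge_sort(a):
    # Returns (sorted list, number of key comparisons made during merging).
    if len(a) <= 1:
        return a, 0
    mid = len(a) // 2
    left, c1 = merge_sort(a[:mid])
    right, c2 = merge_sort(a[mid:])
    merged, i, j, comps = [], 0, 0, 0
    while i < len(left) and j < len(right):
        comps += 1                      # one key comparison per step of the merge
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged += left[i:] + right[j:]
    return merged, c1 + c2 + comps

# A worst-case input for n = 8 (a power of 2): C(8) = 8*log2(8) - 8 + 1 = 17
print(merge_sort([5, 1, 7, 3, 6, 2, 8, 4]))   # ([1, 2, 3, 4, 5, 6, 7, 8], 17)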

14. (a) (i). Construct a minimum spanning tree using Kruskal’s algorithm with your
own example. (10)
The following problem arises naturally in several practical situations: given n points, connect them in
the cheapest possible way so that there will be a path between every pair of points. We can represent the points
by vertices of a graph, possible connections by the graph’s edges, and the connection costs by the edge weights.
Then the question can be posed as the minimum spanning tree problem, defined formally as follows.

DEFINITION A spanning tree of a connected graph is its connected acyclic subgraph (i.e., a tree) that contains
all the vertices of the graph. A minimum spanning tree of a weighted connected graph is its spanning tree of the
smallest weight, where the weight of a tree is defined as the sum of the weights on all its edges. The minimum
spanning tree problem is the problem of finding a minimum spanning tree for a given weighted connected
graph.

If we were to try an exhaustive-search approach to constructing a minimum spanning tree, we would face two
serious obstacles. First, the number of spanning trees grows exponentially with the graph size (at least for dense
graphs). Second, generating all spanning trees for a given graph is not easy; in fact, it is more difficult than
finding a minimum spanning tree for a weighted graph by using one of several efficient algorithms available for
this problem.

Prim’s algorithm constructs a minimum spanning tree through a sequence of expanding subtrees. The initial
subtree in such a sequence consists of a single vertex selected arbitrarily from the set V of the graph’s vertices.
On each iteration, we expand the current tree in the greedy manner by simply attaching to it the nearest vertex
not in that tree. The algorithm stops after all the graph’s vertices have been included in the tree being
constructed. Since the algorithm expands a tree by exactly one vertex on each of its iterations, the total number
of such iterations is n-1, where n is the number of vertices in the graph. The tree generated by the algorithm is
obtained as the set of edges used for the tree expansions.
Here is the pseudocode of this algorithm.
ALGORITHM Prim(G)
// Prim’s algorithm for constructing a minimum spanning tree
// Input: A weighted connected graph G=( V,E)
// Output: ET, the set of edges composing a minimum spanning tree of G
VT <- {v0} // the set of tree vertices can be initialized with any vertex
ET <- 0
for i<-1 to |v|-1 do
find a minimum-weight edge e*- (v*,u*) among all the edges (v,u)
such that v is in VT and u in V-VT
VT <-VT U {u*}
ET <- ET U {e*}
return ET

The nature of Prim’s algorithm makes it necessary to provide each vertex not in the current tree with the
information about the shortest edge connecting the vertex to a tree vertex. We can provide such information by
attaching two labels to a vertex: the name of the nearest tree vertex and the length of the corresponding edge.
Vertices that are not adjacent to any of the tree vertices can be given the infinite label indicating their “infinite”
distance to the tree vertices and a null label for the name of the nearest tree vertex. (Alternatively, we can split
the vertices that are not in the tree into two sets, the “fringe” and the “unseen”. The fringe contains only the
vertices that are not in the tree but are adjacent to at least one tree vertex. These are the candidates from which
the next tree vertex is selected. The unseen vertices are all the other vertices of the graph, called “unseen”
because they are yet to be affected by the algorithm). With such labels, finding the next vertex to be added to the
current tree T=(VT,ET) becomes a simple task of finding a vertex with the smallest distance label in set V – VT.
Ties can be broken arbitrarily.

After we have identified a vertex u* to be added to the tree, we need to perform two operations:

Move u* from the set V-VT to the set of tree vertices VT .


For each remaining vertex u in V-VT that is connected to u* by a shorter edge than u’s current distance label,
update its labels by u* and the weight of the edge between u* and u, respectively.
The efficiency of Prim’s algorithm depends on the data structures chosen for the graph itself and for the priority
queue of the set V-VT whose vertex priorities are the distances to the nearest tree vertices.
For example, if a graph is represented by its weight matrix and the priority queue is implemented as an
unordered array, the algorithm’s running time will be in O(|V|²). Indeed, on each of the |V|-1 iterations, the array
implementing the priority queue is traversed to find and delete the minimum and then to update, if necessary,
the priorities of the remaining vertices.
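A compact Python sketch of Prim’s algorithm using a binary heap as the priority queue, which gives the O(|E| log |V|) behaviour mentioned earlier (not part of the original key; the adjacency-list format and example graph are illustrative assumptions):

import heapq

def prim_mst(graph, start):
    # graph: dict vertex -> list of (weight, neighbour) pairs.
    # Returns the list of edges of a minimum spanning tree as (tree_vertex, new_vertex, weight).
    visited = {start}
    fringe = [(w, start, u) for w, u in graph[start]]   # candidate edges leaving the tree
    heapq.heapify(fringe)
    mst_edges = []
    while fringe and len(visited) < len(graph):
        w, v, u = heapq.heappop(fringe)       # cheapest edge leaving the current tree
        if u in visited:
            continue                          # stale entry, skip it
        visited.add(u)
        mst_edges.append((v, u, w))
        for w2, x in graph[u]:
            if x not in visited:
                heapq.heappush(fringe, (w2, u, x))
    return mst_edges

g = {'a': [(3, 'b'), (5, 'd')],
     'b': [(3, 'a'), (4, 'c'), (6, 'd')],
     'c': [(4, 'b'), (2, 'd')],
     'd': [(5, 'a'), (6, 'b'), (2, 'c')]}
print(prim_mst(g, 'a'))   # [('a', 'b', 3), ('b', 'c', 4), ('c', 'd', 2)], total weight 9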

14.(a) (ii) Explain single L-rotation and of the double RL-rotation with general form.
(6)
 The symmetric single left rotation or L-rotation is the mirror image of the single
R-rotation.
 It is performed after a new key is inserted into the right sub tree of the right child
of a tree whose root had the balance of -1 before the insertion.
 The double right-left rotation is the mirror image of the double LR-rotation and is
left for the exercises.
14.(b) Solve the all-pairs shortest paths problem for the digraph with the weight
matrix given below. (10)
15.(a) Apply backtracking technique to solve the following instance of the
subset sum problem. S = {1, 3, 4, 5} and d=11.(16)

 The state space tree for the given instance of the subset sum problem, S =
{1, 3, 4, 5} and d = 11, is built by deciding at each level whether or not to include the next element of S.
There is no solution to this instance of the problem, as the sketch below also confirms.
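A small backtracking sketch in Python (illustrative only, not the exam’s expected state-space-tree drawing; function names are assumptions) that explores the state space tree and confirms no subset of {1, 3, 4, 5} sums to 11:

def subset_sum(s, d):
    # Backtracking over the state space tree: at level i decide whether to include s[i].
    s = sorted(s)
    solutions = []

    def backtrack(i, chosen, total):
        if total == d:
            solutions.append(list(chosen))          # promising node that is a solution
            return
        if i == len(s) or total + sum(s[i:]) < d:
            return                                  # dead end: backtrack
        if total + s[i] <= d:
            backtrack(i + 1, chosen + [s[i]], total + s[i])   # include s[i]
        backtrack(i + 1, chosen, total)                        # exclude s[i]

    backtrack(0, [], 0)
    return solutions

print(subset_sum([1, 3, 4, 5], 11))   # []  -> no solution for d = 11
print(subset_sum([1, 3, 4, 5], 8))    # [[1, 3, 4], [3, 5]] for comparison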

15. (b) Solve the following instance of the Knapsack problem by the branch-and-bound
algorithm.

Item Weight Value

1 4 $40

2 7 $42

3 5 $25

4 3 $12

The Knapsack’s capacity W=10.


 We are given n objects or items and a Knapsack (a bag) in which a subset of the items is to be
placed. Each item i has a known weight wi and earns a profit (value) vi, and the Knapsack has a capacity W.
The objective is to fill the Knapsack so that the total profit earned is maximum while the total weight of the
chosen items does not exceed W.

 We order the items of a given instance in non-increasing order of their value-to-weight ratios:

v1/w1 ≥ v2/w2 ≥ … ≥ vn/wn


At each node of the state space tree we compute the upper bound
ub = v + (W-w)(vi+1 / wi+1),
where v and w are the total value and total weight of the items already selected.
Consider 4 items as

Item Weight Value Value/Weight

1 4 $40 10

2 7 $42 6

3 5 $25 5

4 3 $12 4

W=Knapsack capacity =10


We compute upper bound
Ub= v + (W-w) (vi+1 / wi+1)
Initially v=0, w=0 and vi+1= v1 =40
wi+1= w1=4

Capacity W=10
Ub = 0+(10-0) (40/4)
=(10) (10)
=100
=$100
Computation at node 0: No items have been selected yet. Both the total weight of the items already selected,
w, and their total value, v, are equal to 0.
w=0 v=0 vi+1 / wi+1 = v1 / w1 = 40/4 = 10
Capacity W=10
ub=v+(W-w) (vi+1/wi+1)
=0 + (10-0) (10)
=$100
Computation at node 1: It represents the left child of the root, i.e., the subset that includes item 1.
The total weight and value of the items already included are 4 and $40 respectively.
w=4 v=40 Capacity W=10
vi+1 /wi+1=next item to item 1
=v2/w2
=6
ub=v+(W-w) vi+1 /wi+1
=40+(10-4) 6
=40+36
=76
Computation at node 2: we assume item 1 is not selected.
v=0 w=0 , capacity W=10
vi+1 / wi+1 = v2/w2=6
ub=v+(W-w) (vi+1/wi+1)
=0+(10-0) (6)
=60
Computation at node 3: subset with item 1 and item 2 respectively.
Total weight w=4+7=11
Value v=40+42=82
vi+1 / wi+1 = v3/w3 = 5
capacity W=10
But the total weight w=11 exceeds the Knapsack capacity W=10. Hence node 3 is terminated immediately.

Computation at node 4: subset with item 1 and without item 2.


w=4 v=40 vi+1/wi+1=v3/w3=5
capacity W=10
ub=40+(10-4) (5)
=$ 70
Computation at node 5: subset with item 1 and without item 2 and now include item 3.
w=4+5=9
v=40+25=65
W=10
vi+1/wi+1 =v4/w4=4
ub=65+(10-9)4
=69
Computation at node 6: subset with item 1 and without item 2 & 3.
w=4
v=40
W=10
vi+1/wi+1= v4/w4=4
ub=40+(10-4)(4)=40+24=64

Computation at node 7: subset with items 1, 3 and 4, without item 2.


w=4+5+3=12, it exceeds Knapsack weight hence node 7 is immediately
terminated.

Computation at node 8: subset with item 1, item 3 and without item 2 and item 4.
w=9
v=65
W=10
vi+1/wi+1 = v5/w5 = 0
(there is no next item)
ub = 65 + (10-9)(0) = 65;
it is simply equal to the total value of these two items. Node 8 has the best value among the remaining live
nodes, so the optimal subset is {item 1, item 3} with total value $65.
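A hedged best-first branch-and-bound sketch in Python for the same instance (not part of the original key; function and variable names are illustrative). It uses the same upper bound ub = v + (W-w)(vi+1/wi+1) and reproduces the hand computation above:

import heapq

def knapsack_bb(values, weights, W):
    # Items are assumed already sorted by value/weight ratio in non-increasing order.
    n = len(values)

    def upper_bound(i, v, w):
        # ub = v + (W - w) * (v_{i+1} / w_{i+1}); just v if there is no next item.
        return v + (W - w) * (values[i] / weights[i]) if i < n else v

    best_value, best_set = 0, []
    # Max-heap keyed on -ub; each entry: (-ub, level, value, weight, chosen item numbers)
    heap = [(-upper_bound(0, 0, 0), 0, 0, 0, [])]
    while heap:
        neg_ub, i, v, w, chosen = heapq.heappop(heap)
        if -neg_ub <= best_value or i == n:
            continue                          # bound is not better than the best value found so far
        # Left child: include item i (only if it still fits).
        if w + weights[i] <= W:
            v1, w1 = v + values[i], w + weights[i]
            if v1 > best_value:
                best_value, best_set = v1, chosen + [i + 1]
            heapq.heappush(heap, (-upper_bound(i + 1, v1, w1), i + 1, v1, w1, chosen + [i + 1]))
        # Right child: exclude item i.
        heapq.heappush(heap, (-upper_bound(i + 1, v, w), i + 1, v, w, chosen))
    return best_value, best_set

values, weights = [40, 42, 25, 12], [4, 7, 5, 3]
print(knapsack_bb(values, weights, 10))   # (65, [1, 3]): items 1 and 3, total value $65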

Das könnte Ihnen auch gefallen