6. Find the number of comparisons made by the sequential search in the worst case and best case.
Input size: number of elements = n
Basic operation: key comparison A[i] == K
In the worst case (the key is absent, or matches only the last element), the comparison is executed once for every element, and this count is the same for all arrays of size n:
Cworst(n) = n
In the best case, the key matches the first element:
Cbest(n) = 1
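As a check, the counts above can be sketched in Python; the function name and the returned (index, comparisons) pair are illustrative, not from the notes:

```python
def sequential_search(a, key):
    """Return (index, comparisons made); index is -1 if key is absent."""
    comparisons = 0
    for i in range(len(a)):
        comparisons += 1              # basic operation: A[i] == K
        if a[i] == key:
            return i, comparisons     # best case: key in first cell -> 1
    return -1, comparisons            # worst case: n comparisons
```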
10. What additional items does the branch-and-bound technique require compared with backtracking?
Backtracking:
(i) The state-space tree is traversed using depth-first search.
(ii) It is suited to decision problems (deciding whether a solution exists).
(iii) If a dead end is reached during the search, the algorithm backtracks and tries another option.
(iv) Examples:
Knapsack
Sum of subsets
Branch-and-bound additionally requires:
(i) A way to compute, for every node of the state-space tree, a bound on the best value of the objective function obtainable from that node.
(ii) The value of the best solution seen so far, used to prune nodes whose bound is not better.
PART-B
11.(a).(i) Discuss briefly the sequence of steps in designing and analyzing an algorithm.(10)
The first important step in the problem-solving approach is to understand the problem: define the problem statement precisely, concentrating initially on what must be done rather than on how to do it.
An input to an algorithm specifies an instance of the problem the algorithm solves, so we need to specify the range of instances the algorithm must handle.
Ascertaining the capability of computational device:
Once we completely understand the problem, we need to ascertain the capabilities
of the computational device the algorithm is intended for.
The majority of algorithms are destined to be programmed for computers closely resembling the von Neumann machine, which can be modeled as a generic one-processor random-access machine (RAM).
An algorithm that solves the problem exactly is called an exact algorithm; one that solves the problem approximately is called an approximation algorithm.
Problems typically solved approximately include:
1. extracting square roots
2. solving nonlinear equations
3. evaluating definite integrals
An approximation algorithm may also be preferred when an available exact algorithm is unacceptably slow because of the intrinsic complexity of the problem.
11.(a).(ii) Explain some of the problem types used in the design of algorithms. (6)
The important problem types are:
a. Sorting
b. Searching
c. String processing
d. Graph problems
e. Combinatorial problems
f. Geometric problems
g. Numerical problems
Sorting
The sorting problem is to rearrange the items of a given list in nondecreasing order.
Searching
The searching problem deals with finding a given value, called a search key, in a given set.
Searching often has to be considered in conjunction with other operations, such as adding an item to and deleting an item from the data set.
In such situations, data structures and algorithms should be chosen to strike a balance among the requirements of each operation.
String processing
Graph problems
Combinatorial problems
Geometric problems
Numerical problems
(b).(i) Explain the general framework for analyzing the efficiency of algorithms. (8)
Analysis framework
Time efficiency indicates how fast an algorithm in question runs; space efficiency deals
with the extra space the algorithm requires.
Measuring an input’s size
An algorithm’s efficiency as a function of some parameter n indicating the algorithm’s
input size.
In most cases, selecting such a parameter is quite straightforward.
Units for measuring running time
Standard units of time are unsuitable because the result would depend on the speed of a particular computer, on the quality of the program implementing the algorithm and of the compiler used in generating the machine code, and on the difficulty of clocking the actual running time of the program.
Instead, we count the number of times each of the algorithm's operations is executed.
The basic operation of an algorithm is usually the most time-consuming operation in the algorithm's innermost loop.
Orders of growth
A difference in running times on small inputs is not what really distinguishes
efficient algorithms from inefficient ones.
Algorithms that require an exponential number of operations are practical for
solving only problems of very small sizes.
Worst-case, best-case, and average-case efficiencies
The worst-case efficiency of an algorithm is its efficiency for the worst-case
input of size n, which is an input of size n for which the algorithm runs the longest among
all possible inputs of that size.
The best-case efficiency of an algorithm is its efficiency for the best-case input of
size n, which is an input of size n for which the algorithm runs the fastest among all
possible inputs of that size.
Neither the worst-case analysis nor its best-case counterpart yields the necessary information about an algorithm's behavior on a "typical" or "random" input. This is the information that average-case efficiency provides.
There is also another type of efficiency, called amortized efficiency. It applies not to a single
run of an algorithm but rather to a sequence of operations performed on the same data
structure.
Informal introduction:
O(g(n)) is the set of all functions with a smaller or the same order of growth as g(n).
E.g. n ∈ O(n²), 100n+5 ∈ O(n²), ½n(n−1) ∈ O(n²).
The first two functions are linear and hence have a smaller order of growth than g(n) = n²; the last is quadratic and hence has the same order of growth as n².
The second notation, Ω(g(n)), stands for the set of all functions with a larger or the same order of growth as g(n).
E.g. n³ ∈ Ω(n²), ½n(n−1) ∈ Ω(n²), but 100n+5 ∉ Ω(n²).
Finally, Θ(g(n)) is the set of all functions that have the same order of growth as g(n).
O-notation:
A function t(n) is said to be in O(g(n)), denoted t(n) ∈ O(g(n)), if t(n) is bounded above by some constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some nonnegative integer n₀ such that
t(n) ≤ c·g(n) for all n ≥ n₀.
E.g. 100n+5 ≤ 100n+n (for all n ≥ 5) = 101n ≤ 101n², so c = 101, n₀ = 5.
Alternatively, 100n+5 ≤ 100n+5n (for all n ≥ 1) = 105n ≤ 105n², so c = 105, n₀ = 1.
Ω-notation:
A function t(n) is said to be in Ω(g(n)), denoted t(n) ∈ Ω(g(n)), if t(n) is bounded below by some positive constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some nonnegative integer n₀ such that
t(n) ≥ c·g(n) for all n ≥ n₀.
E.g. n³ ≥ n² for all n ≥ 0, so c = 1 and n₀ = 0.
Θ-notation:
A function t(n) is said to be in Θ(g(n)), denoted t(n) ∈ Θ(g(n)), if t(n) is bounded both above and below by some positive constant multiples of g(n) for all large n, i.e., if there exist some positive constants c₁, c₂ and some nonnegative integer n₀ such that
c₂·g(n) ≤ t(n) ≤ c₁·g(n) for all n ≥ n₀.
E.g. ½n(n−1) ∈ Θ(n²).
Upper bound: ½n(n−1) = ½n² − ½n ≤ ½n² for all n ≥ 0.
Lower bound: ½n(n−1) = ½n² − ½n ≥ ½n² − ½n·½n (for all n ≥ 2) = ¼n².
Hence c₂ = ¼, c₁ = ½, n₀ = 2.
12.(a).(i) Design a recursive algorithm to compute the factorial function F(n) = n! for an arbitrary nonnegative integer n and derive the recurrence relation. (10)
Algorithm F(n)
// compute n! Recursively
// Input: A nonnegative integer n
// Output: The value of n!
if n = 0 return 1
else return F(n − 1) * n
The basic operation is multiplication. Let M(n) denote the number of multiplications needed to compute F(n):
M(n) = M(n − 1) + 1 for n > 0, M(0) = 0,
since computing F(n − 1) takes M(n − 1) multiplications and one more multiplies the result by n. Solving by backward substitution gives M(n) = n.
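The recursive algorithm above transcribes directly into Python; the guard for negative n is an added assumption, not part of the pseudocode:

```python
def F(n):
    """Compute n! recursively for a nonnegative integer n."""
    if n < 0:
        raise ValueError("n must be nonnegative")   # added guard (assumption)
    if n == 0:
        return 1            # basis: 0! = 1
    return F(n - 1) * n     # one multiplication: M(n) = M(n-1) + 1
```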
12. (b) (i) Design a non recursive algorithm for computing the product of two n *
n matrices and find the time efficiency of the algorithm.(10)
Example 1: Consider the problem of finding the value of the largest element in a list of
n numbers.
Example 2: Consider the element uniqueness problem: check whether all the elements in a given array are distinct.
Algorithm UniqueElements(A[0..n−1])
// Determines whether all the elements in a given array are distinct
// Input: An array A[0..n−1] of real numbers
// Output: Returns "true" if all the elements in A are distinct
// and "false" otherwise
for i ← 0 to n − 2 do
    for j ← i + 1 to n − 1 do
        if A[i] = A[j] return false
return true
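A direct Python transcription of UniqueElements (the function name is illustrative):

```python
def unique_elements(a):
    """Brute-force element uniqueness check: True if all items are distinct."""
    n = len(a)
    for i in range(n - 1):
        for j in range(i + 1, n):
            if a[i] == a[j]:    # basic operation: comparison of two elements
                return False
    return True
```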
Example 3: Given two n- by- n matrices A and B, find the time efficiency of the definition-based
algorithm for computing their product C=AB.
Algorithm MatrixMultiplication(A[0..n−1, 0..n−1], B[0..n−1, 0..n−1])
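The body of the definition-based pseudocode is not reproduced in these notes; as an illustration, a standard implementation might look like the Python sketch below (names assumed). The innermost statement executes n times for each of the n² entries of C, so the algorithm performs n³ multiplications and is in Θ(n³).

```python
def matrix_multiply(A, B):
    """Definition-based product of two n-by-n matrices, C = A*B."""
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                # basic operation: one multiplication (and one addition)
                C[i][j] += A[i][k] * B[k][j]
    return C
```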
Features of an animation
Be consistent
Be interactive
Be clear and concise
Be forgiving to the user
Adapt to the knowledge level of the user
Emphasize the visual component
Keep the user interested
Incorporate both symbolic and iconic representations
Include algorithm analysis and comparisons with other algorithms for the same problem
Include execution history
Applications
13.(a) . Set up and solve a recurrence relation for the number of key comparisons made by
the above pseudo code
Assuming n is a power of 2, the recurrence relation for the number of key comparisons C(n) is:
C(n) = C(n/2) + C(n/2) + Cmerge(n) for n > 1, with C(1) = 0,
where Cmerge(n) is the number of key comparisons done in merging.
In the worst case, each step of the merge performs exactly one comparison, for a total of n − 1:
Cworst(n) = 2·Cworst(n/2) + n − 1 for n > 1, Cworst(1) = 0.
Using the Master theorem with T(n) = a·T(n/b) + f(n):
Here a = 2, b = 2, and f(n) = n − 1 ∈ Θ(n¹), so d = 1.
Since a = b^d (2 = 2¹), Cworst(n) ∈ Θ(n^d log n) = Θ(n log n).
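The recurrence can be checked numerically with a small Python sketch; the exact solution by backward substitution, C(n) = n·log₂n − n + 1, agrees with the Θ(n log n) class given by the Master theorem:

```python
import math

def c_worst(n):
    """Worst-case key comparisons of mergesort for n a power of 2:
    C(n) = 2*C(n/2) + n - 1, with C(1) = 0."""
    if n == 1:
        return 0
    return 2 * c_worst(n // 2) + n - 1

# exact closed form: C(n) = n*log2(n) - n + 1
for n in (1, 2, 4, 8, 16, 1024):
    assert c_worst(n) == n * int(math.log2(n)) - n + 1
```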
14.(a).(i) Construct a minimum spanning tree using Kruskal's algorithm with your own example. (10)
The following problem arises naturally in several practical situations: given n points, connect them in the cheapest possible way so that there will be a path between every pair of points. We can represent the points by vertices of a graph, possible connections by the graph's edges, and the connection costs by the edge weights. Then the question can be posed as the minimum spanning tree problem, defined formally as follows.
DEFINITION A spanning tree of a connected graph is its connected acyclic subgraph (i.e., a tree) that contains all the vertices of the graph. A minimum spanning tree of a weighted connected graph is its spanning tree of the smallest weight, where the weight of a tree is defined as the sum of the weights on all its edges. The minimum spanning tree problem is the problem of finding a minimum spanning tree for a given weighted connected graph.
If we were to try an exhaustive-search approach to constructing a minimum spanning tree, we would face two serious obstacles. First, the number of spanning trees grows exponentially with the graph size (at least for dense graphs). Second, generating all spanning trees for a given graph is not easy; in fact, it is more difficult than finding a minimum spanning tree for a weighted graph by using one of several efficient algorithms available for this problem.
Prim’s algorithm constructs a minimum spanning tree through a sequence of expanding subtrees. The initial
subtree in such a sequence consists of a single vertex selected arbitrarily from the set V of the graph's vertices.
On each iteration, we expand the current tree in the greedy manner by simply attaching to it the nearest vertex
not in that tree. The algorithm stops after all the graph’s vertices have been included in the tree being
constructed. Since the algorithm expands a tree by exactly one vertex on each of its iterations, the total number
of such iterations is n-1, where n is the number of vertices in the graph. The tree generated by the algorithm is
obtained as the set of edges used for the tree expansions.
Here is the pseudocode of this algorithm.
ALGORITHM Prim(G)
// Prim’s algorithm for constructing a minimum spanning tree
// Input: A weighted connected graph G=( V,E)
// Output: ET, the set of edges composing a minimum spanning tree of G
VT ← {v0}   // the set of tree vertices can be initialized with any vertex
ET ← ∅
for i ← 1 to |V| − 1 do
    find a minimum-weight edge e* = (v*, u*) among all the edges (v, u)
    such that v is in VT and u is in V − VT
    VT ← VT ∪ {u*}
    ET ← ET ∪ {e*}
return ET
The nature of Prim’s algorithm makes it necessary to provide each vertex not in the current tree with the
information about the shortest edge connecting the vertex to a tree vertex. We can provide such information by
attaching two labels to a vertex: the name of the nearest tree vertex and the length of the corresponding edge.
Vertices that are not adjacent to any of the tree vertices can be given the infinite label indicating their “infinite”
distance to the tree vertices and a null label for the name of the nearest tree vertex. (Alternatively, we can split
the vertices that are not in the tree into two sets, the “fringe” and the “unseen”. The fringe contains only the
vertices that are not in the tree but are adjacent to at least one tree vertex. These are the candidates from which
the next tree vertex is selected. The unseen vertices are all the other vertices of the graph, called “unseen”
because they are yet to be affected by the algorithm). With such labels, finding the next vertex to be added to the
current tree T=(VT,ET) becomes a simple task of finding a vertex with the smallest distance label in set V – VT.
Ties can be broken arbitrarily.
After we have identified a vertex u* to be added to the tree, we need to perform two operations: move u* from the set V − VT to the set of tree vertices VT, and, for each remaining vertex u in V − VT that is adjacent to u*, update its labels if the edge (u*, u) is shorter than the length recorded in u's current distance label.
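The steps above can be sketched in Python, using a priority queue for the fringe edges; the graph representation and the three-vertex example are assumptions for illustration, not from the notes:

```python
import heapq

def prim(graph, start):
    """graph: dict mapping vertex -> list of (neighbor, weight) pairs.
    Returns the MST edges as (u, v) tuples, assuming a connected graph."""
    in_tree = {start}                                  # VT
    edges = []                                         # ET
    fringe = [(w, start, v) for v, w in graph[start]]  # (weight, tree vertex, fringe vertex)
    heapq.heapify(fringe)
    while fringe and len(in_tree) < len(graph):
        w, u, v = heapq.heappop(fringe)    # minimum-weight fringe edge
        if v in in_tree:
            continue                       # stale entry: v was added earlier
        in_tree.add(v)
        edges.append((u, v))
        for x, wx in graph[v]:             # extend the fringe from v
            if x not in in_tree:
                heapq.heappush(fringe, (wx, v, x))
    return edges

# Assumed example: triangle a-b (1), b-c (2), a-c (3); the MST drops edge a-c
g = {'a': [('b', 1), ('c', 3)],
     'b': [('a', 1), ('c', 2)],
     'c': [('a', 3), ('b', 2)]}
```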
14.(a).(ii) Explain the single L-rotation and the double RL-rotation with their general forms. (6)
The symmetric single left rotation or L-rotation is the mirror image of the single
R-rotation.
It is performed after a new key is inserted into the right subtree of the right child of a tree whose root had a balance factor of −1 before the insertion.
The double right-left rotation is the mirror image of the double LR-rotation and is
left for the exercises.
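The pointer manipulations behind these rotations can be sketched in Python on a bare node type; the names, and the omission of balance-factor bookkeeping, are simplifying assumptions:

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def rotate_left(root):
    """Single L-rotation: the right child becomes the new subtree root."""
    pivot = root.right
    root.right = pivot.left       # pivot's left subtree is re-attached
    pivot.left = root
    return pivot

def rotate_right(root):
    """Single R-rotation (mirror image of the L-rotation)."""
    pivot = root.left
    root.left = pivot.right
    pivot.right = root
    return pivot

def rotate_right_left(root):
    """Double RL-rotation: R-rotation of the right child, then an L-rotation."""
    root.right = rotate_right(root.right)
    return rotate_left(root)
```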
14.(b) Solve the all-pairs shortest-path problem for the digraph with the weight matrix given below. (10)
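The weight matrix itself is not reproduced in these notes. The standard method for this problem is Floyd's algorithm; below is a Python sketch with an assumed 4-vertex example matrix (INF marks a missing edge):

```python
INF = float('inf')

def floyd(W):
    """Floyd's algorithm: matrix of shortest-path lengths between all pairs."""
    n = len(W)
    D = [row[:] for row in W]          # work on a copy of the weight matrix
    for k in range(n):                 # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D

# Assumed example digraph, not the question's matrix:
W = [[0,   3,   INF, 7],
     [8,   0,   2,   INF],
     [5,   INF, 0,   1],
     [2,   INF, INF, 0]]
```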
15.(a) Apply backtracking technique to solve the following instance of the
subset sum problem. S = {1, 3, 4, 5} and d=11.(16)
The state space tree for the given instance of the subset sum problem S =
{1, 3, 4, 5} and d=11
There is no solution to this instance of the problem.
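The search can be sketched in Python as a backtracking routine over the sorted set; the pruning tests mirror the usual dead-end conditions (running total exceeds d, or even taking all remaining elements falls short), and the names are illustrative:

```python
def subset_sum(s, d):
    """Backtracking: return all subsets of s that sum exactly to d."""
    s = sorted(s)
    solutions = []

    def backtrack(i, chosen, total, remaining):
        if total == d:                     # promising node turned solution
            solutions.append(chosen[:])
            return
        if i == len(s):
            return
        if total + remaining < d:          # dead end: the rest cannot reach d
            return
        if total + s[i] <= d:              # left branch: include s[i]
            chosen.append(s[i])
            backtrack(i + 1, chosen, total + s[i], remaining - s[i])
            chosen.pop()
        backtrack(i + 1, chosen, total, remaining - s[i])   # right branch: exclude s[i]

    backtrack(0, [], 0, sum(s))
    return solutions
```

For S = {1, 3, 4, 5} and d = 11 the routine returns an empty list, confirming that this instance has no solution.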
15. (b) Solve the following instance of the Knapsack problem by the branch-and-bound
algorithm.
Item    Weight    Value
1       4         $40
2       7         $42
3       5         $25
4       3         $12
We order the items of the given instance in descending order by their value-to-weight ratios:
Item    Weight    Value    Value/Weight
1       4         $40      10
2       7         $42      6
3       5         $25      5
4       3         $12      4
Capacity W = 10
ub = 0 + (10 − 0)(40/4) = (10)(10) = $100
Computation at node 0: No items have been selected. Both the total weight of the items already selected, w, and their total value, v, are equal to 0.
w = 0, v = 0, capacity W = 10, v_{i+1}/w_{i+1} = v1/w1 = 40/4 = 10
ub = v + (W − w)(v_{i+1}/w_{i+1}) = 0 + (10 − 0)(10) = $100
Computation at node 1: It is the left child of the root and represents the subset that includes item 1. The total weight and value of the items already included are 4 and $40 respectively.
w = 4, v = 40, capacity W = 10
v_{i+1}/w_{i+1} = ratio of the next item = v2/w2 = 6
ub = v + (W − w)(v_{i+1}/w_{i+1}) = 40 + (10 − 4)(6) = 40 + 36 = $76
Computation at node 2: It is the right child of the root; here item 1 is not selected.
w = 0, v = 0, capacity W = 10
v_{i+1}/w_{i+1} = v2/w2 = 6
ub = v + (W − w)(v_{i+1}/w_{i+1}) = 0 + (10 − 0)(6) = $60
Computation at node 3: It represents the subset containing both item 1 and item 2.
Total weight w = 4 + 7 = 11, value v = 40 + 42 = $82, capacity W = 10.
The total weight w exceeds the knapsack capacity, so node 3 is terminated immediately without computing a bound.
Computation at node 8: It represents the subset with item 1 and item 3, without item 2 and item 4.
w = 4 + 5 = 9, v = 40 + 25 = $65, capacity W = 10
v_{i+1}/w_{i+1} = 0, since there is no next item
ub = 65 + (10 − 9)(0) = $65,
which is simply equal to the total value of these two items.
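The computation above can be sketched as a best-first branch-and-bound in Python, using the same upper-bound function ub = v + (W − w)(v_{i+1}/w_{i+1}); the priority-queue formulation is an assumption, since the notes do not fix a traversal order:

```python
import heapq

def knapsack_bb(items, W):
    """Best-first branch-and-bound for the 0/1 knapsack problem.
    items: list of (weight, value) pairs, already sorted by value-to-weight
    ratio in descending order. Returns the maximal total value."""
    def ub(i, w, v):
        # upper bound: v + (W - w) * (v_{i+1} / w_{i+1}), or 0 ratio
        # when no items remain (as at node 8 above)
        if i < len(items):
            return v + (W - w) * items[i][1] / items[i][0]
        return v

    best = 0
    # heapq is a min-heap, so bounds are negated to pop the largest first
    heap = [(-ub(0, 0, 0), 0, 0, 0)]      # (-bound, next item i, weight, value)
    while heap:
        neg_bound, i, w, v = heapq.heappop(heap)
        if -neg_bound <= best or i == len(items):
            continue                       # pruned: bound cannot beat best
        wi, vi = items[i]
        if w + wi <= W:                    # left child: include item i
            best = max(best, v + vi)
            heapq.heappush(heap, (-ub(i + 1, w + wi, v + vi), i + 1, w + wi, v + vi))
        heapq.heappush(heap, (-ub(i + 1, w, v), i + 1, w, v))   # right child: exclude item i
    return best

# The instance above: items 1-4 in ratio order, capacity W = 10
items = [(4, 40), (7, 42), (5, 25), (3, 12)]
```

Running it on this instance returns $65, matching node 8 (items 1 and 3) as the optimal subset.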