SYLLABUS
UNIT I
Algorithms, Performance Analysis - Time Complexity and Space Complexity, Asymptotic
Notation - Big Oh, Omega and Theta Notations, Complexity Analysis Examples. Data Structures -
Linear and Non-Linear Data Structures, ADT Concept, Linear List ADT, Array Representation,
Linked Representation, Vector Representation, Singly Linked Lists - Insertion, Deletion, Search
Operations, Doubly Linked Lists - Insertion, Deletion Operations, Circular Lists. Representation
of Single, Two-Dimensional Arrays, Sparse Matrices and Their Representation.
UNIT II
Stack and Queue ADTs, Array and Linked List Representations, Infix to Postfix Conversion
Using Stack, Implementation of Recursion, Circular Queue - Insertion and Deletion, Dequeue
ADT, Array and Linked List Representations, Priority Queue ADT, Implementation Using Heaps,
Insertion into a Max Heap, Deletion from a Max Heap, java.util Package - ArrayList, LinkedList,
Vector Classes, Stacks and Queues in java.util, Iterators in java.util.
UNIT III
Searching - Linear and Binary Search Methods, Hashing - Hash Functions, Collision Resolution
Methods - Open Addressing, Chaining, Hashing in java.util - HashMap, HashSet, Hashtable.
Sorting - Bubble Sort, Insertion Sort, Quick Sort, Merge Sort, Heap Sort, Radix Sort,
Comparison of Sorting Methods.
UNIT IV
Trees - Ordinary and Binary Trees Terminology, Properties of Binary Trees, Binary Tree ADT,
Representations, Recursive and Non-Recursive Traversals, Java Code for Traversals, Threaded
Binary Trees. Graphs - Graphs Terminology, Graph ADT, Representations, Graph
Traversals/Search Methods - DFS and BFS, Java Code for Graph Traversals, Applications of
Graphs - Minimum Cost Spanning Tree Using Kruskal's Algorithm, Dijkstra's Algorithm for the
Single Source Shortest Path Problem.
UNIT V
Search Trees - Binary Search Tree ADT, Insertion, Deletion and Searching Operations, Balanced
Search Trees, AVL Trees - Definition and Examples Only, Red-Black Trees - Definition and
Examples Only, B-Trees - Definition, Insertion and Searching Operations, Trees in java.util -
TreeSet, TreeMap Classes, Tries (Examples Only), Comparison of Search Trees.
Text Compression - Huffman Coding and Decoding, Pattern Matching - KMP Algorithm.
TEXTBOOK:
1.
(R13)
No. | Topic in Syllabus | Lecture No. | Suggested Books | Remarks

Unit-1
1 | Introduction: Introduction to Data Structures | L1 | T1-CH1 |
2 | Algorithms, Performance Analysis | L2 | T1-CH2 |
3 | Asymptotic Notation | L3 | T1-CH3 |
4 | Data Structures | L4 | T1-CH5 |
5 | Representations of Data Structures: Array Representation, Linked Representation, Vector Representation | L5 | T1-CH5 |
6 | Singly Linked Lists: Insertion, Deletion, Search Operations | L6 | T1-CH6 |
7 | Doubly Linked Lists: Insertion, Deletion, Search Operations | L7 | T1-CH6 |
8 | Array Representation: Single, Two-Dimensional Arrays | L8 | T1-CH8 |
9 | Sparse Matrices and Their Representation | L9 | T1-CH8 |
10 | Revision of Unit 1 | L10 | |

Unit-2
11 | Stack ADT | L11 | T1-CH9 |
12 | Queue ADT | L12 | T1-CH9 |
13 | Representations of Stacks and Queues | L13 | T1-CH9 |
14 | Conversions: Infix to Postfix Conversion Using Stacks | L14 | T1-CH9 |
15 | Implementation of Recursion | L15 | T2-CH5 |
16 | Implementation of Recursion (contd.) | L16 | T2-CH5 |
17 | Dequeue ADT | L17 | T2-CH8 |
18 | Priority Queue ADT: Implementation Using Heaps | L18 | T1-CH13, T24 |
19 | Heaps | L19 | T1-CH13 |
20 | Revision of Unit 2 | L20 | |

Unit-3
21 | Searching | L21 | T2-CH3 |
22 | Hashing: Introduction to Hashing and Hash Functions | L22 | T1-CH11 |
23 | Collision Resolution Methods: Open Addressing, Chaining | L23 | T1-CH11 |
24 | Open Addressing | L24 | T1-CH11 |
25 | Chaining | L25 | T1-CH11 |
26 | Hashing in java.util: HashMap, HashSet, Hashtable | L26 | T1-CH11 |
27 | Sorting Techniques | L27 | T2-CH9 |
28 | Sorting Techniques (contd.) | L28 | T2-CH9 |
29 | Comparison of Sorting Methods | L29 | T2-CH9 |
30 | Revision of Unit 3 | L30 | |

Unit-4
31 | Trees: Introduction to Trees | L31 | T1-CH12 |
32 | Ordinary and Binary Trees Terminology: Properties of Binary Trees, Binary Tree ADT | L32 | T1-CH12 |
33 | Representations of Binary Trees | L33 | T1-CH12 |
34 | Traversals of Binary Trees | L34 | T1-CH12 |
35 | Traversals of Binary Trees (contd.) | L35 | T1-CH12 |
36 | Threaded Binary Trees | L36 | T1-CH12 |
37 | Graph ADT, Representations: Graph Traversals | L37 | T1-CH17 |
38 | Search Methods | L38 | T1-CH17 |
39 | Search Methods (contd.) | L39 | T1-CH17 |
40 | Revision of Unit 4 | L40 | |

Unit-5
41 | Search Trees: Introduction to Binary Search Tree | L41 | T1-CH15 |
42 | Binary Search Tree ADT: Insertion, Deletion and Searching Operations | L42 | T1-CH15 |
43 | Balanced Search Trees: AVL Trees, Definition and Examples | L43 | T1-CH16 |
44 | Red-Black Trees: Definition and Examples | L44 | T1-CH16 |
45 | B-Trees: Definition, Insertion and Searching Operations | L45 | T1-CH16 |
46 | Trees in java.util | L46 | T1-CH16 |
47 | Tries: Compressed Tries | L47 | T2-CH13 |
48 | Comparison of Search Trees | L48 | T1-CH16 |
49 | Text Compression | L49 | T2-CH13 |
50 | Revision of Unit 5 | L50 | |
51 | Splay Trees | L51 | Lecturer Handouts |
52 | Advanced Data Structures | L52 | Lecturer Handouts |
Assignment Questions
Unit 1
1.
2.
3.
4.
5.
6.
7.
Unit 2
1.
2.
3.
4.
5.
6.
7.
Unit 3
1.
2.
3.
4.
5.
6.
7.
Unit 4
1. Write about Binary Trees
N@ru Mtech Cse(1-1) Study Material
(R13)
2.
3.
4.
5.
6. Shortest Path
Unit 5
1.
2.
3.
4.
5.
6.
UNIT 1
Contents:
1. Explain in detail about an Algorithm
An algorithm is a finite set of instructions that, if followed, accomplishes a particular task; in
other words, an algorithm is a step-by-step process for solving a particular problem. An
algorithm must satisfy the following criteria.
Input: zero or more quantities are supplied externally.
Output: at least one quantity is produced.
Finiteness: the algorithm should be finite, that is, it should terminate after a finite number of
steps.
2.1 Space Complexity:
To compute the space complexity we use two factors: a constant part and an instance
characteristic part. The space requirement S(P) can be given as
S(P) = c + Sp
where c is the constant (fixed) part: the memory taken by the instructions, simple variables,
identifiers, inputs and outputs; Sp is the variable part that depends on the instance
characteristics, such as the input size.
The term Space Complexity is misused for Auxiliary Space at many places.
Auxiliary Space is the extra space or temporary space used by an algorithm.
Space Complexity of an algorithm is total space taken by the algorithm with respect to the input
size. Space complexity includes both Auxiliary space and space used by input.
For example, if we want to compare standard sorting algorithms on the basis of space, then
Auxiliary Space is a better criterion than Space Complexity. Merge Sort uses O(n) auxiliary
space, while Insertion Sort and Heap Sort use O(1) auxiliary space; the space complexity of all
three algorithms is O(n).
Space complexity is a measure of the amount of working storage an algorithm needs. That means
how much memory, in the worst case, is needed at any point in the algorithm. As with time
complexity, we're mostly concerned with how the space needs grow, in big-Oh terms, as the size
N of the input problem grows.
For example,
int sum(int x, int y, int z) {
int r = x + y + z;
return r;
}
requires 3 units of space for the parameters and 1 for the local variable, and this never changes,
so this is O(1).
int sum(int a[], int n) {
int r = 0;
for (int i = 0; i < n; ++i) {
r += a[i];
}
return r;
}
requires n units for a, plus one unit each for n, r and i, so it is O(n).
2.2 Time Complexity:
The time complexity of an algorithm is the amount of computer time required by an algorithm
to run to completion.
There are two types of computing time: compile time and run time. The time complexity is
generally computed using run time, i.e., execution time.
It is difficult to compute the time complexity in terms of physically clocked time. For instance,
in a multiuser system, execution time depends on many factors such as system load, the number
of other programs running, and the instruction set used.
The time complexity is therefore given in terms of frequency count.
Frequency count is a count denoting the number of times a statement is executed.
Ex:
Algorithm sum(A, B, C, m, n)            frequency    total
  for i := 1 to m do                     m+1          m+1
    for j := 1 to n do                   m(n+1)       m(n+1)
      C[i, j] := A[i, j] + B[i, j]       mn           mn
                                                      2mn + 2m + 1
So the time complexity of the above algorithm is O(mn), obtained by neglecting the constants
and lower-order terms.
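The frequency counts can also be checked mechanically. The following sketch (class and method names are ours, not from the text) performs the matrix addition while counting every loop test and every assignment; for m = 3, n = 4 it reports 2mn + 2m + 1 = 31 steps.

```java
// Frequency-count sketch for the matrix-addition algorithm above.
public class FrequencyCount {
    public static long addAndCount(int[][] a, int[][] b, int[][] c, int m, int n) {
        long steps = 0;
        for (int i = 0; i < m; i++) {
            steps++;                          // outer loop test: m times here + 1 exit below = m+1
            for (int j = 0; j < n; j++) {
                steps++;                      // inner loop test: m*n times here + m exits = m(n+1)
                c[i][j] = a[i][j] + b[i][j];
                steps++;                      // assignment: executed m*n times
            }
            steps++;                          // inner loop exit test, once per outer pass
        }
        steps++;                              // outer loop exit test
        return steps;                         // total = 2mn + 2m + 1
    }

    public static void main(String[] args) {
        int m = 3, n = 4;
        int[][] a = new int[m][n], b = new int[m][n], c = new int[m][n];
        System.out.println(addAndCount(a, b, c, m, n)); // prints 31
    }
}
```

Running it for other sizes (say m = 2, n = 5, giving 25) confirms the closed form rather than any one instance.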
3) Explain in detail about Asymptotic Notations?
Measuring efficiency depends upon the following:
When an algorithm is applied to a large data set, it should finish relatively quickly.
Speed and memory usage.
Measuring speed: we measure algorithm speed in terms of operations relative to the input size.
Big-O Notation
Definition: Let f(x) and g(x) be two functions. We say that f(x) = O(g(x)) if there exist
constants c > 0 and x0 > 0 such that f(x) <= c*g(x) for all x >= x0.
f(x) is asymptotically less than or equal to g(x).
Big-O gives an asymptotic upper bound.
[Figure: for x >= x0, the curve f(x) lies below c*g(x)]
Big-Omega Notation:
Definition: Let f(x) and g(x) be two functions. We say that f(x) = Ω(g(x)) if there exist
constants c > 0 and x0 > 0 such that f(x) >= c*g(x) for all x >= x0.
f(x) is asymptotically greater than or equal to g(x).
[Figure: for x >= x0, the curve f(x) lies above c*g(x)]
Big-Theta Notation:
Definition: Let f(x) and g(x) be two functions. We say that f(x) = Θ(g(x)) if there exist
constants c1, c2 > 0 and x0 > 0 such that for every x >= x0 we have c1*g(x) <= f(x) <= c2*g(x).
f(x) is asymptotically equal to g(x).
f(x) is bounded above and below by g(x).
[Figure: for x >= x0, f(x) lies between c1*g(x) and c2*g(x)]
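The definitions above can be made concrete by exhibiting witnesses. A minimal sketch (the constants c = 4 and x0 = 2 are one illustrative choice, not the only valid one): it checks numerically that f(x) = 3x + 2 satisfies f(x) <= 4x for all tested x >= 2, so f(x) = O(x).

```java
// Numerical check of the Big-O definition for f(x) = 3x + 2, g(x) = x,
// with witness constants c = 4 and x0 = 2.
public class BigOCheck {
    static long f(long x) { return 3 * x + 2; }
    static long g(long x) { return x; }

    public static void main(String[] args) {
        long c = 4, x0 = 2;
        boolean holds = true;
        for (long x = x0; x <= 1_000_000; x++) {
            if (f(x) > c * g(x)) { holds = false; break; } // a counterexample would falsify O(g)
        }
        System.out.println("f(x) <= " + c + "*g(x) for all tested x >= " + x0 + ": " + holds);
    }
}
```

Note that f(1) = 5 > 4*g(1) = 4; that is why the definition only demands the inequality beyond some threshold x0.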
List ADT
List is a collection of elements arranged in sequential manner. Hence a list is a sequence of zero
or more elements of given type, the form of list is
a1, a2, a3, ..., an (n >= 0)
where
n - Number of elements in the list
a1 - First element in the list
an- Last element in the list
Lists can be represented in two ways: array (sequential) representation and linked
representation. Typical operations on a list include:
erase(index) - delete the element at the given index; elements with a higher index have their
index reduced by 1.
insert(index, x) - insert x at the given index; elements with index >= index have their index
increased by 1.
We know that in the array implementation of lists, the sequential organization is provided
implicitly by its index. We use the index for accessing and manipulation of array elements.
One major problem with arrays is that the size of an array must be specified precisely at the
beginning, which may be a difficult task in many practical applications. Other problems are due
to the difficulty of insertion and deletion at the beginning of the array, since each such
operation takes O(n) time.
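The O(n) cost comes from shifting. A minimal sketch (the method name and the fixed capacity are illustrative, not from the text): inserting at index 0 of an array-backed list moves every existing element one slot to the right before the new value can be placed.

```java
// Insertion at index 0 of an array-backed list: all existing elements
// shift one slot right, so the operation costs O(n).
public class ArrayInsertDemo {
    static void insertAtFront(int[] a, int count, int x) {
        for (int i = count; i > 0; i--)   // shift count elements right
            a[i] = a[i - 1];
        a[0] = x;                          // place the new element
    }

    public static void main(String[] args) {
        int[] a = new int[5];
        a[0] = 20; a[1] = 30; a[2] = 40;   // list (20, 30, 40)
        insertAtFront(a, 3, 10);           // list becomes (10, 20, 30, 40)
        System.out.println(a[0] + " " + a[1] + " " + a[2] + " " + a[3]); // prints 10 20 30 40
    }
}
```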
A completely different way to represent a list is to make each item in the list part of a
structure that also contains a link to the structure containing the next item as shown in the
following fig, this type of list is called a linked list, because it is a list whose order is given by
links from one to the next.
Types of Linked Lists:
Basically there are four types of linked lists:
single linked list
doubly linked list
circular linked list
circular doubly linked list
Single Linked List: the simplest kind of linked list is a single linked list, which has one link per
node. This link points to the next node in the list, or to a null value if it is the final node.
Doubly Linked List: a more sophisticated kind of linked list is a doubly linked list or
two way linked list. Each node has two links:
One of them points to previous node or points to a null value if it is the first node.
The other points to the next node or points to a null value if it is a final node.
Circular Linked List: in this, the first and the final nodes are linked together. A singly circular
linked list has one link per node, just like an ordinary singly linked list, except that the next
link of the last node points to the first node.
Circular Doubly Linked List: in this, each node has two links, similar to a doubly
linked list, except that the previous link of the first node points to the last node and the next
link of the last node points to the first node.
Advantages:
Memory is allocated dynamically.
Insertion and deletion is easy.
Data is deleted physically.
A list or sequence is an abstract data type that implements a finite ordered collection of values,
where the same value may occur more than once. An instance of a list is a computer
representation of the mathematical concept of a finite sequence; the (potentially) infinite analog
of a list is a stream. Lists are a basic example of containers, as they contain other values. Each
instance of a value in the list is usually called an item, entry, or element of the list; if the same
value occurs multiple times, each occurrence is considered a distinct item. Lists are distinguished
from arrays in that lists only allow sequential access, while arrays allow random access.
The name list is also used for several concrete data structures that can be used to implement
abstract lists, especially linked lists.
The so-called static list structures allow only inspection and enumeration of the values. A
mutable or dynamic list may allow items to be inserted, replaced, or deleted during the list's
existence.
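The sequential-versus-random access distinction can be shown in a few lines. In this sketch (class and method names are ours, for illustration), reading index 2 of an array is a single step, while reading position 2 of a linked list means following two links from the head.

```java
// Random access (array: one step) versus sequential access
// (linked list: walk i links from the head).
public class AccessDemo {
    static class Node { int data; Node next; Node(int d) { data = d; } }

    // O(i): must follow i links from the head to reach position i
    static int getFromList(Node head, int i) {
        Node p = head;
        for (int k = 0; k < i; k++) p = p.next;
        return p.data;
    }

    public static void main(String[] args) {
        int[] arr = {10, 20, 30};
        Node head = new Node(10);
        head.next = new Node(20);
        head.next.next = new Node(30);
        System.out.println(arr[2]);               // O(1) random access, prints 30
        System.out.println(getFromList(head, 2)); // O(i) sequential access, prints 30
    }
}
```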
Linear List Array Representation
L = (a, b, c, d, e)
Store element i of list in element[i].
public Object deleteFirst() // delete first node and return its data
{
Object tmp = list[head].data;
if( list[head].next == -1 ) // if the list contains one node,
head = -1; // make list empty.
else
head = list[head].next;
count--; // update count
return tmp;
}
public Object deleteAfter(Object x)
{ int p = find(x);
if( p == -1 || list[p].next == -1 )
{ System.out.println("No deletion");
return null;
}
int q = list[p].next;
Object tmp = list[q].data;
list[p].next = list[q].next;
count--;
return tmp;
}
public void display()
{ int p = head;
System.out.print("\nList: [ " );
while( p != -1)
{ System.out.print(list[p].data + " "); // print data
p = list[p].next; // advance to next node
}
System.out.println("]\n");//
}
public boolean isEmpty()
{ if(count == 0) return true;
else return false;
}
public int size()
{ return count; }
}
Testing the ArrayList Class
class ArrayListDemo
{
public static void main(String[] args)
{
ArrayList linkedList = new ArrayList(10);
linkedList.initializeList();
linkedList.createList(4); // create 4 nodes
linkedList.display(); // print the list
System.out.print("InsertFirst 55:");
linkedList.insertFirst(55);
linkedList.display();
System.out.print("Insert 66 after 33:");
linkedList.insertAfter(66, 33); // insert 66 after 33
linkedList.display();
Object item = linkedList.deleteFirst(); System.out.println("Deleted node: " + item);
linkedList.display();
System.out.print("InsertFirst 77:");
linkedList.insertFirst(77);
linkedList.display();
item = linkedList.deleteAfter(22); // delete node after node 22
System.out.println("Deleted node: " + item);
linkedList.display();
System.out.println("size(): " + linkedList.size());
}
}
b) Linked Representation
Let L = (e1, e2, ..., en)
Each element ei is represented in a separate node
Each node has exactly one link field that is used to locate the next element in the
linear list
The last node, en, has no node to link to and so its link field is NULL.
This structure is also called a chain.
A LinkedList Class
class LinkedList implements List
{
class Node
{ Object data; // data item
Node next; // refers to next node in the list
Node( Object d ) // constructor
{ data = d; } // next is automatically set to null
}
Node head; // head refers to first node
Node p; // p refers to current node
int count; // current number of nodes
public void createList(int n) // create 'n' nodes
{
p = new Node(11); // create first node
head = p; // assign mem. address of 'p' to 'head'
for( int i = 1; i < n; i++ ) // create 'n-1' nodes
p = p.next = new Node(11 + 11*i);
count = n;
}
public void insertFirst(Object item) // insert at the beginning of list
{
p = new Node(item); // create new node
p.next = head; // new node refers to old head
head = p; // new head refers to new node
count++;
}
public void insertAfter(Object item,Object key)
{
p = find(key); // get location of key item
if( p == null )
System.out.println(key + " key is not found");
else
{ Node q = new Node(item); // create new node
q.next = p.next; // new node next refers to p.next
p.next = q; // p.next refers to new node
count++;
}
}
list.displayList();
list.insertAfter(66, 33); // insert 66 after 33
list.displayList();
Object item = list.deleteFirst(); // delete first node
if( item != null )
{ System.out.println("deleteFirst(): " + item);
list.displayList();
}
item = list.deleteAfter(22); // delete a node after node(22)
if( item != null )
{ System.out.println("deleteAfter(22): " + item);
list.displayList();
}
System.out.println("size(): " + list.size());
}
}
6) Explain in detail about Single Linked List with a java program?
The simplest kind of linked list is a single linked list, which has one link per node. This link
points to the next node in the list or to a null value if it is the final node.
A linked list is a data structure consisting of a group of nodes which together represent a
sequence. Under the simplest form, each node is composed of a datum and a reference (in other
words, a link) to the next node in the sequence; more complex variants add additional links. This
structure allows for efficient insertion or removal of elements from any position in the sequence.
Linked lists are among the simplest and most common data structures. They can be used to
implement several other common abstract data types, including lists (the abstract data type),
stacks, queues, associative arrays, and S-expressions, though it is not uncommon to implement
the other data structures directly without using a list as the basis of implementation.
The principal benefit of a linked list over a conventional array is that the list elements can easily
be inserted or removed without reallocation or reorganization of the entire structure because the
data items need not be stored contiguously in memory or on disk. Linked lists allow insertion
and removal of nodes at any point in the list, and can do so with a constant number of operations
if the link previous to the link being added or removed is maintained during list traversal.
On the other hand, simple linked lists by themselves do not allow random access to the data, or
any form of efficient indexing. Thus, many basic operations such as obtaining the last node of
the list (assuming that the last node is not maintained as separate node reference in the list
structure), or finding a node that contains a given datum, or locating the place where a new node
should be inserted may require scanning most or all of the list elements.
Program:
import java.io.*;

class Node
{
int data; // data item
Node next; // reference to the next node in the list
}

public class SinglyLinkedList
{
Node start; // first node of the list
int size; // current number of nodes

public SinglyLinkedList()
{
start = null;
size = 0;
}

Node getNodeAt(int position) // return the node at a given position (0-based)
{
Node p = start;
for(int i = 0; i < position; i++)
p = p.next;
return p;
}

public void add(int data) // append at the end of the list
{
Node newnode = new Node();
newnode.data = data;
newnode.next = null;
if(size == 0)
start = newnode;
else
getNodeAt(size-1).next = newnode;
size++;
}

public void insertfront(int data) // insert before the first node
{
Node newnode = new Node();
newnode.data = data;
newnode.next = start;
start = newnode;
size++;
}

public void insertAt(int position, int data)
{
if(position == 0)
{
insertfront(data);
}
else
{
Node tempnode = getNodeAt(position-1);
Node newnode = new Node();
newnode.data = data;
newnode.next = tempnode.next;
tempnode.next = newnode;
size++;
}
}

public int removeAtFirst() // delete the first node and return its data
{
if(size == 0)
{
System.out.println("Empty List");
return -1;
}
int data = start.data;
start = start.next;
size--;
return data;
}

public int removeAtLast() // delete the last node and return its data
{
if(size == 0)
{
System.out.println("Empty List");
return -1;
}
if(size == 1)
return removeAtFirst();
Node tempnode = getNodeAt(size-2); // node before the last one
int data = tempnode.next.data;
tempnode.next = null;
size--;
return data;
}

public String toString() // used when the list is printed
{
StringBuilder sb = new StringBuilder("[ ");
for(Node p = start; p != null; p = p.next)
sb.append(p.data).append(" ");
return sb.append("]").toString();
}

public static void main(String[] args) throws IOException
{
SinglyLinkedList l1 = new SinglyLinkedList();
BufferedReader bf = new BufferedReader(new InputStreamReader(System.in));
System.out.println("1->Add Element 2->Insert Front 3->Insert at position 4->Remove Front 5->Remove Last 6->Exit");
int choice = Integer.parseInt(bf.readLine());
switch(choice)
{
case 1: System.out.println("Enter the element to be inserted:");
l1.add(Integer.parseInt(bf.readLine()));
System.out.println("Linked List after insertion: " + l1);
break;
case 2: System.out.println("Enter the element to be inserted at first:");
l1.insertfront(Integer.parseInt(bf.readLine()));
System.out.println("Linked List after inserting at first: " + l1);
break;
case 3: System.out.println("Enter the position and the element:");
int position = Integer.parseInt(bf.readLine());
int element = Integer.parseInt(bf.readLine());
l1.insertAt(position, element);
System.out.println("Linked List after inserting at the given position: " + l1);
break;
case 4: System.out.println("Removed front element: " + l1.removeAtFirst());
break;
case 5: System.out.println("Removed last element: " + l1.removeAtLast());
break;
default: break;
}
}
}
7) Explain in detail about Double Linked List with a java program?
A more sophisticated kind of linked list is a doubly linked list or two way linked list. Each node
has two links:
One of them points to previous node or points to a null value if it is the first node.
The other points to the next node or points to a null value if it is a final node.
A doubly linked list is a linked data structure that consists of a set of sequentially linked records
called nodes. Each node contains two fields, called links, that are references to the previous and
to the next node in the sequence of nodes. The beginning and ending nodes' previous and next
links, respectively, point to some kind of terminator, typically a sentinel node or null, to facilitate
traversal of the list. If there is only one sentinel node, then the list is circularly linked via the
sentinel node. It can be conceptualized as two singly linked lists formed from the same data
items, but in opposite sequential orders.
The two node links allow traversal of the list in either direction. While adding or removing a
node in a doubly-linked list requires changing more links than the same operations on a singly
linked list, the operations are simpler and potentially more efficient (for nodes other than first
nodes) because there is no need to keep track of the previous node during traversal or no need to
traverse the list to find the previous node, so that its link can be modified.
Program:
class Link
{
public long dData; // data item
public Link next; // next link in list
public Link previous; // previous link in list
// -------------------------------------------------------------
public Link(long d) // constructor
{ dData = d; }
// -------------------------------------------------------------
public void displayLink() // display this link
{ System.out.print(dData + " "); }
// -------------------------------------------------------------
} // end class Link

class DoublyLinkedList
{
private Link first; // ref to first item
private Link last; // ref to last item
// -------------------------------------------------------------
public DoublyLinkedList() // constructor
{
first = null; // no items on list yet
last = null;
}
// -------------------------------------------------------------

// fragment of the demo code:
theList.insertLast(11); // insert at rear
theList.insertLast(33);
theList.insertLast(55);
theList.displayForward();
Circular Linked List: the link of the last node may point back to the
first node of the list; in that case the list is said to be circular or circularly linked; otherwise it is
said to be open or linear.
In the case of a circular doubly linked list, the only change that occurs is that the end, or "tail", of
the said list is linked back to the front, or "head", of the list and vice versa.
Performance
1. The advantage is that we no longer need both a head and tail variable to keep track of
the list. Even if only a single variable is used, both the first and the last list elements can
be found in constant time. Also, for implementing queues we will only need one pointer
namely tail, to locate both head and tail.
2. The disadvantage is that the algorithms have become more complicated.
Basic Operations on a Circular Linked List
Insert - inserts a new element at the end of the list.
Delete - deletes any node from the list.
Find - finds any node in the list.
Print - prints the list.
A Java Program:
import java.lang.*;
import java.util.*;
import java.io.*;
class SLinkedCircularList
{
private int data;
private SLinkedCircularList next;
public SLinkedCircularList()
{
data = 0;
next = this;
}
public SLinkedCircularList(int value)
{
data = value;
next = this;
}
public SLinkedCircularList InsertNext(int value)
{
SLinkedCircularList node = new SLinkedCircularList(value);
if (this.next == this) // only one node in the circular list
{
// Easy to handle, after the two lines of executions,
// there will be two nodes in the circular list
node.next = this;
this.next = node;
}
else
{
// Insert in the middle
SLinkedCircularList temp = this.next;
node.next = temp;
this.next = node;
}
return node;
}
public int DeleteNext()
{
if (this.next == this)
{
System.out.println("\nThe node can not be deleted as there is only one node in the circular list");
return 0;
}
SLinkedCircularList node = this.next;
this.next = this.next.next;
node = null;
return 1;
}
public void Traverse()
{
Traverse(this);
}
public void Traverse(SLinkedCircularList node)
{
if (node == null)
node = this;
System.out.println("\n\nTraversing in Forward Direction\n\n");
SLinkedCircularList startnode = node;
do
{
System.out.println(node.data);
node = node.next;
}
while (node != startnode);
}
public int GetNumberOfNodes()
{
return GetNumberOfNodes(this);
}
public int GetNumberOfNodes(SLinkedCircularList node)
{
if (node == null)
node = this;
int count = 0;
SLinkedCircularList startnode = node;
do
{
count++;
node = node.next;
}
while (node != startnode);
System.out.println("\nCurrent Node Value: " + node.data);
System.out.println("\nTotal nodes in Circular List: " + count);
return count;
}
public static void main(String[] args)
{
SLinkedCircularList node1 = new SLinkedCircularList(1);
node1.DeleteNext(); // Delete will fail in this case.
SLinkedCircularList node2 = node1.InsertNext(2);
node1.DeleteNext(); // It will delete the node2.
node2 = node1.InsertNext(2); // Insert it again
SLinkedCircularList node3 = node2.InsertNext(3);
SLinkedCircularList node4 = node3.InsertNext(4);
SLinkedCircularList node5 = node4.InsertNext(5);
node1.GetNumberOfNodes();
node3.GetNumberOfNodes();
node5.GetNumberOfNodes();
node1.Traverse();
node3.DeleteNext(); // delete the node "4"
node2.Traverse();
node1.GetNumberOfNodes();
node3.GetNumberOfNodes();
node5.GetNumberOfNodes();
}
}
8) Explain in detail about Sparse Matrices?
Data structures used to maintain sparse matrices must provide access to the nonzero elements of
a matrix in a manner which facilitates efficient implementation of the algorithms that are
examined in Section 8. The current sparse matrix implementation also seeks to support a high
degree of generality both in problem size and the definition of a matrix element. Among other
things, this implies that the algorithms must be able to solve problems that are too large to fit
into core.
Simply put, the fundamental sparse matrix data structure is:
Each matrix is a relation in a data base, and
Each nonzero element of a matrix is a tuple in a matrix relation.
Matrix tuples have the structure indicated in the following figure
The row and column domains of each tuple constitute a compound key to the matrix relation.
Their meaning corresponds to the standard dense matrix terminology.
The description of a matrix element is left intentionally vague. Its definition varies with the
application. Matrix elements must include a real number, double precision real number, complex
number, or any other entity for which the arithmetical operations of addition, subtraction,
multiplication, and division are reasonably defined.
In this context, matrix elements are accessed through high level data base operations:
Get retrieves a random tuple.
Next retrieves tuples sequentially. You will recall that the scan operator is used extensively by
sparse matrix algorithms in Section 8. Scan is implemented by embellishing the next primitive.
Put updates the non-key portions of an existing tuple.
Insert adds a new tuple to a relation.
Delete removes an existing tuple from a relation.
This data structure places few constraints on the representation of a matrix. However, several
conventions are adopted to facilitate consistent algorithms and efficient cache access:
Matrices have one-based indexing, i.e., the row and column indices of an n x n matrix range from
1 to n.
Column zero exists for each row of an asymmetric matrix. Column zero serves as a row header
and facilitates row operations. It does not enter into the calculations.
A symmetric matrix is stored as an upper triangular matrix. In this representation, the
diagonal element anchors row operations as well as entering into the computations. Column zero
is not used for symmetric matrices.
Example of sparse matrix
[ 11 22 0 0 0 0 0 ]
[ 0 33 44 0 0 0 0 ]
[ 0 0 55 66 77 0 0 ]
[ 0 0 0 0 0 88 0 ]
[ 0 0 0 0 0 0 99 ]
The above sparse matrix contains only 9 nonzero elements out of 35; the remaining 26 elements
are zero.
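A common in-memory form of this idea is the triplet (row, column, value) representation, which stores only the nonzero entries. The sketch below (class and method names are ours, not the database scheme described above) converts the example matrix into triplets; only 9 entries are kept instead of 35 cells.

```java
// Triplet (row, column, value) representation of a sparse matrix:
// only nonzero elements are stored.
public class SparseTriplet {
    public static int[][] toTriplets(int[][] m) {
        int nonzero = 0;
        for (int[] row : m) for (int v : row) if (v != 0) nonzero++;
        int[][] t = new int[nonzero][3];
        int k = 0;
        for (int i = 0; i < m.length; i++)
            for (int j = 0; j < m[i].length; j++)
                if (m[i][j] != 0) { t[k][0] = i; t[k][1] = j; t[k][2] = m[i][j]; k++; }
        return t;
    }

    public static void main(String[] args) {
        int[][] m = {
            { 11, 22, 0, 0, 0, 0, 0 },
            { 0, 33, 44, 0, 0, 0, 0 },
            { 0, 0, 55, 66, 77, 0, 0 },
            { 0, 0, 0, 0, 0, 88, 0 },
            { 0, 0, 0, 0, 0, 0, 99 }
        };
        System.out.println(toTriplets(m).length); // prints 9
    }
}
```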
UNIT 2
Contents:
1. Explain in detail about Stack ADT
A stack is a linear data structure in which insertion and deletion takes place at same end. This end
is called the top. The other end of the list is called bottom. A stack is usually visualized not
horizontally but vertically. A stack is container of objects that are inserted and deleted according
to LIFO (Last in First Out) .i.e., the last inserted element in the stack is deleted first.
The process of inserting a new element on the top of the stack is known as the push operation.
After a push operation the top is incremented by one and the new element rests at the top. When
the array is full, no further element can be inserted; this condition is known as stack overflow.
The process of removing an element from the top of the stack is called the pop operation. After
every pop operation, the top is decremented by one. If there are no elements in the stack, the
stack is said to be empty, and attempting to pop it causes stack underflow; in that case the pop
operation cannot be applied.
Pop() - deletes the top element from the stack
Algorithm pop()
{
If isempty() then Throw StackEmptyException;
a<- s[t];
t<- t-1;
return a;
}
Push(x) - add the element x to top of the stack.
Algorithm Push(a)
{
if size() = N then
Throw StackFullException;
Else
t<-t+1;
s[t]<-a;
}
The time complexity of stack to perform all its operations is O(1).
a) Array Implementation of Stack ADT
A Stack Interface
public interface Stack
{ public void push(Object ob);
public Object pop();
public Object peek();
public boolean isEmpty();
public int size();
}
An ArrayStack Class
public class ArrayStack implements Stack
{
private Object a[];
private int top; // stack top
public ArrayStack(int n) // constructor
{ a = new Object[n]; // create stack array
top = -1; // no items in the stack
}
public void push(Object item) // add an item on top of stack
{
if(top == a.length-1)
{ System.out.println("Stack is full");
return;
}
top++; // increment top
a[top] = item; // insert an item
}
public Object pop() // remove an item from top of stack
{
if( isEmpty() )
{ System.out.println("Stack is empty");
return null;
}
Object item = a[top]; // access top item
top--; // decrement top
return item;
}
public Object peek() // return top item without removing it
{
if( isEmpty() ) return null;
return a[top];
}
public boolean isEmpty() // true if stack has no items
{ return (top == -1); }
public int size() // number of items in the stack
{ return top + 1; }
}
N@ru Mtech Cse(1-1) Study Material
(R13)
2. Explain in detail about Queue ADT?
A queue is a linear, sequential list of items that are accessed in the order First In First Out (FIFO),
i.e., the first item inserted into the queue is also the first one to be deleted. The insertion of an element
into the queue is done at one end called the rear, and the deletion or access of an element from the
queue is done at the other end called the front.
a) Array Representation of QUEUE ADT
import java.io.*;
class clrqueue
{
DataInputStream get=new DataInputStream(System.in);
int a[];
int n,front,rear,count,item,i;
void getdata()
{
try
{
System.out.println("Enter the size of the queue:");
n=Integer.parseInt(get.readLine());
a=new int[n];
}
catch(Exception e)
{
System.out.println(e.getMessage());
}
}
void enqueue()
{
try
{
if(count<n)
{
System.out.println("Enter the element to be added:");
item=Integer.parseInt(get.readLine());
a[rear]=item;
rear=(rear+1)%n; // wrap around to the start of the array
count++;
}
else
System.out.println("QUEUE IS FULL");
}
catch(Exception e)
{
System.out.println(e.getMessage());
}
}
void dequeue()
{
if(count!=0)
{
System.out.println("The item deleted is:"+a[front]);
front=(front+1)%n; // wrap around to the start of the array
count--;
}
else
System.out.println("QUEUE IS EMPTY");
}
void display()
{
int m=0;
if(count==0)
System.out.println("QUEUE IS EMPTY");
else
{
for(i=front;m<count;i++,m++)
System.out.println(" "+a[i%n]);
}
}
}
class myclrqueue
{
public static void main(String arg[])
{
DataInputStream get=new DataInputStream(System.in);
int ch;
clrqueue obj=new clrqueue();
obj.getdata();
try
{
do
{
System.out.println(" 1.Enqueue 2.Dequeue 3.Display 4.Exit");
System.out.println("Enter the choice");
ch=Integer.parseInt(get.readLine());
switch (ch)
{
case 1:
obj.enqueue();
break;
case 2:
obj.dequeue();
break;
case 3:
obj.display();
break;
}
}
while(ch!=4);
}
catch(Exception e)
{
System.out.println(e.getMessage());
}
}
}
b) Linked List Representation of QUEUE ADT
import java.io.*;
class Node
{
public int item;
public Node next;
public Node(int val)
{
item = val;
}
}
class LinkedList
{
private Node front,rear;
public LinkedList()
{
front = null;
rear = null;
}
public void insert(int val)
{
Node newNode = new Node(val);
if (front == null) {
front = rear = newNode;
}
else {
rear.next = newNode;
rear = newNode;
}
}
public int delete()
{
if(front==null)
{
System.out.println("Queue is Empty");
return 0;
}
else
{
int temp = front.item;
front = front.next;
return temp;
}
}
public void display()
{
if(front==null)
{
System.out.println("Queue is Empty");
}
else
{
System.out.println("Elements in the Queue");
Node current = front;
while(current != null)
{
System.out.println("[" + current.item + "] ");
current = current.next;
}
System.out.println("");
}
}
}
class QueueLinkedList
{
public static void main(String[] args) throws IOException
{
LinkedList ll = new LinkedList();
System.out.println("1.INSERT\n2.DELETE\n3.DISPLAY\n4.EXIT");
while(true)
{
System.out.println("Enter the Key of the Operation");
int c=Integer.parseInt((new BufferedReader(new InputStreamReader(System.in))).readLine());
switch(c)
{
case 1:
System.out.println("Enter the Element");
int val=Integer.parseInt((new BufferedReader(new
InputStreamReader(System.in))).readLine());
ll.insert(val);
break;
case 2:
int temp=ll.delete();
if(temp!=0)
System.out.println("Element deleted is [" + temp + "] ");
break;
case 3:
ll.display();
break;
case 4:
System.exit(0);
default:
System.out.println("You have entered invalid Key.\n Try again");
}
}
}
}
3. Explain in detail about Circular Queue?
A circular queue is a particular implementation of a queue. It is very efficient. It is also quite
useful in low-level code, because insertion and deletion are totally independent, which means
that you don't have to worry about an interrupt handler trying to do an insertion at the same time
as your main code is doing a deletion.
A circular queue consists of an array that contains the items in the queue, two array indexes and
an optional length. The indexes are called the head and tail pointers.
Is the queue empty or full?
There is a problem with this: both an empty queue and a full queue would be indicated by
having the head and tail point to the same element. There are two ways around this: either
maintain a variable with the number of items in the queue, or create the array with one more
element than you will actually need, so that the queue is never full.
Operations
Insertion and deletion are very simple.
To insert, write the element to the tail index and increment the tail, wrapping if necessary.
To delete, save the head element and increment the head, wrapping if necessary.
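The insert/delete rules above can be sketched as a minimal fixed-size circular queue. The class and method names here are illustrative, not taken from the syllabus code:

```java
// Minimal circular queue backed by a fixed-size array.
class CircularQueue {
    private final int[] a;
    private int front = 0, rear = 0, count = 0;

    CircularQueue(int capacity) { a = new int[capacity]; }

    boolean enqueue(int x) {
        if (count == a.length) return false;   // queue is full
        a[rear] = x;
        rear = (rear + 1) % a.length;          // increment tail, wrapping if necessary
        count++;
        return true;
    }

    Integer dequeue() {
        if (count == 0) return null;           // queue is empty
        int x = a[front];
        front = (front + 1) % a.length;        // increment head, wrapping if necessary
        count--;
        return x;
    }

    int size() { return count; }
}
```

Keeping the explicit `count` variable is one of the two ways mentioned above to distinguish an empty queue from a full one.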
A circular buffer first starts empty and of some predefined length, for example a 7-element buffer.
Assume that a 1 is written into the middle of the buffer (the exact starting location does not matter
in a circular buffer). Then assume that two more elements, 2 and 3, are added and get appended
after the 1.
If two elements are then removed from the buffer, the oldest values inside the buffer are
removed. The two elements removed, in this case, are 1 and 2, leaving the buffer with just a 3.
Assume the buffer then fills up with the elements 4 through 9.
A consequence of the circular buffer is that when it is full and a subsequent write is performed,
it starts overwriting the oldest data. In this case, two more elements, A and B, are added,
and they overwrite the 3 and the 4.
Alternatively, the routines that manage the buffer could prevent overwriting the data and return
an error or raise an exception. Whether or not data is overwritten is up to the semantics of the
buffer routines or the application using the circular buffer.
Finally, if two elements are now removed, what would be returned is not 3 and 4 but 5 and 6,
because A and B overwrote the 3 and the 4, leaving the buffer with 7, 8, 9, A, B.
4. Explain in detail about Double-Ended Queue (Deque) ADT?
A double-ended queue or deque is simply a combination of a stack and a queue in that items
can be inserted or removed from both ends.
Functions
int size() Returns how many items are in the deque.
bool isEmpty() Returns whether the deque is empty (i.e. size is 0).
Object removeFirst() Removes the object from the front and returns it.
Object removeLast() Removes the object from the back and returns it.
Doubly Linked List implementation: each node has prev and next links, hence all the above
operations run in O(1) time.
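java.util also ships a ready-made deque. A short sketch of the operations listed above using java.util.ArrayDeque (separate from the study material's own Deque class):

```java
import java.util.ArrayDeque;

public class DequeDemo {
    public static void main(String[] args) {
        ArrayDeque<Integer> dq = new ArrayDeque<Integer>();
        dq.addFirst(2);                        // deque: 2
        dq.addFirst(1);                        // deque: 1 2
        dq.addLast(3);                         // deque: 1 2 3
        System.out.println(dq.removeFirst()};  // removed from the front
        System.out.println(dq.removeLast());   // removed from the back
        System.out.println(dq.size());         // items remaining
        System.out.println(dq.isEmpty());
    }
}
```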
array[++rear]=item;
numberOfItems++;
}
public Item removeFirst()
{
Item temp=array[front++];
if(front==maxSize)
front=0;
numberOfItems--;
return temp;
}
public Item removeLast()
{
Item temp=array[rear--];
if(rear==-1)
rear=maxSize-1;
numberOfItems--;
return temp;
}
public int getFirst()
{
return front;
}
public int getLast()
{
return rear;
}
public static void main(String[] args)
{
Deque<String> element1=new Deque<String>();
Deque<String> element2=new Deque<String>();
for(int i=0;i<args.length;i++)
element1.addFirst(args[i]);
try {
for(;element1.numberOfItems+1>0;)
{
String temp=element1.removeFirst();
System.out.println(temp);
}
}
catch(Exception ex)
{
System.out.println("End Of Execution due to remove from empty queue");
}
System.out.println();
for(int i=0;i<args.length;i++)
element2.addLast(args[i]);
try {
for(;element2.numberOfItems+1>0;)
{
String temp=element2.removeLast();
System.out.println(temp);
}
}
catch(Exception ex)
{
System.out.println("End Of Execution due to remove from empty queue");
}
}
5. Write a java program for Infix To Postfix Conversion Using Stack
class InfixToPostfix
{
java.util.Stack<Character> stk =
new java.util.Stack<Character>();
public String toPostfix(String infix)
{
infix = "(" + infix + ")"; // enclose infix expr within parentheses
String postfix = "";
/* scan the infix char-by-char until end of string is reached */
for( int i=0; i<infix.length(); i++)
{
char ch, item;
ch = infix.charAt(i);
if( isOperand(ch) ) // if(ch is an operand), then
postfix = postfix + ch; // append ch to postfix string
if( ch == '(' ) // if(ch is a left-bracket), then
stk.push(ch); // push onto the stack
if( isOperator(ch) ) // if(ch is an operator), then
{
item = stk.pop(); // pop an item from the stack
/* if(item is an operator), then check the precedence of ch and item*/
if( isOperator(item) )
{
if( precedence(item) > precedence(ch) ) // item binds less tightly than ch (smaller rank = higher precedence)
{
stk.push(item);
stk.push(ch);
}
else
{ postfix = postfix + item;
stk.push(ch);
}
}
else
{ stk.push(item);
stk.push(ch);
}
} // end of if(isOperator(ch))
if( ch == ')' )
{
item = stk.pop();
while( item != '(' )
{
postfix = postfix + item;
item = stk.pop();
}
}
} // end of for-loop
return postfix;
} // end of toPostfix() method
public boolean isOperand(char c)
{ return(c >= 'A' && c <= 'Z'); }
public boolean isOperator(char c)
{
return( c=='+' || c=='-' || c=='*' || c=='/' );
}
public int precedence(char c)
{
int rank = 1; // rank = 1 for '*' or '/' (smaller rank means higher precedence)
if( c == '+' || c == '-' ) rank = 2;
return rank;
}
}
///////////////////////// InfixToPostfixDemo.java ///////////////
class InfixToPostfixDemo
{
public static void main(String args[])
{
InfixToPostfix obj = new InfixToPostfix();
String infix = "A*(B+C/D)-E";
System.out.println("infix: " + infix );
System.out.println("postfix:"+obj.toPostfix(infix) );
}
}
6. Explain in detail about Priority Queue ADT?
A priority queue is a collection of zero or more elements. Each element has a priority or
value. There are two types of priority queues:
Ascending Priority Queue (Min)
Descending Priority Queue (Max)
Ascending/Min Priority Queue:
It is a collection of elements into which elements are inserted in any order, but from which only
the smallest element, i.e., the element with minimum priority, can be removed.
Descending/Max Priority Queue:
It is a collection of elements into which elements are inserted in any order, but from which only
the largest element, i.e., the element with maximum priority, can be removed.
Implementation of Priority Queue:
A priority queue can be implemented using arrays, linked lists or the heap data structure. The
heap data structure is the best way to implement a priority queue efficiently.
Operations on Priority Queue ADT:
Empty() - return true iff the queue is empty
Size() - return the number of elements in the queue
Top() - return the element with the highest priority
Pop() - remove the element with the highest priority from the queue (the largest in a max
priority queue, the smallest in a min priority queue)
Push(x) - insert the element x into the queue
Applications: CPU job scheduling, Dijkstra's single source shortest path algorithm, Huffman
coding, and event-driven simulation.
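In java.util, the PriorityQueue class provides an ascending (min) priority queue backed by a heap; a descending (max) queue can be obtained with a reverse-order comparator. A short sketch:

```java
import java.util.Collections;
import java.util.PriorityQueue;

public class PriorityQueueDemo {
    public static void main(String[] args) {
        // Ascending (min) priority queue: the smallest element is removed first.
        PriorityQueue<Integer> min = new PriorityQueue<Integer>();
        min.add(30); min.add(10); min.add(20);
        System.out.println(min.poll()); // prints 10

        // Descending (max) priority queue via a reverse-order comparator.
        PriorityQueue<Integer> max =
            new PriorityQueue<Integer>(10, Collections.reverseOrder());
        max.add(30); max.add(10); max.add(20);
        System.out.println(max.poll()); // prints 30
    }
}
```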
7. Explain in detail about Heaps and operations on Heaps?
A heap is a complete binary tree whose elements are stored at its nodes and whose keys satisfy
the heap order property. The heap data structure is of two types:
Max Heap
Min Heap
Max Heap: A max heap is a tree in which the value in each node is greater than or equal to
those in its children.
Min Heap: A min heap is a tree in which the value in each node is less than or equal to those
in its children.
Heap Operations
Insertion (Push)
To insert an element x into a heap, we create a hole in the next available location, since
otherwise the tree would not be complete. If x can be placed in the hole without violating the
heap order, then we do so and are done. Otherwise we slide the element that is in the hole's
parent node into the hole.
The following figure shows the insertion of 14. We create a hole in the next available heap
location. Inserting 14 in the hole would violate the heap order property, so 31 slides down into
the hole. This strategy is continued until the correct location for 14 is found.
This general strategy is known as percolate up: the new element is percolated up the heap
until its correct location is found.
Deletion removes the root and percolates the hole down: at each step the hole's larger child
slides up (the child index doubling at each level), and finally the last element is placed in the
hole:
child = child*2;
}
Heap[currentNode] = lastElement;
}
The time complexity to perform the above operations is O(log n).
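The percolate-up insertion and percolate-down deletion described above can be written out as a compact array-based max heap. This is an illustrative sketch, not the textbook's listing; index 0 is left unused so the children of node i sit at 2i and 2i+1:

```java
// Array-based max heap; index 0 unused so children of i are at 2i and 2i+1.
class MaxHeap {
    private final int[] heap;
    private int size = 0;

    MaxHeap(int capacity) { heap = new int[capacity + 1]; }

    void insert(int x) {                      // percolate up
        int hole = ++size;                    // create a hole in the next location
        while (hole > 1 && heap[hole / 2] < x) {
            heap[hole] = heap[hole / 2];      // slide the parent down into the hole
            hole /= 2;
        }
        heap[hole] = x;
    }

    int deleteMax() {                         // percolate down
        int max = heap[1];
        int last = heap[size--];
        int hole = 1;
        while (2 * hole <= size) {
            int child = 2 * hole;
            if (child < size && heap[child + 1] > heap[child]) child++; // larger child
            if (last >= heap[child]) break;
            heap[hole] = heap[child];         // slide the larger child up
            hole = child;
        }
        heap[hole] = last;                    // place the last element in the hole
        return max;
    }
}
```

Both operations follow one root-to-leaf path, which gives the O(log n) bound stated above.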
8. Explain about the ArrayList, LinkedList, Vector and Stack classes in the java.util package?
Java.Util Package-Arraylist
It implements all optional list operations and it also permits all elements, includes null.
It provides methods to manipulate the size of the array that is used internally to store the
list.
The constant factor is low compared to that for the LinkedList implementation.
Class declaration
Following is the declaration for java.util.ArrayList class:
public class ArrayList<E>
extends AbstractList<E>
implements List<E>, RandomAccess, Cloneable, Serializable
Here <E> represents an Element. For example, if you're building an array list of Integers then
you'd initialize it as
ArrayList<Integer> list = new ArrayList<Integer>();
Class constructors
1. ArrayList() - Creates an empty list with an initial capacity sufficient to hold 10 elements.
2. ArrayList(Collection<? extends E> c) - Creates a list containing the elements of the specified collection.
3. ArrayList(int initialCapacity) - Creates an empty list with the specified initial capacity.
Class methods
1. boolean add(E e) - Appends the specified element to the end of this list.
2. void add(int index, E element) - Inserts the specified element at the specified position in this list.
3. boolean addAll(Collection<? extends E> c) - Appends all of the elements in the specified collection to the end of this list, in the order that they are returned by the specified collection's iterator.
4. boolean addAll(int index, Collection<? extends E> c) - Inserts all of the elements in the specified collection into this list, starting at the specified position.
5. void clear() - Removes all of the elements from this list.
6. Object clone() - Returns a shallow copy of this ArrayList instance.
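A short usage sketch of the ArrayList methods listed above:

```java
import java.util.ArrayList;

public class ArrayListDemo {
    public static void main(String[] args) {
        ArrayList<Integer> list = new ArrayList<Integer>();
        list.add(10);                       // append to the end
        list.add(30);
        list.add(1, 20);                    // insert at index 1
        System.out.println(list);           // prints [10, 20, 30]
        System.out.println(list.size());    // prints 3
        list.clear();                       // remove all elements
        System.out.println(list.isEmpty()); // prints true
    }
}
```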
Java.Util Package-Linkedlist
The java.util.LinkedList class provides the operations expected of a doubly-linked list.
Operations that index into the list traverse the list from the beginning or the end, whichever
is closer to the specified index.
Class declaration
Following is the declaration for java.util.LinkedList class:
public class LinkedList<E>
extends AbstractSequentialList<E>
implements List<E>, Deque<E>, Cloneable, Serializable
Parameters
Following is the parameter for java.util.LinkedList class:
E -- This is the type of elements held in this collection.
Field
Fields inherited from class java.util.AbstractList.
Class constructors
1. LinkedList() - Constructs an empty list.
Class methods
1. boolean add(E e) - Appends the specified element to the end of this list.
2. void addFirst(E e) - Inserts the specified element at the beginning of this list.
3. void addLast(E e) - Appends the specified element to the end of this list.
4. void clear() - Removes all of the elements from this list.
5. Object clone() - Returns a shallow copy of this LinkedList.
6. boolean contains(Object o) - Returns true if this list contains the specified element.
7. Iterator<E> descendingIterator() - Returns an iterator over the elements in this deque in reverse sequential order.
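A short usage sketch of the LinkedList methods listed above:

```java
import java.util.LinkedList;

public class LinkedListDemo {
    public static void main(String[] args) {
        LinkedList<String> ll = new LinkedList<String>();
        ll.add("B");              // append to the end
        ll.addFirst("A");         // insert at the beginning
        ll.addLast("C");          // append to the end
        System.out.println(ll);                             // prints [A, B, C]
        System.out.println(ll.contains("B"));               // prints true
        System.out.println(ll.descendingIterator().next()); // prints C
    }
}
```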
Java.Util Package-Vector
The size of a Vector can grow or shrink as needed to accommodate adding and removing
items.
As of the Java 2 platform v1.2, this class was retrofitted to implement the List interface.
Class declaration
Following is the declaration for java.util.Vector class:
public class Vector<E>
extends AbstractList<E>
implements List<E>, RandomAccess, Cloneable, Serializable
Here <E> represents an Element, which could be any class. For example, if you're building a
vector of Integers then you'd initialize it as follows:
Vector<Integer> v = new Vector<Integer>();
Class constructors
1. Vector() - Creates an empty vector so that its internal data array has size 10 and its standard capacity increment is zero.
2. Vector(Collection<? extends E> c) - Creates a vector containing the elements of the specified collection, in the order they are returned by the collection's iterator.
3. Vector(int initialCapacity) - Creates an empty vector with the specified initial capacity and with its capacity increment equal to zero.
4. Vector(int initialCapacity, int capacityIncrement) - Creates an empty vector with the specified initial capacity and capacity increment.
Class methods
1. boolean add(E e) - Appends the specified element to the end of this Vector.
2. void add(int index, E element) - Inserts the specified element at the specified position in this Vector.
3. boolean addAll(Collection<? extends E> c) - Appends all of the elements in the specified Collection to the end of this Vector.
4. boolean addAll(int index, Collection<? extends E> c) - Inserts all of the elements in the specified Collection into this Vector at the specified position.
5. void addElement(E obj) - Adds the specified component to the end of this vector, increasing its size by one.
6. int capacity() - Returns the current capacity of this vector.
7. void clear() - Removes all of the elements from this vector.
8. Object clone() - Returns a clone of this vector.
9. boolean contains(Object o) - Returns true if this vector contains the specified element.
Java.Util Package-Stacks
The java.util.Stack class represents a last-in-first-out (LIFO) stack of objects.
When a stack is first created, it contains no items.
In this class, the last element inserted is accessed first.
Class declaration
Following is the declaration for java.util.Stack class:
public class Stack<E>
extends Vector<E>
Class constructors
1. Stack() - Creates an empty stack.
Class methods
1. boolean empty() - Tests if this stack is empty.
2. E peek() - Looks at the object at the top of this stack without removing it from the stack.
3. E pop() - Removes the object at the top of this stack and returns that object as the value of this function.
4. E push(E item) - Pushes an item onto the top of this stack.
5. int search(Object o) - Returns the 1-based position of an object on this stack, measured from the top.
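A short usage sketch of the Stack methods listed above:

```java
import java.util.Stack;

public class StackDemo {
    public static void main(String[] args) {
        Stack<Integer> st = new Stack<Integer>();
        st.push(1);
        st.push(2);
        st.push(3);
        System.out.println(st.peek());    // prints 3 (top item, not removed)
        System.out.println(st.pop());     // prints 3 (removed, LIFO order)
        System.out.println(st.search(1)); // prints 2 (1-based distance from the top)
        System.out.println(st.empty());   // prints false
    }
}
```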
____________________________________________________________________________________
Java.util Interfaces
The java.util package contains the collections framework, legacy collection classes, the event
model, date and time facilities, internationalization, and miscellaneous utility classes (a string
tokenizer, a random-number generator, and a bit array). Its important interfaces include:
1. Deque<E> - A linear collection that supports element insertion and removal at both ends.
2. Enumeration<E> - An object that implements the Enumeration interface generates a series of elements, one at a time.
3. EventListener - A tagging interface that all event listener interfaces must extend.
4. Formattable - The Formattable interface must be implemented by any class that needs to perform custom formatting using the 's' conversion specifier of Formatter.
5. Iterator<E> - An iterator over a collection.
6. Queue<E> - A collection designed for holding elements prior to processing.
[MARCH 2010] [SEPT 2010] [APR 2011] [APR [NOV
UNIT 3
CONTENTS
1) Explain in detail about Linear Search?
Linear search or sequential search is a method for finding a particular value in a list that consists
of checking every one of its elements, one at a time and in sequence, until the desired one is
found.
Linear search is the simplest search algorithm; it is a special case of brute-force search. Its worst
case cost is proportional to the number of elements in the list; and so is its expected cost, if all
list elements are equally likely to be searched for.
Analysis
For a list with n items, the best case is when the value is equal to the first element of the list, in
which case only one comparison is needed. The worst case is when the value is not in the list (or
occurs only once at the end of the list), in which case n comparisons are needed.
If the value being sought occurs k times in the list, and all orderings of the list are equally likely,
the expected number of comparisons is n if k = 0, and (n + 1)/(k + 1) if 1 <= k <= n.
For example, if the value being sought occurs once in the list, and all orderings of the list are
equally likely, the expected number of comparisons is (n + 1)/2. However, if it is known that it
occurs exactly once, then at most n - 1 comparisons are needed, and the expected number of
comparisons is (n + 1)/2 - 1/n.
Application
Linear search is usually very simple to implement, and is practical when the list has only a few
elements, or when performing a single search in an unordered list.
When many values have to be searched in the same list, it often pays to pre-process the list in
order to use a faster method. For example, one may sort the list and use binary search, or build
any efficient search data structure from it. Should the content of the list change frequently,
repeated re-organization may be more trouble than it is worth.
As a result, even though in theory other search algorithms may be faster than linear search (for
instance binary search), in practice even on medium-sized arrays (around 100 items or less) it
might be infeasible to use anything else. On larger arrays, it only makes sense to use other, faster
search methods if the data is large enough, because the initial time to prepare (sort) the data is
comparable to many linear searches.
Program in Java
import java.util.Scanner;
class LinearSearch
{
public static void main(String args[])
{
int c, n, search, array[];
Scanner in = new Scanner(System.in);
System.out.println("Enter number of elements");
n = in.nextInt();
array = new int[n];
System.out.println("Enter " + n + " integers");
for (c = 0; c < n; c++)
array[c] = in.nextInt();
System.out.println("Enter value to find");
search = in.nextInt();
for (c = 0; c < n; c++)
{
if (array[c] == search) /* Searching element is present */
{
System.out.println(search + " is present at location " + (c + 1) + ".");
break;
}
}
if (c == n) /* Searching element is absent */
System.out.println(search + " is not present in array.");
}
}
2) Explain in detail about Binary Search?
A Binary search or half-interval search algorithm finds the position of a specified input value
(the search "key") within an array sorted by key value. In each step, the algorithm compares the
search key value with the key value of the middle element of the array. If the keys match, then a
matching element has been found and its index, or position, is returned. Otherwise, if the search
key is less than the middle element's key, then the algorithm repeats its action on the sub-array to
the left of the middle element or, if the search key is greater, on the sub-array to the right. If the
remaining array to be searched is empty, then the key cannot be found in the array and a special
"not found" indication is returned.
A binary search halves the number of items to check with each iteration, so locating an item (or
determining its absence) takes logarithmic time. A binary search is a dichotomic divide and
conquer search algorithm.
Performance
With each test that fails to find a match at the probed position, the search is continued with one
or other of the two sub-intervals, each at most half the size. More precisely, if the number of
items, N, is odd then both sub-intervals will contain (N-1)/2 elements, while if N is even then the
two sub-intervals contain N/2 - 1 and N/2 elements.
If the original number of items is N then after the first iteration there will be at most N/2 items
remaining, then at most N/4 items, at most N/8 items, and so on. In the worst case, when the
value is not in the list, the algorithm must continue iterating until the span has been made empty;
this will have taken at most floor(log2(N)) + 1 iterations, where floor() denotes the floor
function that rounds its argument down to an integer. This worst case analysis is tight: for any N
there exists a query that takes exactly floor(log2(N)) + 1 iterations. When compared to linear search,
whose worst-case behaviour is N iterations, we see that binary search is substantially faster as N
grows large. For example, to search a list of one million items takes as many as one million
iterations with linear search, but never more than twenty iterations with binary search. However,
a binary search can only be performed if the list is in sorted order.
Program in Java
import java.util.Scanner;
class BinarySearch
{
public static void main(String args[])
{
int c, first, last, middle, n, search, array[];
Scanner in = new Scanner(System.in);
System.out.println("Enter number of elements");
n = in.nextInt();
array = new int[n];
System.out.println("Enter " + n + " integers");
for (c = 0; c < n; c++)
array[c] = in.nextInt();
System.out.println("Enter value to find");
search = in.nextInt();
first = 0;
last = n - 1;
middle = (first + last)/2;
while( first <= last )
{
if ( array[middle] < search )
first = middle + 1;
else if ( array[middle] == search ) /* Searching element is present */
{
System.out.println(search + " found at location " + (middle + 1) + ".");
break;
}
else
last = middle - 1;
middle = (first + last)/2;
}
if ( first > last ) /* Searching element is absent */
System.out.println(search + " is not present in the list.");
}
}
3) Explain in detail about Hashing and Hash Functions?
A hash function is any algorithm that maps data of a variable length to data of a fixed length. The
values returned by a hash function are called hash values, hash codes, hash sums, checksums or
simply hashes.
Hash functions are primarily used to generate fixed-length output data that acts as a shortened
reference to the original data. This is useful when the original data is too cumbersome to use in
its entirety.
One practical use is a data structure called a hash table where the data is stored associatively.
Searching linearly for a person's name in a list becomes cumbersome as the length of the list
increases, but the hashed value can be used to store a reference to the original data and retrieve
it in constant time (barring collisions). Another use is in cryptography, the science of encoding and
safeguarding data. It is easy to generate hash values from input data and easy to verify that the
data matches the hash, but hard to 'fake' a hash value to hide malicious data. This is the principle
behind the Pretty Good Privacy algorithm for data validation.
Hash functions are also frequently used to accelerate table lookup or data comparison tasks such
as finding items in a database, detecting duplicated or similar records in a large file and finding
similar stretches in DNA sequences.
A hash function should be deterministic: when it is invoked twice on pieces of data that should
be considered equal (e.g., two strings containing exactly the same characters), the function
should produce the same value. This is crucial to the correctness of virtually all algorithms based
on hashing. In the case of a hash table, the lookup operation should look at the slot where the
insertion algorithm actually stored the data that is being sought for, so it needs the same hash
value.
Hash functions are typically not invertible, meaning that it is not possible to reconstruct the input
datum x from its hash value h(x) alone. In many applications, it is common that several values
hash to the same value, a condition called a hash collision. Since collisions cause "confusion" of
objects, which can make exact hash-based algorithms slower and approximate ones less precise, hash
functions are designed to minimize the probability of collisions. For cryptographic uses, hash
functions are engineered in such a way that it is impossible to reconstruct any input from the hash
alone without expending great amounts of computing time (see also one-way function).
Hash functions are related to (and often confused with) checksums, check digits, fingerprints,
randomization functions, error-correcting codes, and cryptographic hash functions. Although these
concepts overlap to some extent, each has its own uses and requirements and is designed and
optimized differently. The HashKeeper database maintained by the American National Drug
Intelligence Center, for instance, is more aptly described as a catalog of file fingerprints than of
hash values.
Hash tables
Hash functions are primarily used in hash tables, to quickly locate a data record (e.g., a
dictionary definition) given its search key (the headword). Specifically, the hash function is used
to map the search key to an index; the index gives the place in the hash table where the
corresponding record should be stored. Hash tables, in turn, are used to implement associative
arrays and dynamic sets.
Typically, the domain of a hash function (the set of possible keys) is larger than its range (the
number of different table indexes), and so it will map several different keys to the same index.
Therefore, each slot of a hash table is associated with (implicitly or explicitly) a set of records,
rather than a single record. For this reason, each slot of a hash table is often called a bucket, and
hash values are also called bucket indices.
Thus, the hash function only hints at the record's location: it tells where one should start looking
for it. Still, in a half-full table, a good hash function will typically narrow the search down to
only one or two entries.
Hash Table Operations
Keys that have the same home bucket are called synonyms
25 and 33 are synonyms with respect to the hash function that is in use
A collision occurs when the home bucket for a new pair is occupied by a pair with
different key
An overflow occurs when there is no space in the home bucket for the new pair
When a bucket can hold only one pair, collisions and overflows occur together
Need a method to handle overflows
Hash Table Issues
The choice of hash function
Overflow handling
The size (number of buckets) of hash table
Hash Functions
Two parts
1. Convert key into an integer in case the key is not
2. Map an integer into a home bucket
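The two parts can be sketched for string keys. The multiplier 31 and the bucket count used in the test are illustrative choices, not prescribed by the text:

```java
public class SimpleHash {
    // Part 1: convert the string key into a non-negative integer.
    static int keyToInt(String key) {
        int h = 0;
        for (int k = 0; k < key.length(); k++) {
            h = 31 * h + key.charAt(k);   // polynomial rolling hash over the characters
        }
        return h & 0x7fffffff;            // clear the sign bit so the result is non-negative
    }

    // Part 2: map the integer into a home bucket.
    static int homeBucket(String key, int numBuckets) {
        return keyToInt(key) % numBuckets;
    }
}
```

The function is deterministic: the same key always maps to the same home bucket, which is the property hash table lookup relies on.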
Chaining
Eliminate overflows by permitting each bucket to keep a list of all pairs for which it is the home
bucket.
Linear probing:
In general, our collision resolution strategy is to generate a sequence of hash table slots (probe
sequence) that can hold the record; test each slot until find empty one (probing)
For example, D=8, keys a,b,c,d have hash values h(a)=3, h(b)=0, h(c)=4, h(d)=3
Where do we insert d? Bucket 3 is already filled.
Probe sequence using linear probing:
h1(d) = (h(d)+1)%8 = 4%8 = 4 (occupied by c)
h2(d) = (h(d)+2)%8 = 5%8 = 5 (empty, so d is inserted here)
h3(d) = (h(d)+3)%8 = 6%8 = 6
etc.
A probe sequence such as 7, 0, 1, 2 wraps around the beginning of the table!
All buckets in the table will be candidates for inserting a new record before the probe
sequence returns to the home position.
Records with adjacent home buckets will not follow the same probe sequence.
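The linear probe sequence described above can be coded directly. This is a minimal insert-only sketch over int keys, with -1 as an illustrative empty-slot marker:

```java
// Open addressing with linear probing; EMPTY marks a free bucket.
class LinearProbingTable {
    static final int EMPTY = -1;
    private final int[] table;

    LinearProbingTable(int size) {
        table = new int[size];
        java.util.Arrays.fill(table, EMPTY);
    }

    int hash(int key) { return key % table.length; }   // home bucket

    // Returns the bucket where key was placed, or -1 if the table is full.
    int insert(int key) {
        int home = hash(key);
        for (int i = 0; i < table.length; i++) {
            int b = (home + i) % table.length;         // probe sequence, wraps around
            if (table[b] == EMPTY) {
                table[b] = key;
                return b;
            }
        }
        return -1;                                     // every bucket was probed
    }
}
```

With D=8 and keys whose hash values match the example above (h=3, 0, 4, then another key with h=3), the last key probes buckets 3 and 4 and lands in bucket 5, as in the text.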
Quadratic probing
Quadratic probing is an open addressing scheme in computer programming for resolving
collisions in hash tableswhen an incoming data's hash value indicates it should be stored in an
already-occupied slot or bucket. Quadratic probing operates by taking the original hash index
and adding successive values of an arbitrary quadratic polynomial until an open slot is found.
For a given hash value H, the indices generated by linear probing are as follows:
H+1, H+2, H+3, ..., H+k (each taken mod the table size)
This method results in primary clustering, and as the cluster grows larger, the search for those
items hashing within the cluster becomes less efficient.
An example sequence using quadratic probing is:
H+1^2, H+2^2, H+3^2, ..., H+k^2 (each taken mod the table size)
Quadratic probing can be a more efficient algorithm in a closed hash table, since it better avoids
the clustering problem that can occur with linear probing, although it is not immune. It also
provides good memory caching because it preserves some locality of reference; however, linear
probing has greater locality and, thus, better cache performance.
Quadratic probing is used in the Berkeley Fast File System to allocate free blocks. The allocation
routine chooses a new cylinder-group when the current is nearly full using quadratic probing,
because of the speed it shows in finding unused cylinder-groups.
Let h(k) be a hash function that maps an element k to an integer in [0, m-1], where m is the size
of the table. Let the ith probe position for a value k be given by the function
h(k,i) = (h(k) + c1*i + c2*i^2) mod m
where c2 != 0. If c2 = 0, then h(k,i) degrades to a linear probe. For a given hash table, the values of
c1 and c2 remain constant.
Examples:
For m = 2^n, a good choice for the constants is c1 = c2 = 1/2, as the values of h(k,i) for i in
[0, m-1] are all distinct. This leads to a probe sequence of h(k), h(k)+1, h(k)+3, h(k)+6, ...,
where the values increase by 1, 2, 3, ...
For prime m > 2, most choices of c1 and c2 will make h(k,i) distinct for i in [0, (m-1)/2].
Such choices include c1 = c2 = 1/2, c1 = c2 = 1, and c1 = 0, c2 = 1. Because there are only
about m/2 distinct probes for a given element, it is difficult to guarantee that insertions
will succeed when the load factor is > 1/2.
Double Hashing
(1) Use one hash function h1 to determine the first slot.
(2) Use a second hash function h2 to determine the increment for the probe sequence:
h(k,i) = (h1(k) + i*h2(k)) mod m, i = 0, 1, ...
Hash table can handle overflows using chaining. Each bucket keeps a chain of all pairs for which
it is the home bucket
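The double-hashing probe formula above can be sketched as follows. The particular h1 and h2 are illustrative choices (h2 is built so it is never zero, keeping the probe sequence moving):

```java
public class DoubleHashing {
    static final int M = 11;                     // table size (prime, illustrative)

    static int h1(int k) { return k % M; }       // determines the first slot
    static int h2(int k) { return 7 - (k % 7); } // determines the increment; never zero

    // i-th probe position for key k: h(k,i) = (h1(k) + i*h2(k)) mod M
    static int probe(int k, int i) {
        return (h1(k) + i * h2(k)) % M;
    }

    public static void main(String[] args) {
        // For k=25: h1(25)=3, h2(25)=7-4=3, so the probes are 3, 6, 9, 1, ...
        for (int i = 0; i < 4; i++) {
            System.out.print(probe(25, i) + " ");
        }
    }
}
```

Unlike linear probing, two keys with the same home bucket but different h2 values follow different probe sequences, which reduces clustering.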
break;
} else {
boolean result = table.containsKey (word);
}
} while (true);
long finishTime = System.currentTimeMillis ( );
System.out.println ("Time to hash " + wordCount + " words is "
+ (finishTime-startTime) + " milliseconds.");
table.printStatistics ( );
}
}
Hashtable.java
public class Hashtable {
private static boolean DEBUGGING = false;
private LinkedList [] myTable;
public Hashtable (int size) {
myTable = new LinkedList [size];
for (int k=0; k<size; k++) {
myTable[k] = new LinkedList ( );
}
}
private static long hash (String key) {
// Replace the return below with one of the following to use a different hash function.
// return hash1 (key);
// return hash2 (key);
// return hash3 (key);
return Math.abs (key.hashCode ( ));
}
// Slight variation on the ETH hashing algorithm
private static int MAGIC1 = 257;
private static long hash1 (String key) {
long h = 1;
for (int k=0; k<key.length(); k++) {
h = ((h % MAGIC1)+1) * (int) key.charAt(k);
}
return h;
}
// Slight variation on the GNU-cpp hashing algorithm
private static int MAGIC2 = 4;
private static long hash2 (String key) {
long h = 0;
for (int k=0; k<key.length(); k++) {
h = MAGIC2 * h + (int) key.charAt(k);
}
return h << 1 >>> 1;
}
// Slight variation on the GNU-cc1 hashing algorithm
private static int MAGIC3 = 613;
private static long hash3 (String key) {
long h = key.length();
for (int k=0; k<key.length(); k++) {
h = MAGIC3 * h + (int) key.charAt(k);
}
return h << 2 >>> 2;
}
N@ru Mtech Cse(1-1) Study Material
(R13)
// Add the key to the table. The value is included just for compatibility with
// the Hashtable class in java.util.
public void put (String key, Integer value) {
int h = (int) (hash (key) % myTable.length);
if (!myTable[h].find (key)) {
myTable[h].insert (new Info (key, value));
if (DEBUGGING) {
System.out.println ("Inserting " + key);
}
} else {
System.err.println (key + " already in table.");
}
}
// Return true if key is in the table, and false otherwise.
public boolean containsKey (String key) {
int h = (int) (hash (key) % myTable.length);
return (myTable[h].find(key));
}
// Print statistics about the table:
//   the maximum length of a collision list;
//   the optimal length of a collision list;
//   the average number of comparisons needed for a successful search;
//   the standard deviation for the number of comparisons needed for
//   a successful search.
public void printStatistics ( ) {
// Code omitted to save paper.
}
}
// Auxiliary classes follow in the file; code is omitted to save paper.
TTThashTest.java
import java.util.*;
public class TTThashTest {
// Measure the time to put all possible Tic-tac-toe boards into the hash table.
public static void main (String [ ] args) {
Hashtable table = new Hashtable ( );
long startTime = System.currentTimeMillis ( );
for (int k=0; k<19683; k++) {
TTTboard b = new TTTboard (k);
table.put (b, new Integer (k));
}
long finishTime = System.currentTimeMillis ( );
System.out.println ("Time to insert all Tic-tac-toe boards = "
+ (finishTime-startTime));
}
}
Java.Util HashMap
java.util
Class HashMap<K,V>
java.lang.Object
java.util.AbstractMap<K,V>
java.util.HashMap<K,V>
Type Parameters:
K - the type of keys maintained by this map
V - the type of mapped values
All Implemented Interfaces:
Serializable, Cloneable, Map<K,V>
Direct Known Subclasses:
LinkedHashMap, PrinterStateReasons
The iterators returned by all of this class's "collection view methods" are fail-fast: if the map is
structurally modified at any time after the iterator is created, in any way except through the
iterator's own remove method, the iterator will throw a ConcurrentModificationException. Thus,
in the face of concurrent modification, the iterator fails quickly and cleanly, rather than risking
arbitrary, non-deterministic behavior at an undetermined time in the future.
Note that the fail-fast behavior of an iterator cannot be guaranteed as it is, generally speaking,
impossible to make any hard guarantees in the presence of unsynchronized concurrent
modification. Fail-fast iterators throw ConcurrentModificationException on a best-effort basis.
Therefore, it would be wrong to write a program that depended on this exception for its
correctness: the fail-fast behavior of iterators should be used only to detect bugs.
Java.Util HashSet
java.util
Class HashSet<E>
java.lang.Object
java.util.AbstractCollection<E>
java.util.AbstractSet<E>
java.util.HashSet<E>
Type Parameters:
E - the type of elements maintained by this set
All Implemented Interfaces:
Serializable, Cloneable, Iterable<E>, Collection<E>, Set<E>
Direct Known Subclasses:
JobStateReasons, LinkedHashSet
public class HashSet<E>
extends AbstractSet<E>
implements Set<E>, Cloneable, Serializable
This class implements the Set interface, backed by a hash table (actually a HashMap instance). It
makes no guarantees as to the iteration order of the set; in particular, it does not guarantee that
the order will remain constant over time. This class permits the null element.
This class offers constant time performance for the basic operations
(add, remove, contains and size), assuming the hash function disperses the elements properly
among the buckets. Iterating over this set requires time proportional to the sum of
the HashSet instance's size (the number of elements) plus the "capacity" of the
backing HashMap instance (the number of buckets). Thus, it's very important not to set the initial
capacity too high (or the load factor too low) if iteration performance is important.
Note that this implementation is not synchronized. If multiple threads access a hash set
concurrently, and at least one of the threads modifies the set, it must be synchronized externally.
This is typically accomplished by synchronizing on some object that naturally encapsulates the
set.
If no such object exists, the set should be "wrapped" using
the Collections.synchronizedSet method. This is best done at creation time, to prevent accidental
unsynchronized access to the set:
Set s = Collections.synchronizedSet(new HashSet(...));
The iterators returned by this class's iterator method are fail-fast: if the set is modified at any
time after the iterator is created, in any way except through the iterator's own remove method, the
iterator throws a ConcurrentModificationException.
4)
Bubble Sort
Bubble sort, sometimes incorrectly referred to as sinking sort, is a simple sorting algorithm that
works by repeatedly stepping through the list to be sorted, comparing each pair of adjacent items
and swapping them if they are in the wrong order. The pass through the list is repeated until no
swaps are needed, which indicates that the list is sorted. The algorithm gets its name from the
way smaller elements "bubble" to the top of the list. Because it only uses comparisons to operate
on elements, it is a comparison sort. Although the algorithm is simple, most of the other sorting
algorithms are more efficient for large lists.
Bubble sort has worst-case and average complexity both Θ(n²), where n is the number of items
being sorted. There exist many sorting algorithms with substantially better worst-case or average
complexity of O(n log n). Even other Θ(n²) sorting algorithms, such as insertion sort, tend to
have better performance than bubble sort. Therefore, bubble sort is not a practical sorting
algorithm when n is large.
The only significant advantage that bubble sort has over most other implementations, even
quicksort, but not insertion sort, is that the ability to detect a sorted list is efficiently
built into the algorithm. The performance of bubble sort on an already-sorted list (the best case) is
O(n). By contrast, most other algorithms, even those with better average-case complexity,
perform their entire sorting process on the set and thus are more complex. However, not only
does insertion sort have this mechanism too, but it also performs better on a list that is
substantially sorted (having a small number of inversions).
Bubble sort should be avoided for large collections. It is especially inefficient on a
reverse-ordered collection.
Step-by-step example
Let us take the array of numbers "5 1 4 2 8", and sort the array from lowest number to greatest
number using bubble sort. In each step, elements written in bold are being compared. Three
passes will be required.
First Pass:
( 5 1 4 2 8 ) → ( 1 5 4 2 8 ), Here, the algorithm compares the first two elements, and swaps
since 5 > 1.
( 1 5 4 2 8 ) → ( 1 4 5 2 8 ), Swap since 5 > 4
( 1 4 5 2 8 ) → ( 1 4 2 5 8 ), Swap since 5 > 2
( 1 4 2 5 8 ) → ( 1 4 2 5 8 ), Now, since these elements are already in order (8 > 5), the algorithm
does not swap them.
Second Pass:
( 1 4 2 5 8 ) → ( 1 4 2 5 8 )
( 1 4 2 5 8 ) → ( 1 2 4 5 8 ), Swap since 4 > 2
( 1 2 4 5 8 ) → ( 1 2 4 5 8 )
( 1 2 4 5 8 ) → ( 1 2 4 5 8 )
Now, the array is already sorted, but our algorithm does not know if it is completed. The
algorithm needs one whole pass without any swap to know it is sorted.
Third Pass:
( 1 2 4 5 8 ) → ( 1 2 4 5 8 )
( 1 2 4 5 8 ) → ( 1 2 4 5 8 )
( 1 2 4 5 8 ) → ( 1 2 4 5 8 )
( 1 2 4 5 8 ) → ( 1 2 4 5 8 )
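The passes traced above can be written as a short Java method; the early-exit flag is what makes the already-sorted best case O(n):

```java
// A minimal bubble sort matching the passes traced above; stops as soon
// as a full pass makes no swap.
import java.util.Arrays;

public class BubbleSort {
    static void bubbleSort(int[] a) {
        boolean swapped = true;
        for (int pass = a.length - 1; swapped && pass > 0; pass--) {
            swapped = false;
            for (int i = 0; i < pass; i++) {
                if (a[i] > a[i + 1]) {          // adjacent pair out of order
                    int t = a[i]; a[i] = a[i + 1]; a[i + 1] = t;
                    swapped = true;
                }
            }
        }
    }

    public static void main(String[] args) {
        int[] a = {5, 1, 4, 2, 8};              // the array from the example
        bubbleSort(a);
        System.out.println(Arrays.toString(a)); // [1, 2, 4, 5, 8]
    }
}
```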
5)
Insertion Sort
Insertion sort is a simple sorting algorithm that builds the final sorted array (or list) one item at
a time. It is much less efficient on large lists than more advanced algorithms such as quicksort,
heapsort, or merge sort. However, insertion sort provides several advantages:
Simple implementation
Efficient for (quite) small data sets
Adaptive (i.e., efficient) for data sets that are already substantially sorted: the time
complexity is O(n + d), where d is the number of inversions
More efficient in practice than most other simple quadratic (i.e., O(n²)) algorithms such
as selection sort or bubble sort; the best case (nearly sorted input) is O(n)
Stable; i.e., does not change the relative order of elements with equal keys
In-place; i.e., only requires a constant amount O(1) of additional memory space
3 7 4 9 5 2 6 1
3 7 4 9 5 2 6 1
3 4 7 9 5 2 6 1
3 4 7 9 5 2 6 1
3 4 5 7 9 2 6 1
2 3 4 5 7 9 6 1
2 3 4 5 6 7 9 1
1 2 3 4 5 6 7 9
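The trace above, where each element is shifted left until it reaches its place, corresponds to the following Java method:

```java
// Insertion sort: grow a sorted prefix one element at a time by shifting
// larger elements right and dropping the new key into the gap.
import java.util.Arrays;

public class InsertionSort {
    static void insertionSort(int[] a) {
        for (int i = 1; i < a.length; i++) {
            int key = a[i];                  // element to insert
            int j = i - 1;
            while (j >= 0 && a[j] > key) {   // shift larger elements right
                a[j + 1] = a[j];
                j--;
            }
            a[j + 1] = key;
        }
    }

    public static void main(String[] args) {
        int[] a = {3, 7, 4, 9, 5, 2, 6, 1};     // the sequence traced above
        insertionSort(a);
        System.out.println(Arrays.toString(a)); // [1, 2, 3, 4, 5, 6, 7, 9]
    }
}
```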
6)
Quick Sort
Quick sort, or partition-exchange sort, is a sorting algorithm developed by Tony Hoare that, on
average, makes O(n log n) comparisons to sort n items. In the worst case, it makes O(n2)
comparisons, though this behavior is rare. Quicksort is often faster in practice than other
O(n log n) algorithms.[1] Additionally, quicksort's sequential and localized memory references
work well with a cache. Quicksort is a comparison sort and, in efficient implementations, is not a
stable sort. Quicksort can be implemented with an in-place partitioning algorithm, so the entire
sort can be done with only O(log n) additional space used by the stack during the recursion.[2]
Quicksort is a divide and conquer algorithm. Quicksort first divides a large list into two smaller
sub-lists: the low elements and the high elements. Quicksort can then recursively sort the sublists.
The steps are:
1. Pick an element, called a pivot, from the list.
2. Reorder the list so that all elements with values less than the pivot come before the pivot,
while all elements with values greater than the pivot come after it (equal values can go
either way). After this partitioning, the pivot is in its final position. This is called the
partition operation.
3. Recursively apply the above steps to the sub-list of elements with smaller values and
separately the sub-list of elements with greater values.
The base cases of the recursion are lists of size zero or one, which never need to be sorted.
first and last are end points of region to sort
Performance: O(n log n) provided pivIndex is not always too close to an end
Performance: O(n²) when pivIndex is always near an end
Example:
i++;
j--;
while (j>p && a[j] > x)
j--;
if (i < j)
swap(a, i, j);
else
return j;
}
}
private static void swap(int[] a, int i, int j) {
// TODO Auto-generated method stub
int temp = a[i];
a[i] = a[j];
a[j] = temp;
}
}
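The partition fragment above is missing its opening lines; a self-contained sketch of the whole algorithm, using a Hoare-style partition with the first element as pivot (the class and method names are illustrative), looks like this:

```java
import java.util.Arrays;

public class QuickSort {
    static void quickSort(int[] a, int first, int last) {
        if (first < last) {
            int p = partition(a, first, last);
            quickSort(a, first, p);          // sub-list of smaller values
            quickSort(a, p + 1, last);       // sub-list of larger values
        }
    }

    // Hoare-style partition using a[first] as the pivot x: scan inward from
    // both ends, swapping out-of-place pairs, and return the split point.
    static int partition(int[] a, int first, int last) {
        int x = a[first];
        int i = first - 1, j = last + 1;
        while (true) {
            do { j--; } while (a[j] > x);
            do { i++; } while (a[i] < x);
            if (i < j) { int t = a[i]; a[i] = a[j]; a[j] = t; }
            else return j;
        }
    }

    public static void main(String[] args) {
        int[] a = {5, 3, 8, 1, 9, 2, 7};
        quickSort(a, 0, a.length - 1);
        System.out.println(Arrays.toString(a)); // [1, 2, 3, 5, 7, 8, 9]
    }
}
```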
7)
Merge Sort
Goal: Combine the two sorted sequences into one larger sorted sequence
Analysis of Merge
Overview:
Example
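Since the merge step and its analysis are only sketched here, a minimal Java implementation of the divide-and-conquer scheme (split, sort each half, then merge the two sorted runs) may help:

```java
import java.util.Arrays;

public class MergeSort {
    // Sort a[lo..hi) by splitting, sorting each half, and merging.
    static void mergeSort(int[] a, int lo, int hi) {
        if (hi - lo < 2) return;              // 0 or 1 element: already sorted
        int mid = (lo + hi) / 2;
        mergeSort(a, lo, mid);
        mergeSort(a, mid, hi);
        merge(a, lo, mid, hi);
    }

    // Combine the two sorted runs a[lo..mid) and a[mid..hi) into one.
    static void merge(int[] a, int lo, int mid, int hi) {
        int[] tmp = new int[hi - lo];
        int i = lo, j = mid, k = 0;
        while (i < mid && j < hi)
            tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++]; // <= keeps the sort stable
        while (i < mid) tmp[k++] = a[i++];
        while (j < hi)  tmp[k++] = a[j++];
        System.arraycopy(tmp, 0, a, lo, tmp.length);
    }

    public static void main(String[] args) {
        int[] a = {38, 27, 43, 3, 9, 82, 10};
        mergeSort(a, 0, a.length);
        System.out.println(Arrays.toString(a)); // [3, 9, 10, 27, 38, 43, 82]
    }
}
```

Each merge is linear in the length of the region, and there are O(log n) levels of splitting, giving the O(n log n) bound.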
8)
Heapsort
}
public static void exchange(int i, int j){
int t=a[i];
a[i]=a[j];
a[j]=t;
}
public static void sort(int []a0){
a=a0;
buildheap(a);
for(int i=n;i>0;i--){
exchange(0, i);
n=n-1;
maxheap(a, 0);
}
}
public static void main(String[] args) {
int []a1={4,1,3,2,16,9,10,14,8,7};
sort(a1);
for(int i=0;i<a1.length;i++){
System.out.print(a1[i] + " ");
}
}
}
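The listing above calls buildheap and maxheap without showing them. The following is one plausible, self-contained completion (the field and method names follow the fragment; the sift-down logic is a standard max-heap restore, not necessarily the original author's):

```java
public class HeapSortSketch {
    static int[] a;
    static int n;

    // Build a max heap bottom-up by sifting down every internal node.
    static void buildheap(int[] a0) {
        n = a0.length - 1;
        for (int i = n / 2; i >= 0; i--) maxheap(a0, i);
    }

    // Sift a0[i] down until the max-heap property holds within a0[0..n].
    static void maxheap(int[] a0, int i) {
        int left = 2 * i + 1, right = 2 * i + 2, largest = i;
        if (left <= n && a0[left] > a0[largest]) largest = left;
        if (right <= n && a0[right] > a0[largest]) largest = right;
        if (largest != i) {
            int t = a0[i]; a0[i] = a0[largest]; a0[largest] = t;
            maxheap(a0, largest);
        }
    }

    static void sort(int[] a0) {
        a = a0;
        buildheap(a0);
        for (int i = n; i > 0; i--) {
            int t = a0[0]; a0[0] = a0[i]; a0[i] = t;  // move current max to the end
            n = n - 1;                                 // shrink the heap
            maxheap(a0, 0);                            // restore the heap property
        }
    }

    public static void main(String[] args) {
        int[] a1 = {4, 1, 3, 2, 16, 9, 10, 14, 8, 7};
        sort(a1);
        for (int v : a1) System.out.print(v + " ");    // 1 2 3 4 7 8 9 10 14 16
        System.out.println();
    }
}
```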
9)
Radix Sort
Each key is first figuratively dropped into one level of buckets corresponding to the value of the
rightmost digit. Each bucket preserves the original order of the keys as the keys are dropped into
the bucket. There is a one-to-one correspondence between the number of buckets and the number
of values that can be represented by a digit. Then, the process repeats with the next neighbouring
digit until there are no more digits to process. In other words:
1. Take the least significant digit (or group of bits, both being examples of radices) of each
key.
2. Group the keys based on that digit, but otherwise keep the original order of keys. (This is
what makes the LSD radix sort a stable sort).
3. Repeat the grouping process with each more significant digit.
The sort in step 2 is usually done using bucket sort or counting sort, which are efficient in this
case since there are usually only a small number of digits.
An example
Original, unsorted list:
170, 45, 75, 90, 802, 2, 24, 66
Sorting by least significant digit (1s place) gives:
170, 90, 802, 2, 24, 45, 75, 66
Notice that we keep 802 before 2, because 802 occurred before 2 in the original list, and
similarly for pairs 170 & 90 and 45 & 75.
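The three steps above, with a stable counting sort as the per-digit grouping pass, can be sketched as follows (the first pass reproduces exactly the ordering shown above):

```java
import java.util.Arrays;

public class RadixSort {
    // LSD radix sort using a stable counting sort on each decimal digit.
    static void radixSort(int[] a) {
        int max = 0;
        for (int v : a) max = Math.max(max, v);
        for (int exp = 1; max / exp > 0; exp *= 10) {  // 1s, 10s, 100s, ...
            int[] count = new int[10];
            int[] out = new int[a.length];
            for (int v : a) count[(v / exp) % 10]++;   // bucket sizes per digit
            for (int d = 1; d < 10; d++) count[d] += count[d - 1];
            for (int i = a.length - 1; i >= 0; i--)    // backwards keeps it stable
                out[--count[(a[i] / exp) % 10]] = a[i];
            System.arraycopy(out, 0, a, 0, a.length);
        }
    }

    public static void main(String[] args) {
        int[] a = {170, 45, 75, 90, 802, 2, 24, 66};   // the list from the example
        radixSort(a);
        System.out.println(Arrays.toString(a)); // [2, 24, 45, 66, 75, 90, 170, 802]
    }
}
```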
Sorting Method    Worst Case    Average Case
Selection Sort    n^2           n^2
Bubble Sort       n^2           n^2
Insertion Sort    n^2           n^2
Merge Sort        n log n       n log n
Quick Sort        n^2           n log n
Radix Sort
Tree Sort         n^2           n log n
Heap Sort         n log n       n log n
UNIT - 4
Contents:
1)
A binary tree t is a finite collection of elements. When a binary tree is not empty, it has a root
element and the remaining elements (if any) are partitioned into two binary trees, which are
called the left and right sub trees of t.
The essential differences between a binary tree and a tree are:
Properties
Suppose we number the elements in a full binary tree of height h using the numbers 1 through
2^h - 1. We begin at level 1 and go down to level h. Within a level, the nodes are numbered left to
right. The elements of the full binary tree of the above fig. have been numbered in this way.
Complete Binary Tree: Now suppose we delete the k elements numbered 2^h - i, 1 <= i <= k.
The resulting binary tree, called a complete binary tree, is shown below:
Note that a full binary tree is a special case of a complete binary tree. Also, note that the height of
a complete binary tree that contains n elements is ⌈log2(n+1)⌉.
Left Skewed Binary Tree: if the right sub tree is missing in every node of a binary tree, then it
is known as a left skewed binary tree.
Right Skewed Binary Tree: if the left sub tree is missing in every node of a binary tree, then it
is known as a right skewed binary tree.
If i = 1, then this is the root of the binary tree. If i > 1, then the parent of this element has
been assigned the number ⌊i/2⌋.
If 2i > n, then this element has no left child. Otherwise, its left child has been assigned the
number 2i.
If 2i+1 > n, then this element has no right child. Otherwise, its right child has been assigned
the number 2i+1.
The binary tree is represented in an array by storing each element at the array position
corresponding to the number assigned to it. The following fig shows the array
representations for its binary trees; missing elements are represented by empty boxes.
A binary tree that has n elements may require an array of size up to 2^n (including position 0)
for its representation. This maximum size is needed when each element (except the root) of the
n-element binary tree is the right child of its parent.
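The three numbering rules above reduce to simple index arithmetic, which is the whole appeal of the array representation. A small sketch (names are illustrative):

```java
// Index arithmetic for the array representation, using the 1-based
// numbering described above.
public class ArrayTreeIndex {
    static int parent(int i)     { return i / 2; }      // valid for i > 1
    static int leftChild(int i)  { return 2 * i; }      // a child only if 2i <= n
    static int rightChild(int i) { return 2 * i + 1; }  // a child only if 2i+1 <= n

    public static void main(String[] args) {
        int n = 6;  // a complete binary tree with 6 elements
        for (int i = 2; i <= n; i++)
            System.out.println("parent of " + i + " is " + parent(i));
    }
}
```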
Linked Representation: The most popular way to represent a binary tree is by using links or
pointers. Each element is represented by a node that has exactly two pointer fields; let us call
these pointer fields leftChild and rightChild. In addition to these two pointer fields, each node
has a field named element.
Each pointer from a parent node to a child node represents an edge in the drawing of a binary tree.
Since an n-element binary tree has exactly n-1 edges, we are left with 2n - (n-1) = n+1 pointer
fields that have no value. These pointer fields are set to NULL. The following fig shows the linked
representation of the binary tree.
2)
Pre-order
In-order
Post-order
Level-order
Pre-order: in this type, the root is visited first, followed by the left sub tree and then the right sub
tree. Thus, the pre-order traversal of the sample tree shown below is
postorder(t->rightchild);
visit(t);
}
}
Level-order: elements are visited by level from top to bottom; within levels, elements are
visited from left to right. It is quite difficult to write a recursive function for level-order traversal,
as the correct data structure to use here is a queue and not a stack. Thus the level-order traversal
for the above specified tree is
D - B - E - A - C - F
The code to implement this traversal is:
template <class T>
void levelOrder (BinaryTreeNode<T> *t)
{
ArrayQueue<BinaryTreeNode<T> *> q;
while (t != NULL)
{
visit(t);
if (t->leftChild != NULL)
q.push(t->leftChild);
if (t->rightChild != NULL)
q.push(t->rightChild);
try
{
t = q.front();
}
catch (QueueEmpty)
{
return;
}
q.pop();
}
}
The time complexity of the above specified traversals is O(n).
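Since the rest of the unit works in Java, here is a hedged Java equivalent of the same queue-based traversal (the Node class and method names are illustrative, not from the listings above):

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class LevelOrder {
    static class Node {
        int key;
        Node left, right;
        Node(int key) { this.key = key; }
    }

    // Visit nodes level by level, left to right, using a queue (not a stack).
    static String levelOrder(Node t) {
        StringBuilder visited = new StringBuilder();
        Queue<Node> q = new ArrayDeque<>();
        while (t != null) {
            visited.append(t.key).append(' ');  // "visit" the node
            if (t.left != null) q.add(t.left);
            if (t.right != null) q.add(t.right);
            t = q.poll();                        // null once the queue is empty
        }
        return visited.toString().trim();
    }

    public static void main(String[] args) {
        Node root = new Node(1);
        root.left = new Node(2);
        root.right = new Node(3);
        root.left.left = new Node(4);
        System.out.println(levelOrder(root)); // 1 2 3 4
    }
}
```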
}
}
}
}
}
// All nodes are visited in ascending order
// Recursion is used to go to one node and
// then go to its child nodes and so forth
public void inOrderTraverseTree(Node focusNode) {
if (focusNode != null) {
inOrderTraverseTree(focusNode.leftChild);
System.out.println(focusNode);
inOrderTraverseTree(focusNode.rightChild);
}
}
}
public void postOrderTraverseTree(Node focusNode) {
if (focusNode != null) {
postOrderTraverseTree(focusNode.leftChild);
postOrderTraverseTree(focusNode.rightChild);
System.out.println(focusNode);
}
}
theTree.addNode(50, "Boss");
theTree.addNode(25, "Vice President");
theTree.addNode(15, "Office Manager");
theTree.addNode(30, "Secretary");
theTree.addNode(75, "Sales Manager");
theTree.addNode(85, "Salesman 1");
// Different ways to traverse binary trees
// theTree.inOrderTraverseTree(theTree.root);
// theTree.preorderTraverseTree(theTree.root);
// theTree.postOrderTraverseTree(theTree.root);
// Find the node with key 75
System.out.println("\nNode with the key 75");
System.out.println(theTree.findNode(75));
}
}
class Node {
int key;
String name;
Node leftChild;
Node rightChild;
Node(int key, String name) {
this.key = key;
this.name = name;
}
/*
* return name + " has the key " + key + "\nLeft Child: " + leftChild +
* "\nRight Child: " + rightChild + "\n";
*/
}
}
3)
When a binary tree is represented using pointers then pointers to empty sub tree are set
to NULL. That is, the left pointer of a node whose left child is an empty sub tree is normally set
to NULL. Similarly, the right pointer of a node whose right child is an empty sub tree is also set
to NULL. Thus, a large number of pointers are set to NULL. These NULL pointers could be used
in different and more effective way.
Assume that the left pointer of a node n is set to NULL because the left child of n is an
empty sub tree. Then the left pointer of n can instead be set to point to the in-order predecessor of n.
Similarly, if the right child of a node m is empty, then the right pointer of m can be set to point
to the in-order successor of m. The following fig shows a threaded binary tree; lines with
arrows represent threads.
In the above fig., links with arrow heads indicate links leading to in-order successors, while
other lines denote the usual links in a binary tree. Note that links with arrows and the
normal links indicate different relationships between nodes, and the links are no longer used to
G = ( V, E)
V= { 1, 2, 3, 4, 5}
E= { e1, e2, e3, e4, e5, e6}
Applications:
These are used in communication and transportation networks.
These are used in shape description in computer aided design and geometric information
system and scheduling system.
Directed Graph (Digraph)
A directed graph G consists of a finite set V, called the vertices or nodes, and E, a set of ordered
pairs called the edges of G. Self loops are allowed in a digraph.
Representation of Graphs:
There are two commonly used methods of representing the graphs
Adjacency Matrix:
In this method the graph is represented by an n x n matrix A such that
A[i][j] = 1 if there is an edge from vertex i to vertex j, and 0 otherwise.
If the digraph has weights, we can store the weights in the matrix.
Adjacency list:
In this method graph can be represented using linked list. In this method there is an array of
linked list for each vertex V in the graph. If the edge have weights then these weights can be
stored in the linked list elements.
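A minimal Java sketch of the adjacency-list representation for an unweighted, undirected graph (class and method names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

public class AdjListGraph {
    private final List<List<Integer>> adj;   // adj.get(v) = neighbours of v

    AdjListGraph(int vertices) {
        adj = new ArrayList<>();
        for (int v = 0; v < vertices; v++) adj.add(new ArrayList<>());
    }

    // Undirected edge: record each endpoint in the other's list.
    void addEdge(int u, int v) {
        adj.get(u).add(v);
        adj.get(v).add(u);
    }

    List<Integer> neighbours(int v) { return adj.get(v); }

    public static void main(String[] args) {
        AdjListGraph g = new AdjListGraph(4);
        g.addEdge(0, 1); g.addEdge(0, 2); g.addEdge(1, 3);
        System.out.println(g.neighbours(0)); // [1, 2]
    }
}
```

For weighted edges, each list entry would hold a (vertex, weight) pair instead of a bare vertex number.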
5)
A systematic follow-up of the edges of the graph in order to visit the vertices of the graph is
called graph searching.
There are two basic searching algorithms:
i) Breadth First Search (BFS)
ii) Depth First Search (DFS)
The difference between the two is the order in which they explore the unvisited edges of the graph.
Breadth first Search
BFS follows the following rules:
Select an unvisited node v, visit it, and make it the root of a BFS tree being formed. Its level is
called the current level.
For each node x in the current level, in the order in which the level's nodes were visited, visit all
unvisited neighbours of x. The newly visited nodes from this level form a new level. This new
level becomes the next current level.
Repeat step 2 for all unvisited vertices.
a[i][j]=Integer.parseInt(br.readLine());
a[j][i]=a[i][j];
}
a[i][i]=0;
}
System.out.println("\nOrder of accessed nodes : \n");
q[0] = 0; m[0] = 1;
int u;
int node=1;
int beg1=1, beg=0;
while(node>0)
{
u=q[beg];beg++;
System.out.println(" " +(u+1));
node--;
for(j=0;j<n;j++)
{
if(a[u][j]==1)
{
if(m[j]==0)
{
m[j]=1;
q[beg1]=j;
node++;
beg1++;
}
}
}
}
}
}
Depth First Search
It begins from a particular vertex; then one of its children is visited, and then a child of that
child is visited. This process continues until we reach the bottom of the graph.
Ex: consider the following graph
}
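The go-deep-then-backtrack behaviour described above falls out naturally from recursion. A hedged sketch on an adjacency matrix (the graph is illustrative; vertex labels are 1-based to match the BFS program's output):

```java
import java.util.ArrayList;
import java.util.List;

public class DFSDemo {
    // Visit u, then recursively visit each unvisited neighbour of u,
    // going as deep as possible before backtracking.
    static void dfs(int[][] a, boolean[] visited, int u, List<Integer> order) {
        visited[u] = true;
        order.add(u + 1);                        // 1-based label
        for (int v = 0; v < a.length; v++)
            if (a[u][v] == 1 && !visited[v])
                dfs(a, visited, v, order);
    }

    public static void main(String[] args) {
        int[][] a = {                            // a small illustrative graph
            {0, 1, 1, 0},
            {1, 0, 0, 1},
            {1, 0, 0, 1},
            {0, 1, 1, 0}
        };
        List<Integer> order = new ArrayList<>();
        dfs(a, new boolean[a.length], 0, order);
        System.out.println(order);               // [1, 2, 4, 3]
    }
}
```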
6)
(Steps 2 through 7 of the worked example were shown as figures.)
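Per the syllabus, this section covers the minimum cost spanning tree using Kruskal's algorithm. A minimal sketch with union-find (all names and the sample edge list are illustrative): sort the edges by weight and accept each edge that joins two different components.

```java
import java.util.Arrays;

public class Kruskal {
    static int[] parent;

    static int find(int x) {                 // root of x's component
        while (parent[x] != x) x = parent[x];
        return x;
    }

    // edges: {u, v, weight}; returns the total weight of the spanning tree.
    static int mstWeight(int vertices, int[][] edges) {
        Arrays.sort(edges, (e1, e2) -> e1[2] - e2[2]);  // cheapest edges first
        parent = new int[vertices];
        for (int v = 0; v < vertices; v++) parent[v] = v;
        int total = 0;
        for (int[] e : edges) {
            int ru = find(e[0]), rv = find(e[1]);
            if (ru != rv) {                  // endpoints in different components:
                parent[ru] = rv;             // no cycle, so accept the edge
                total += e[2];
            }
        }
        return total;
    }

    public static void main(String[] args) {
        int[][] edges = {{0,1,4}, {0,2,3}, {1,2,1}, {1,3,2}, {2,3,4}};
        System.out.println(mstWeight(4, edges)); // 6
    }
}
```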
7)
Dijkstra's algorithm is a solution to the single-source shortest path problem in graph theory. It
works on both directed and undirected graphs; however, all edges must have nonnegative
weights.
Approach: Greedy
Input: Weighted graph G = (V, E) and source vertex v ∈ V, such that all edge weights are
nonnegative
Output: Lengths of shortest paths (or the shortest paths themselves) from the given source vertex
v ∈ V to all other vertices
Dijkstra's algorithm Pseudocode
dist[s] := 0                                 (distance to the source is zero)
for all v ∈ V - {s}
    do dist[v] := ∞                          (all other distances start at infinity)
S := ∅                                       (S, the set of visited vertices, starts empty)
Q := V                                       (Q, the queue, initially holds all vertices)
while Q ≠ ∅
    do u := mindistance(Q, dist)             (select the element of Q with minimum distance)
       S := S ∪ {u}                          (add u to the list of visited vertices)
       for all v ∈ neighbors[u]
           do if dist[v] > dist[u] + w(u, v)         (if a new shortest path is found)
              then dist[v] := dist[u] + w(u, v)      (set the new value of the shortest path)
              (if desired, add traceback code)
return dist
An Example: (Steps 1 through 8 were shown as figures.)
The simple array-based implementation runs in O(|V|^2 + |E|) time.
For sparse graphs, or graphs with very few edges and many nodes, it can be implemented more
efficiently by storing the graph in an adjacency list and using a binary heap or priority queue. This
produces a running time of O((|V| + |E|) log |V|).
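A hedged sketch of the priority-queue variant using java.util.PriorityQueue (the sample graph and all names are illustrative; 0 in the matrix means "no edge", and stale queue entries are skipped instead of decreasing keys):

```java
import java.util.Arrays;
import java.util.PriorityQueue;

public class Dijkstra {
    // dist[v] = length of the shortest path from s to v; the graph is an
    // adjacency matrix of nonnegative weights, 0 meaning "no edge".
    static int[] shortestPaths(int[][] w, int s) {
        int n = w.length;
        int[] dist = new int[n];
        Arrays.fill(dist, Integer.MAX_VALUE);
        dist[s] = 0;
        // Queue entries are {vertex, distance}, ordered by distance.
        PriorityQueue<int[]> q = new PriorityQueue<>((x, y) -> x[1] - y[1]);
        q.add(new int[]{s, 0});
        while (!q.isEmpty()) {
            int[] top = q.poll();
            int u = top[0];
            if (top[1] > dist[u]) continue;       // stale entry: skip
            for (int v = 0; v < n; v++)
                if (w[u][v] > 0 && dist[u] + w[u][v] < dist[v]) {
                    dist[v] = dist[u] + w[u][v];  // new shortest path found
                    q.add(new int[]{v, dist[v]});
                }
        }
        return dist;
    }

    public static void main(String[] args) {
        int[][] w = {
            {0, 4, 1, 0},
            {4, 0, 2, 5},
            {1, 2, 0, 8},
            {0, 5, 8, 0}
        };
        System.out.println(Arrays.toString(shortestPaths(w, 0))); // [0, 3, 1, 8]
    }
}
```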