
AVL TREE

An AVL tree is a binary search tree in which the difference between the heights of the left and right subtrees of any node is at most one. The technique of balancing the height of binary trees was developed by Adelson-Velskii and Landis, and such trees are therefore known by the short form AVL tree, or as balanced (height-balanced) binary trees.

An AVL tree can be defined as follows:

Let T be a non-empty binary tree with TL and TR as its left and right subtrees. The tree is
height balanced if:

• TL and TR are height balanced


• |hL - hR| <= 1, where hL and hR are the heights of TL and TR

The balance factor of a node in a binary tree can take the value 1, -1, or 0, depending on whether the height of its left subtree is greater than, less than, or equal to the height of its right subtree.
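As a minimal sketch of how the balance factor is computed (the node struct and function names below are assumptions chosen for illustration, not a full AVL implementation):

#include <algorithm>
#include <cstdlib>

struct Node {
    int key;
    Node *left, *right;
};

// Height of a subtree: an empty subtree has height 0 here.
int height(Node *n) {
    if (n == nullptr) return 0;
    return 1 + std::max(height(n->left), height(n->right));
}

// Balance factor = height(left) - height(right); a node is height
// balanced when this value is -1, 0, or +1.
int balanceFactor(Node *n) {
    if (n == nullptr) return 0;
    return height(n->left) - height(n->right);
}

// A tree is an AVL tree when every node is height balanced.
bool isHeightBalanced(Node *n) {
    if (n == nullptr) return true;
    return std::abs(balanceFactor(n)) <= 1
        && isHeightBalanced(n->left)
        && isHeightBalanced(n->right);
}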

Priority Queue

In computer science, a priority queue is an abstract data type which is like a regular queue or stack data structure, but where additionally each element has a "priority"
associated with it. In a priority queue, an element with high priority is served before an element with
low priority. In some implementations, if two elements have the same priority, they are served
according to the order in which they were enqueued, while in other implementations, ordering of
elements with the same priority is undefined.

While priority queues are often implemented with heaps, they are conceptually distinct from heaps. A
priority queue is an abstract concept like "a list" or "a map"; just as a list can be implemented with
a linked list or an array, a priority queue can be implemented with a heap or a variety of other
methods such as an unordered array.
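As a quick illustration, the C++ standard library provides a heap-based priority queue. The snippet below is a minimal sketch showing that elements are served by priority (largest value first by default), not by insertion order:

#include <iostream>
#include <queue>

int main() {
    std::priority_queue<int> pq;   // max-heap by default
    pq.push(3);
    pq.push(10);
    pq.push(1);
    // Elements are served in priority order, not the order they were enqueued.
    while (!pq.empty()) {
        std::cout << pq.top() << " ";  // prints 10 3 1
        pq.pop();
    }
    std::cout << std::endl;
    return 0;
}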
Stack Data Structure (Introduction and Program)
Stack is a linear data structure which follows a particular order in which the operations are performed. The order may be LIFO (Last In First Out) or FILO (First In Last Out).
Mainly the following basic operations are performed on a stack:
• Push: Adds an item in the stack. If the stack is full, then it is said to be an Overflow
condition.
• Pop: Removes an item from the stack. The items are popped in the reversed order
in which they are pushed. If the stack is empty, then it is said to be an Underflow
condition.
• Peek or Top: Returns top element of stack.
• isEmpty: Returns true if stack is empty, else false.

A hash function
is any function that can be used to map data of arbitrary size onto data of a fixed size.
The values returned by a hash function are called hash values, hash codes, digests,
or simply hashes. Hash functions are often used in combination with a hash table, a
common data structure used in computer software for rapid data lookup. Hash functions
accelerate table or database lookup by detecting duplicated records in a large file. One
such application is finding similar stretches in DNA sequences. They are also useful
in cryptography. A cryptographic hash function allows one to easily verify whether some
input data map onto a given hash value, but if the input data is unknown it is deliberately
difficult to reconstruct it (or any equivalent alternatives) by knowing the stored hash
value. This is used for assuring integrity of transmitted data, and is the building block
for HMACs, which provide message authentication. Hash functions are related to (and
often confused with) checksums, check digits, fingerprints, lossy
compression, randomization functions, error-correcting codes, and ciphers. Although
the concepts overlap to some extent, each one has its own uses and requirements and
is designed and optimized differently. The HashKeeper database maintained by the
American National Drug Intelligence Center, for instance, is more aptly described as a
catalogue of file fingerprints than of hash values.
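As a small illustration of mapping data of arbitrary size to a fixed-size value, the sketch below uses a simple modular string hash for a table with a fixed number of buckets. It is only a toy example of the idea, not a cryptographic hash, and the function name and multiplier are assumptions chosen for illustration:

#include <iostream>
#include <string>

// Toy hash function: maps a string of any length to a bucket index
// in the range [0, buckets). Not suitable for cryptographic use.
unsigned int simpleHash(const std::string &key, unsigned int buckets) {
    unsigned int h = 0;
    for (char c : key) {
        h = h * 31 + static_cast<unsigned char>(c);  // 31 is a common multiplier
    }
    return h % buckets;
}

int main() {
    std::cout << simpleHash("apple", 16) << std::endl;
    std::cout << simpleHash("banana", 16) << std::endl;
    return 0;
}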
Describe Big O notation and Ω (Omega) notation
Big O notation is a mathematical notation that describes the limiting behavior of
a function when the argument tends towards a particular value or infinity. It is a member
of a family of notations invented by Paul Bachmann,[1] Edmund Landau,[2] and others,
collectively called Bachmann–Landau notation or asymptotic notation.
In computer science, big O notation is used to classify algorithms according to how their
running time or space requirements grow as the input size grows.[3] In analytic number
theory, big O notation is often used to express a bound on the difference between
an arithmetical function and a better understood approximation; a famous example of
such a difference is the remainder term in the prime number theorem.
Big O notation characterizes functions according to their growth rates: different
functions with the same growth rate may be represented using the same O notation.
The letter O is used because the growth rate of a function is also referred to as
the order of the function. A description of a function in terms of big O notation usually
only provides an upper bound on the growth rate of the function. Associated with big O
notation are several related notations, using the symbols o, Ω, ω, and Θ, to describe
other kinds of bounds on asymptotic growth rates
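As a small code-level illustration of growth rates (the function names below are just examples), the first function does work proportional to n and is therefore O(n), while the second does work proportional to n squared because of the nested loops and is therefore O(n^2):

#include <vector>

// O(n): one pass over the input.
long long sumAll(const std::vector<int> &a) {
    long long s = 0;
    for (int x : a) s += x;
    return s;
}

// O(n^2): for each element, scan all elements again.
long long sumOfAllPairProducts(const std::vector<int> &a) {
    long long s = 0;
    for (size_t i = 0; i < a.size(); ++i)
        for (size_t j = 0; j < a.size(); ++j)
            s += (long long)a[i] * a[j];
    return s;
}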
Explain the process of converting a tree into a binary tree
A general tree is an unordered hierarchical data structure with an unlimited number of child nodes for each parent. A binary tree only has a maximum of two children per node, commonly called the left node and the right node.

Converting a general tree into a binary tree starts at the root node and copies each node into the binary tree, with the rule that no node, including the root, may have more than two children. The standard way to satisfy this rule is the left-child/right-sibling representation: a node's first child becomes its left child in the binary tree, and the node's next sibling becomes its right child.

General Trees and Conversion to Binary Trees

General trees are those in which the number of subtrees for any node is not required to be 0, 1, or 2. The tree may be highly structured and therefore have exactly 3 subtrees per node, in which case it is called a ternary tree. However, it is often the case that the number of subtrees for any node is variable: some nodes may have 1 or no subtrees, others may have 3, some 4, or any other combination. The ternary tree is just a special case of a general tree (as is true of the binary tree).

General trees can be represented as ADT's in whatever form they exist.
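A minimal C++ sketch of the left-child/right-sibling conversion described above; the struct and function names are illustrative assumptions:

// General tree node: firstChild points to the leftmost child,
// nextSibling points to the node's next sibling.
struct GeneralNode {
    int data;
    GeneralNode *firstChild;
    GeneralNode *nextSibling;
};

// Binary tree node produced by the left-child/right-sibling conversion.
struct BinaryNode {
    int data;
    BinaryNode *left;   // was firstChild in the general tree
    BinaryNode *right;  // was nextSibling in the general tree
};

// Recursively copy a general tree into its binary-tree form.
BinaryNode *convert(GeneralNode *g) {
    if (g == nullptr) return nullptr;
    BinaryNode *b = new BinaryNode{g->data, nullptr, nullptr};
    b->left = convert(g->firstChild);     // first child becomes left child
    b->right = convert(g->nextSibling);   // next sibling becomes right child
    return b;
}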


Write an algorithm to implement a stack, using an array

#include<iostream>
#define SIZE 100
#define NO_ELEMENT -999999
using namespace std;

class Stack {
int arr[SIZE]; // array to store Stack elements
int top;
public:
Stack() {
top = -1;
}
void push(int); // push an element into Stack
void pop(); // pop the top element from Stack
int topElement(); // get the top element
void display(); // display Stack elements from top to bottom
};

void Stack :: push(int data) {


if (top == SIZE - 1) { // no space left in Stack (array is full)
cout << "Stack Overflow ... " << endl;
return;
}
else {
top++; // increment top
arr[top] = data; // insert the data onto top of Stack
}
}
void Stack :: pop() {
if (top == -1) { // Stack is empty
cout << "Stack is empty ... " << endl;
}
else {
top--; // decrement top (remove the top element)
}
}
int Stack :: topElement() {
if (top == -1) { // Stack is empty
return NO_ELEMENT;
}
else {
return arr[top];
}
}
void Stack :: display() {
cout << "Displaying Stack Elements :- " << endl;
int i;
for (i = top; i >= 0; i--) {
cout << arr[i] << " ";
}
cout << endl;
}
int main() {
Stack s;
s.push(12);
s.push(5);
s.push(13);
s.display();
cout << "Top Element : " << s.topElement() << endl;
s.pop(); // removing the top element i.e 13
s.push(7);
s.display();
return 0;
}
What is linear search? Write the linear search algorithm and find its time complexity

Linear search (also known as sequential search) is an algorithm for finding a target value within a list. It sequentially checks each element of the list for the target value until a match is found or until all the elements have been searched. This is one of the most basic search algorithms and is directly inspired by how we search for things in everyday life.

Algorithm
Steps involved in this algorithm are:

• Step 1: Select the first element as the current element.


• Step 2: Compare the current element with the target element. If matches, then go to step 5.
• Step 3: If there is a next element, then set current element to next element and go to Step 2.
• Step 4: Target element not found. Go to Step 6.
• Step 5: Target element found and return location.
• Step 6: Exit process.
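A minimal C++ version of these steps (the function and variable names are chosen here for illustration):

#include <iostream>

// Returns the index of target in arr[0..n-1], or -1 if it is not present.
int linearSearch(const int arr[], int n, int target) {
    for (int i = 0; i < n; i++) {      // Steps 1-3: examine elements one by one
        if (arr[i] == target) {
            return i;                  // Step 5: target found, return its location
        }
    }
    return -1;                         // Step 4: target not found
}

int main() {
    int a[] = {7, 3, 9, 14, 2};
    std::cout << linearSearch(a, 5, 9) << std::endl;   // prints 2
    std::cout << linearSearch(a, 5, 8) << std::endl;   // prints -1
    return 0;
}

Time complexity: in the best case the target is the first element (one comparison, O(1)); in the worst case every element is examined (n comparisons, O(n)); on average about n/2 comparisons are needed. The overall time complexity is therefore O(n), and the extra space used is constant, O(1).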

Explain Indexed Sequential File Organization


When there is need to access records sequentially by some key value and also to access records directly by the same key value, the collection of records may be organized in an effective manner called Indexed Sequential Organization.
You must be familiar with the search process for a word in a language dictionary. The data in the dictionary is stored in a sequential manner; however, an index is provided in the form of thumb tabs. To search for a word we do not search sequentially: we access the index, that is, the appropriate thumb tab, locate an approximate location for the word, and then proceed to find the word sequentially.
To implement the concept of indexed sequential file organization, we consider an approach in which the index part and the data part reside in separate files. The index file has a tree structure and the data file has a sequential structure. Since the data file is sequenced, it is not necessary for the index to have an entry for each record. The following figure shows a sequential file with a two-level index.
Level 1 of the index holds an entry for each three-record section of the main file. Level 2 indexes level 1 in the same way.
When new records are inserted into the data file, the sequence of records needs to be preserved and the index must be updated accordingly.
Two approaches used to implement indexes are static indexes and dynamic indexes.
As the main data file changes due to insertions and deletions, the contents of a static index may change but its structure does not. In the dynamic indexing approach, both the contents and the structure of the index may change as records are inserted and deleted; B-trees are a commonly used dynamic index structure.
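As a rough sketch of the lookup idea only (not a full file implementation), an index entry can store the highest key in a data block together with that block's position; a lookup first searches the index and then scans the selected block sequentially. All names below are illustrative assumptions:

#include <vector>

// One index entry: the highest key stored in a data block, plus the
// block number where that block starts in the data file.
struct IndexEntry {
    int highestKey;
    int blockNumber;
};

// Find the block that may contain 'key' by scanning the (sorted) index;
// the caller then reads that block and scans its records sequentially.
int findBlock(const std::vector<IndexEntry> &index, int key) {
    for (const IndexEntry &e : index) {
        if (key <= e.highestKey) return e.blockNumber;
    }
    return -1; // key is larger than every indexed key
}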
Write Prim's algorithm and construct a minimum cost spanning tree on the following network
using Prim's algorithm

Like Kruskal’s algorithm, Prim’s algorithm is also a Greedy algorithm. It starts with an
empty spanning tree. The idea is to maintain two sets of vertices. The first set contains
the vertices already included in the MST, the other set contains the vertices not yet
included. At every step, it considers all the edges that connect the two sets, and picks
the minimum weight edge from these edges. After picking the edge, it moves the other
endpoint of the edge to the set containing MST.
A group of edges that connects two sets of vertices in a graph is called a cut in graph theory. So, at every step of Prim's algorithm, we find a cut (between the set of vertices already included in the MST and the remaining vertices), pick the minimum weight edge crossing the cut, and include that edge's endpoint in the MST set (the set that contains the already included vertices).
How does Prim’s Algorithm work? The idea behind Prim’s algorithm is simple: a spanning tree means all vertices must be connected. So the two disjoint subsets of vertices (discussed above) must be connected to make a spanning tree, and they must be connected with the minimum weight edge to make it a minimum spanning tree.
Algorithm
1) Create a set mstSet that keeps track of vertices already included in MST.
2) Assign a key value to all vertices in the input graph. Initialize all key values as
INFINITE. Assign key value as 0 for the first vertex so that it is picked first.
3) While mstSet doesn’t include all vertices
….a) Pick a vertex u which is not there in mstSet and has minimum key value.
….b) Include u to mstSet.
….c) Update key value of all adjacent vertices of u. To update the key values, iterate
through all adjacent vertices. For every adjacent vertex v, if weight of edge u-v is less
than the previous key value of v, update the key value as weight of u-v
The idea of using key values is to pick the minimum weight edge from the cut. The key values are used only for vertices which are not yet included in the MST; the key value of such a vertex indicates the minimum weight edge connecting it to the set of vertices already included in the MST.
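A minimal C++ sketch of these steps for a graph stored as an adjacency matrix; the number of vertices, the INT_MAX sentinel and the sample graph in main are assumptions for illustration, and a 0 entry in the matrix means "no edge":

#include <iostream>
#include <climits>

const int V = 5;  // number of vertices (assumed for this example)

int primMST(const int graph[V][V]) {
    int key[V];        // minimum weight edge connecting each vertex to the MST
    bool inMST[V];     // mstSet: vertices already included in the MST
    for (int i = 0; i < V; i++) { key[i] = INT_MAX; inMST[i] = false; }
    key[0] = 0;        // start from vertex 0 so that it is picked first
    int total = 0;

    for (int count = 0; count < V; count++) {
        // Pick the vertex not yet in the MST with the smallest key value.
        int u = -1;
        for (int v = 0; v < V; v++)
            if (!inMST[v] && (u == -1 || key[v] < key[u])) u = v;
        inMST[u] = true;
        total += key[u];

        // Update key values of the vertices adjacent to u.
        for (int v = 0; v < V; v++)
            if (graph[u][v] && !inMST[v] && graph[u][v] < key[v])
                key[v] = graph[u][v];
    }
    return total;  // weight of the minimum spanning tree
}

int main() {
    int graph[V][V] = {
        {0, 2, 0, 6, 0},
        {2, 0, 3, 8, 5},
        {0, 3, 0, 0, 7},
        {6, 8, 0, 0, 9},
        {0, 5, 7, 9, 0}};
    std::cout << "MST weight: " << primMST(graph) << std::endl;  // prints 16
    return 0;
}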
Kruskal’s Minimum Spanning Tree Algorithm | Greedy Algo-2

What is Minimum Spanning Tree?


Given a connected and undirected graph, a spanning tree of that graph is a subgraph
that is a tree and connects all the vertices together. A single graph can have many
different spanning trees. A minimum spanning tree (MST) or minimum weight spanning
tree for a weighted, connected and undirected graph is a spanning tree with weight less
than or equal to the weight of every other spanning tree. The weight of a spanning tree
is the sum of weights given to each edge of the spanning tree.
How many edges does a minimum spanning tree have?
A minimum spanning tree has (V – 1) edges where V is the number of vertices in the
given graph.
What are the applications of Minimum Spanning Tree?
Applications of MST include network design (for example laying telephone, electrical or cable networks), approximation algorithms for problems such as the travelling salesman problem, and cluster analysis.
Below are the steps for finding MST using Kruskal’s algorithm
1. Sort all the edges in non-decreasing order of their weight.
2. Pick the smallest edge. Check if it forms a cycle with the spanning tree formed so far.
If cycle is not formed, include this edge. Else, discard it.
3. Repeat step#2 until there are (V-1) edges in the spanning tree.
The algorithm is a Greedy Algorithm. The Greedy Choice is to pick the smallest weight
edge that does not cause a cycle in the MST constructed so far. Let us understand it
with an example: Consider the below input graph.
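A minimal C++ sketch of these steps, using a simple union-find (disjoint set) structure to detect cycles; the edge list in main is an assumed small example, not the figure's graph:

#include <algorithm>
#include <iostream>
#include <numeric>
#include <vector>

struct Edge { int u, v, w; };

// Union-find with path compression, used to detect cycles.
int findRoot(std::vector<int> &parent, int x) {
    if (parent[x] != x) parent[x] = findRoot(parent, parent[x]);
    return parent[x];
}

int kruskalMST(int V, std::vector<Edge> edges) {
    // Step 1: sort all edges in non-decreasing order of weight.
    std::sort(edges.begin(), edges.end(),
              [](const Edge &a, const Edge &b) { return a.w < b.w; });
    std::vector<int> parent(V);
    std::iota(parent.begin(), parent.end(), 0);  // each vertex starts in its own set

    int total = 0, used = 0;
    for (const Edge &e : edges) {
        int ru = findRoot(parent, e.u), rv = findRoot(parent, e.v);
        if (ru != rv) {                   // Step 2: include the edge only if it forms no cycle
            parent[ru] = rv;
            total += e.w;
            if (++used == V - 1) break;   // Step 3: stop once V-1 edges are chosen
        }
    }
    return total;
}

int main() {
    std::vector<Edge> edges = {{0,1,10},{0,2,6},{0,3,5},{1,3,15},{2,3,4}};
    std::cout << "MST weight: " << kruskalMST(4, edges) << std::endl;  // prints 19
    return 0;
}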

(This walkthrough illustrates Prim's algorithm on a sample graph.) The set mstSet is initially empty and the keys assigned to the vertices are {0, INF, INF, INF, INF, INF, INF, INF}, where INF indicates infinity. Now pick the vertex with the minimum key value. Vertex 0 is picked; include it in mstSet, so mstSet becomes {0}. After including it in mstSet, update the key values of its adjacent vertices. The adjacent vertices of 0 are 1 and 7, and their key values are updated to 4 and 8. The following subgraph shows the vertices and their key values; only vertices with finite key values are shown, and the vertices included in the MST are shown in green.
Write a non-recursive algorithm for inorder traversal of a Binary Tree.
Step 1 Creates an empty stack: S = NULL

Step 2 sets current as address of root: current -> 1

Step 3 Pushes the current node and set current = current->left until
current is NULL
current -> 1
push 1: Stack S -> 1
current -> 2
push 2: Stack S -> 2, 1
current -> 4
push 4: Stack S -> 4, 2, 1
current = NULL

Step 4 pops from S


a) Pop 4: Stack S -> 2, 1
b) print "4"
c) current = NULL /*right of 4 */ and go to step 3
Since current is NULL step 3 doesn't do anything.

Step 4 pops again.


a) Pop 2: Stack S -> 1
b) print "2"
c) current -> 5/*right of 2 */ and go to step 3

Step 3 pushes 5 to stack and makes current NULL


Stack S -> 5, 1
current = NULL

Step 4 pops from S


a) Pop 5: Stack S -> 1
b) print "5"
c) current = NULL /*right of 5 */ and go to step 3
Since current is NULL step 3 doesn't do anything

Step 4 pops again.


a) Pop 1: Stack S -> NULL
b) print "1"
c) current -> 3 /*right of 1 */

Step 3 pushes 3 to stack and makes current NULL


Stack S -> 3
current = NULL

Step 4 pops from S


a) Pop 3: Stack S -> NULL
b) print "3"
c) current = NULL /*right of 3 */

Traversal is done now as stack S is empty and current is NULL
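The same steps in C++, as a minimal sketch using the standard library stack; the node struct is assumed for illustration, and main builds the tree used in the walkthrough above:

#include <iostream>
#include <stack>

struct Node {
    int data;
    Node *left, *right;
    Node(int d) : data(d), left(nullptr), right(nullptr) {}
};

// Non-recursive inorder traversal following steps 1-4 above.
void inorder(Node *root) {
    std::stack<Node *> s;              // Step 1: create an empty stack
    Node *current = root;              // Step 2: start at the root
    while (current != nullptr || !s.empty()) {
        while (current != nullptr) {   // Step 3: push and move left
            s.push(current);
            current = current->left;
        }
        current = s.top(); s.pop();    // Step 4: pop the top node
        std::cout << current->data << " ";  // visit it
        current = current->right;      // then traverse its right subtree
    }
    std::cout << std::endl;
}

int main() {
    // Tree used in the walkthrough: 1 with children 2 and 3; 2 with children 4 and 5.
    Node *root = new Node(1);
    root->left = new Node(2);
    root->right = new Node(3);
    root->left->left = new Node(4);
    root->left->right = new Node(5);
    inorder(root);   // prints 4 2 5 1 3
    return 0;
}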


Dijkstra's Algorithm
Dijkstra's algorithm is an algorithm we can use to find shortest distances or minimum costs
depending on what is represented in a graph. You're basically working backwards from the end to
the beginning, finding the shortest leg each time. The steps to this algorithm are as follows:
Step 1: Start at the ending vertex by marking it with a distance of 0, because it's 0 units from the
end. Call this vertex your current vertex, and put a circle around it to indicate this.

Step 2: Identify all of the vertices that are connected to the current vertex with an edge. Calculate
their distance to the end by adding the weight of the edge to the mark on the current vertex. Mark
each of the vertices with their corresponding distance, but only change a vertex's mark if it's less
than a previous mark. Each time you mark the starting vertex with a mark, keep track of the path that
resulted in that mark.

Step 3: Label the current vertex as visited by putting an X over it. Once a vertex is visited, we won't
look at it again.

Step 4: Of the vertices you just marked, find the one with the smallest mark, and make it your current
vertex. Now, you can start again from step 2.

Step 5: Once you've labeled the beginning vertex as visited - stop. The distance of the shortest path
is the mark of the starting vertex, and the shortest path is the path that resulted in that mark.
Let's now consider finding the shortest path from your house to Divya's house to illustrate this
algorithm.
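A minimal C++ sketch of Dijkstra's algorithm on an adjacency matrix. This version starts from a chosen source vertex and computes the shortest distance to every other vertex; starting from the destination, as in the steps above, gives the same distances on an undirected graph. The number of vertices and the sample graph are assumptions:

#include <iostream>
#include <climits>

const int V = 5;  // number of vertices (assumed for this example)

void dijkstra(const int graph[V][V], int src) {
    int dist[V];      // dist[v] = current best-known distance from src to v (the "mark")
    bool visited[V];  // the "X" mark in the steps above
    for (int i = 0; i < V; i++) { dist[i] = INT_MAX; visited[i] = false; }
    dist[src] = 0;    // the source is 0 units from itself

    for (int count = 0; count < V; count++) {
        // Pick the unvisited vertex with the smallest mark.
        int u = -1;
        for (int v = 0; v < V; v++)
            if (!visited[v] && (u == -1 || dist[v] < dist[u])) u = v;
        if (dist[u] == INT_MAX) break;  // remaining vertices are unreachable
        visited[u] = true;

        // Update the marks of all neighbours of u, keeping the smaller value.
        for (int v = 0; v < V; v++)
            if (graph[u][v] && !visited[v] && dist[u] + graph[u][v] < dist[v])
                dist[v] = dist[u] + graph[u][v];
    }
    for (int v = 0; v < V; v++)
        std::cout << "distance to " << v << " = " << dist[v] << std::endl;
}

int main() {
    int graph[V][V] = {
        {0, 4, 0, 0, 8},
        {4, 0, 8, 0, 11},
        {0, 8, 0, 7, 0},
        {0, 0, 7, 0, 2},
        {8, 11, 0, 2, 0}};
    dijkstra(graph, 0);
    return 0;
}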
Write an algorithm for array implementation of a Circular Queue.

#include<iostream>
#define SIZE 5
using namespace std;

class cQueue {
int arr[SIZE];
int front, rear;
public :
cQueue() {
front = rear = -1;
}
void enqueue(int); // insert an element into queue
void dequeue(); // Remove the front element from queue
void display(); // display the queue elements
};

void cQueue :: enqueue(int data) {


if (rear == -1) { // queue is empty
front = rear = 0;
arr[front] = data;
}
else {
int pos = (rear + 1) % SIZE;
if (pos == front) { // queue is full
cout << "No space in queue ..." << endl;
return;
}
else {
rear = pos; // update rear
arr[pos] = data; // insert the data in queue
}
}
}

void cQueue :: dequeue() {


if (front == -1) { // queue is empty
cout << "Queue is empty ... " << endl;
return;
}
else {
if (front == rear) { // only one element in queue
front = rear = -1;
}
else {
front = (front + 1) % SIZE; // shift front by 1 position
}
}
}

void cQueue :: display() {


int i;
cout << "front : " << front << " rear : " << rear << endl;
cout << "Current circular Queue Elements ( front to rear ) :- " << endl;
if (front == -1) {
cout << "Queue is empty ... " << endl;
return;
}
else {
i = front;
do {
cout << arr[i] << " ";
i = (i + 1) % SIZE;
} while(i != rear);
cout << arr[rear] << endl;
}
}

int main() {
cQueue q;
q.enqueue(10);
q.enqueue(20);
q.enqueue(30);
q.display();
q.dequeue(); // removes the front element i.e. 10
q.display();
return 0;
}
Depth First Search (DFS) and Breadth First Search (BFS) Algorithms
DFS and BFS are common methods of graph traversal, which is the process of visiting every vertex of a graph. Stacks and queues are two additional concepts used in the DFS and BFS algorithms. A stack is a type of data storage in which only the last element added to the stack can be retrieved. It is like a stack of plates where only the top plate can be taken from the stack. The three stack operations are:
• Push – put an element on the stack
• Peek – look at the top element on the stack, but do not remove it
• Pop – take the top element off the stack
A queue is a type of data storage in which the elements are accessed in the order they were added. It is like a cafeteria line where the person at the front of the line is next. The two queue operations are:
• Enqueue – add an element to the end of the queue
• Dequeue – remove an element from the start of the queue
Considering a given node as the parent and connected nodes as children, DFS will visit the child vertices before visiting siblings, using this algorithm:
Mark the starting node of the graph as visited and push it onto the stack
While the stack is not empty
    Peek at the top node on the stack
    If there is an unvisited child of that node
        Mark the child as visited and push the child node onto the stack
    Else
        Pop the top node off the stack
BFS will visit the sibling vertices before the child vertices, using this algorithm:
Mark the starting node of the graph as visited and enqueue it into the queue
While the queue is not empty
    Dequeue the next node from the queue to become the current node
    While there is an unvisited child of the current node
        Mark the child as visited and enqueue the child node into the queue
Examples of the DFS and BFS algorithms are given next.

Example of the Depth First Search (DFS) Algorithm
Mark the starting node of the graph as visited and push it onto the stack
While the stack is not empty
    Peek at the top node on the stack
    If there is an unvisited child of that node
        Mark the child as visited and push the child node onto the stack
    Else
        Pop the top node off the stack
Example using the graph to the right. The stack push, peek and pop operations access the element on the right (the top of the stack is written on the right).

Action                                       | Stack          | Unvisited Nodes  | Visited Nodes
Start with node 1                            | 1              | 2, 3, 4, 5, 6    | 1
Peek: node 1 has unvisited children 2 and 5  | 1              | 2, 3, 4, 5, 6    | 1
Mark node 2 visited                          | 1, 2           | 3, 4, 5, 6       | 1, 2
Peek: node 2 has unvisited children 3 and 5  | 1, 2           | 3, 4, 5, 6       | 1, 2
Mark node 3 visited                          | 1, 2, 3        | 4, 5, 6          | 1, 2, 3
Peek: node 3 has unvisited child 4           | 1, 2, 3        | 4, 5, 6          | 1, 2, 3
Mark node 4 visited                          | 1, 2, 3, 4     | 5, 6             | 1, 2, 3, 4
Peek: node 4 has unvisited child 5           | 1, 2, 3, 4     | 5, 6             | 1, 2, 3, 4
Mark node 5 visited                          | 1, 2, 3, 4, 5  | 6                | 1, 2, 3, 4, 5
Peek: node 5 has no unvisited children       | 1, 2, 3, 4, 5  | 6                | 1, 2, 3, 4, 5
Pop node 5 off the stack                     | 1, 2, 3, 4     | 6                | 1, 2, 3, 4, 5
Peek: node 4 has unvisited child 6           | 1, 2, 3, 4     | 6                | 1, 2, 3, 4, 5
Mark node 6 visited                          | 1, 2, 3, 4, 6  | (none)           | 1, 2, 3, 4, 5, 6

There are no more unvisited nodes, so the remaining nodes will be popped from the stack and the algorithm will terminate.
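A minimal C++ sketch of both traversals, assuming an adjacency-list representation; the small example graph in main is an assumption, not the graph from the figure:

#include <iostream>
#include <queue>
#include <stack>
#include <vector>

// DFS: visit a child before a sibling, using an explicit stack.
void dfs(const std::vector<std::vector<int>> &adj, int start) {
    std::vector<bool> visited(adj.size(), false);
    std::stack<int> s;
    visited[start] = true;
    s.push(start);
    std::cout << start << " ";
    while (!s.empty()) {
        int node = s.top();                       // peek at the top node
        int child = -1;
        for (int next : adj[node])
            if (!visited[next]) { child = next; break; }
        if (child != -1) {                        // an unvisited child exists
            visited[child] = true;
            std::cout << child << " ";
            s.push(child);
        } else {
            s.pop();                              // no unvisited child: pop
        }
    }
    std::cout << std::endl;
}

// BFS: visit all siblings before their children, using a queue.
void bfs(const std::vector<std::vector<int>> &adj, int start) {
    std::vector<bool> visited(adj.size(), false);
    std::queue<int> q;
    visited[start] = true;
    q.push(start);
    while (!q.empty()) {
        int node = q.front(); q.pop();
        std::cout << node << " ";
        for (int next : adj[node])
            if (!visited[next]) { visited[next] = true; q.push(next); }
    }
    std::cout << std::endl;
}

int main() {
    // Undirected graph with edges 0-1, 0-2, 1-3, 2-3.
    std::vector<std::vector<int>> adj = {{1, 2}, {0, 3}, {0, 3}, {1, 2}};
    dfs(adj, 0);   // prints 0 1 3 2
    bfs(adj, 0);   // prints 0 1 2 3
    return 0;
}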
Floyd-Warshall Algorithm-

• Floyd-Warshall Algorithm is an algorithm for solving the All Pairs Shortest Path problem, which gives the shortest path between every pair of vertices of the given graph.
• Floyd-Warshall Algorithm is an example of dynamic programming.
• The main advantage of Floyd-Warshall Algorithm is that it is extremely simple and easy to
implement.

Algorithm-
Create a |V| x |V| matrix // It represents the distance between every pair of vertices as
given
For each cell (i,j) in M do-
if i = = j
M[ i ][ j ] = 0 // For all diagonal elements, value = 0
if (i , j) is an edge in E
M[ i ][ j ] = weight(i,j) // If there exists a direct edge between the vertices, value = weight of
edge
else
M[ i ][ j ] = infinity // If there is no direct edge between the vertices, value = ∞
for k from 1 to |V|
    for i from 1 to |V|
        for j from 1 to |V|
            if M[ i ][ k ] + M[ k ][ j ] < M[ i ][ j ]
                M[ i ][ j ] = M[ i ][ k ] + M[ k ][ j ] // relax: path from i to j through intermediate vertex k

Sparse matrix for 3-tuple method using an array
In this article, we are going to learn how to implement a sparse matrix with the 3-tuple method using an array.
A sparse matrix is a matrix in which most of the elements are zero. By contrast, if most of the elements are nonzero, then the matrix is considered dense. The number of zero-valued elements divided by the total number of elements is called the sparsity of the matrix (which is equal to 1 minus the density of the matrix). To keep track of the non-zero elements in a sparse matrix we use the 3-tuple method with an array. The elements of the first row represent the number of rows, columns and non-zero values in the sparse matrix. The elements of the other rows give information about the location and value of the non-zero elements.
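A minimal C++ sketch of building the 3-tuple representation from a 2D array; the matrix sizes are assumptions, and the example matrix is the one shown later in this section:

#include <iostream>

int main() {
    const int ROWS = 4, COLS = 5;
    int matrix[ROWS][COLS] = {
        {0, 0, 3, 0, 4},
        {0, 0, 5, 7, 0},
        {0, 0, 0, 0, 0},
        {0, 2, 6, 0, 0}};

    // Count the non-zero elements first.
    int nonZero = 0;
    for (int i = 0; i < ROWS; i++)
        for (int j = 0; j < COLS; j++)
            if (matrix[i][j] != 0) nonZero++;

    // Build the 3-tuple table: row 0 holds (rows, cols, non-zero count),
    // every other row holds (row index, column index, value).
    int triple[ROWS * COLS + 1][3];
    triple[0][0] = ROWS; triple[0][1] = COLS; triple[0][2] = nonZero;
    int k = 1;
    for (int i = 0; i < ROWS; i++)
        for (int j = 0; j < COLS; j++)
            if (matrix[i][j] != 0) {
                triple[k][0] = i;
                triple[k][1] = j;
                triple[k][2] = matrix[i][j];
                k++;
            }

    for (int r = 0; r <= nonZero; r++)
        std::cout << triple[r][0] << " " << triple[r][1] << " " << triple[r][2] << std::endl;
    return 0;
}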
Time Complexity" and "Space Complexity
Sometimes, there is more than one way to solve a problem. We need to learn how to compare the performance of different algorithms and choose the best one to solve a particular problem. While analyzing an algorithm, we mostly consider time complexity and space complexity. The time complexity of an algorithm quantifies the amount of time taken by the algorithm to run as a function of the length of the input. Similarly, the space complexity of an algorithm quantifies the amount of space or memory taken by the algorithm to run as a function of the length of the input.

Time and space complexity depend on many things, such as the hardware, the operating system, the processor, etc. However, we don't consider any of these factors while analyzing the algorithm. We will only consider the execution time of the algorithm.

Let's start with a simple example. Suppose you are given an array A and an integer x, and you have to find whether x exists in array A.

A simple solution to this problem is to traverse the whole array A and check if any element is equal to x.

for i : 1 to length of A
if A[i] is equal to x
return TRUE
return FALSE
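As a worked example of the analysis: in the worst case (x is the last element or is not present), the loop above performs one comparison for each of the n elements of A, so the running time grows linearly with the input length and the time complexity is O(n); in the best case x is the first element, giving O(1). The loop uses only a constant amount of extra memory regardless of the input size, so the space complexity is O(1).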

A matrix is a two-dimensional data object made of m rows and n columns, therefore having a total of m x n values. If most of the elements of the matrix have the value 0, then it is called a sparse matrix.
Why use a sparse matrix instead of a simple matrix?
• Storage: There are fewer non-zero elements than zeros, and thus less memory is needed when we store only those elements.
• Computing time: Computing time can be saved by logically designing a data structure that traverses only the non-zero elements.
Example:
0 0 3 0 4
0 0 5 7 0
0 0 0 0 0
0 2 6 0 0
Representing a sparse matrix by a 2D array leads to the wastage of lots of memory, as the zeroes in the matrix are of no use in most cases. So, instead of storing zeroes along with the non-zero elements, we store only the non-zero elements. This means storing each non-zero element as a triple (Row, Column, Value).
Sparse matrix representations can be done in many ways; two common representations are the array (triplet) representation and the linked list representation.
