
Data Structures & Algorithms

Data Structure
- Array
- Linked List
- Stack
- Queue
- Binary Tree
- Binary Search Tree

Algorithms
- Analysis of algorithms
- Searching algorithms
- Sorting algorithms

Array
An array is a collection of items stored at contiguous memory locations. The
idea is to store multiple items of the same type together. This makes it easier
to calculate the position of each element by simply adding an offset to a base
value, i.e., the memory location of the first element of the array (generally
denoted by the name of the array).

Advantages of using arrays:


- Arrays allow random access to elements. This makes accessing
  elements by position faster.
- Arrays have better cache locality, which can make a pretty big
  difference in performance.
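The offset arithmetic behind random access can be illustrated with a short sketch (an illustrative example, not from the original text): indexing into an array needs no traversal, because the runtime computes the element's address as base + index * element size.

```java
// Sketch: constant-time indexed access. The JVM computes the element
// address as base + i * elementSize internally, so arr[i] is O(1)
// regardless of the array's length.
public class ArrayAccess {
    public static void main(String[] args) {
        int[] arr = { 10, 20, 30, 40, 50 };
        // Direct access by position: no traversal needed.
        System.out.println(arr[3]); // prints 40
    }
}
```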

1. Program for Array Rotation

class RotateArray {
    /*Function to left rotate arr[] of size n by d*/
    void leftRotate(int arr[], int d, int n)
    {
        for (int i = 0; i < d; i++)
            leftRotatebyOne(arr, n);
    }
  
    void leftRotatebyOne(int arr[], int n)
    {
        int i, temp;
        temp = arr[0];
        for (i = 0; i < n - 1; i++)
            arr[i] = arr[i + 1];
        arr[i] = temp;
    }
  
    /* utility function to print an array */
    void printArray(int arr[], int n)
    {
        for (int i = 0; i < n; i++)
            System.out.print(arr[i] + " ");
    }
  
    // Driver program to test above functions
    public static void main(String[] args)
    {
        RotateArray rotate = new RotateArray();
        int arr[] = { 1, 2, 3, 4, 5, 6, 7 };
        rotate.leftRotate(arr, 2, 7);
        rotate.printArray(arr, 7);
    }
}

Output :
3 4 5 6 7 1 2
Time complexity : O(n * d)
Auxiliary Space : O(1)
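The O(n * d) approach above is not the only option. As a side note (a standard alternative, not part of the program above), the reversal algorithm achieves the same left rotation in O(n) time and O(1) space: reverse the first d elements, reverse the remaining n - d elements, then reverse the whole array.

```java
// Sketch of the reversal algorithm for left rotation:
// O(n) time, O(1) auxiliary space.
class RotateByReversal {
    // Reverses arr[lo..hi] in place.
    static void reverse(int[] arr, int lo, int hi) {
        while (lo < hi) {
            int tmp = arr[lo];
            arr[lo++] = arr[hi];
            arr[hi--] = tmp;
        }
    }

    static void leftRotate(int[] arr, int d) {
        int n = arr.length;
        d = d % n;                 // handle d >= n
        reverse(arr, 0, d - 1);    // reverse the first d elements
        reverse(arr, d, n - 1);    // reverse the remaining n - d elements
        reverse(arr, 0, n - 1);    // reverse the whole array
    }

    public static void main(String[] args) {
        int[] arr = { 1, 2, 3, 4, 5, 6, 7 };
        leftRotate(arr, 2);
        for (int x : arr)
            System.out.print(x + " "); // 3 4 5 6 7 1 2
    }
}
```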

Linked List Data Structure


A linked list is a linear data structure in which the elements are not stored
at contiguous memory locations. The elements of a linked list are instead
linked together using pointers (references).

In simple words, a linked list consists of nodes where each node contains a
data field and a reference(link) to the next node in the list.
1. Singly Linked List
2. Doubly Linked List
3. Circular Linked List
Why Linked List?
Arrays can be used to store linear data of similar types, but arrays have the
following limitations.
1) The size of the arrays is fixed: So we must know the upper limit on the
number of elements in advance. Also, generally, the allocated memory is
equal to the upper limit irrespective of the usage.
2) Inserting a new element in an array of elements is expensive because the
room has to be created for the new elements and to create room existing
elements have to be shifted.

Advantages over arrays


1) Dynamic size
2) Ease of insertion/deletion
Drawbacks:
1) Random access is not allowed. We have to access elements sequentially
starting from the first node, so we cannot do binary search with linked lists
efficiently with the default implementation.
2) Extra memory space for a pointer is required with each element of the list.
3) Not cache friendly. Since array elements are contiguous locations, there is
locality of reference which is not there in case of linked lists.
// A simple Java program to introduce a linked list
class LinkedList {
    Node head; // head of list
  
    /* Linked list Node.  This inner class is made static so that
       main() can access it */
    static class Node {
        int data;
        Node next;
        Node(int d)
        {
            data = d;
            next = null;
        } // Constructor
    }
  
    /* method to create a simple linked list with 3 nodes*/
    public static void main(String[] args)
    {
        /* Start with the empty list. */
        LinkedList llist = new LinkedList();
  
        llist.head = new Node(1);
        Node second = new Node(2);
        Node third = new Node(3);
  
        /* Three nodes have been allocated dynamically.
          We have references to these three blocks as head,  
          second and third
  
          llist.head        second              third
             |                |                  |
             |                |                  |
         +----+------+     +----+------+     +----+------+
         | 1  | null |     | 2  | null |     |  3 | null |
         +----+------+     +----+------+     +----+------+ */
  
        llist.head.next = second; // Link first node with the second node
  
        /*  Now next of the first Node refers to the second.  So they
            both are linked.
  
         llist.head        second              third
            |                |                  |
            |                |                  |
        +----+------+     +----+------+     +----+------+
        | 1  |  o-------->| 2  | null |     |  3 | null |
        +----+------+     +----+------+     +----+------+ */
  
        second.next = third; // Link second node with the third node
  
        /*  Now next of the second Node refers to third.  So all three
            nodes are linked.
  
         llist.head        second              third
            |                |                  |
            |                |                  |
        +----+------+     +----+------+     +----+------+
        | 1  |  o-------->| 2  |  o-------->|  3 | null |
        +----+------+     +----+------+     +----+------+ */
    }
}

Linked List Traversal


// A simple Java program for traversal of a linked list
class LinkedList {
    Node head; // head of list
  
    /* Linked list Node.  This inner class is made static so that
       main() can access it */
    static class Node {
        int data;
        Node next;
        Node(int d)
        {
            data = d;
            next = null;
        } // Constructor
    }
  
    /* This function prints contents of linked list starting from head */
    public void printList()
    {
        Node n = head;
        while (n != null) {
            System.out.print(n.data + " ");
            n = n.next;
        }
    }
  
    /* method to create a simple linked list with 3 nodes*/
    public static void main(String[] args)
    {
        /* Start with the empty list. */
        LinkedList llist = new LinkedList();
  
        llist.head = new Node(1);
        Node second = new Node(2);
        Node third = new Node(3);
  
        llist.head.next = second; // Link first node with the second node
        second.next = third; // Link second node with the third node
  
        llist.printList();
    }
}

Linked List vs Array

Both arrays and linked lists can be used to store linear data of similar
types, but each has advantages and disadvantages over the other.

Key Differences Between Array and Linked List


1. An array is a data structure that contains a collection of elements of a
similar type, whereas a linked list is a non-primitive data structure that
contains a collection of linked elements known as nodes.
2. In an array, the elements belong to indexes, i.e., if you want to get to
the fourth element you write the variable name with its index or location
within square brackets.
3. In a linked list, though, you have to start from the head and work your
way through until you get to the fourth element.
4. Accessing an element in an array is fast, while a linked list takes
linear time, so it is quite a bit slower.
5. Operations like insertion and deletion in arrays consume a lot of time.
On the other hand, these operations are fast in linked lists.
6. Arrays are of fixed size. In contrast, linked lists are dynamic and
flexible and can expand and contract in size.
7. In an array, memory is assigned during compile time, while in a linked
list it is allocated during execution or runtime.
8. Elements are stored consecutively in an array, whereas in a linked list
they can be stored anywhere in memory.
9. Arrays require less memory because only the actual data is stored at each
index. Linked lists, by contrast, need more memory to store the additional
next (and, in doubly linked lists, previous) references.
10. In addition, memory utilization is inefficient in an array. Conversely,
memory utilization is efficient in a linked list.
Following are the points in favor of Linked Lists.
(1) The size of the arrays is fixed: So we must know the upper limit on the
number of elements in advance. Also, generally, the allocated memory is
equal to the upper limit irrespective of the usage, and in practical uses, the
upper limit is rarely reached.
(2) Inserting a new element in an array of elements is expensive because a
room has to be created for the new elements and to create room existing
elements have to be shifted.
For example, suppose we maintain a sorted list of IDs in an array id[].
id[] = [1000, 1010, 1050, 2000, 2040, …..].
And if we want to insert a new ID 1005, then to maintain the sorted order, we
have to move all the elements after 1000 (excluding 1000).
Deletion is also expensive with arrays unless some special techniques are
used. For example, to delete 1010 in id[], everything after 1010 has to be
moved.
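The shifting cost described above can be sketched in code. insertSorted below is a hypothetical helper (not from the original text) that inserts a new ID while keeping the array sorted; note the loop that moves every larger element one slot to the right.

```java
// Sketch of the shifting cost of sorted insertion into an array.
class SortedInsert {
    // Inserts key into arr[0..n-1], which must be sorted and have
    // capacity for at least one more element. Returns the new count.
    static int insertSorted(int[] arr, int n, int key) {
        int i = n - 1;
        while (i >= 0 && arr[i] > key) {
            arr[i + 1] = arr[i];   // shift larger elements right: O(n)
            i--;
        }
        arr[i + 1] = key;
        return n + 1;
    }

    public static void main(String[] args) {
        int[] id = new int[6];     // one spare slot for the insertion
        id[0] = 1000; id[1] = 1010; id[2] = 1050;
        id[3] = 2000; id[4] = 2040;
        insertSorted(id, 5, 1005); // shifts 1010..2040 one slot right
        for (int x : id)
            System.out.print(x + " "); // 1000 1005 1010 1050 2000 2040
    }
}
```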
So Linked list provides the following two advantages over arrays
1) Dynamic size
2) Ease of insertion/deletion
Linked lists have following drawbacks:
1) Random access is not allowed. We have to access elements sequentially
starting from the first node. So we cannot do a binary search with linked lists.
2) Extra memory space for a pointer is required with each element of the list.
3) Arrays have better cache locality that can make a pretty big difference in
performance.
Linked List | Set 2 (Inserting a node)
In this post, methods to insert a new node in linked list are discussed. A node
can be added in three ways
1) At the front of the linked list
2) After a given node.
3) At the end of the linked list.

// A complete working Java program to demonstrate all insertion methods


// on linked list
class LinkedList
{
    Node head;  // head of list
  
    /* Linked list Node*/
    class Node
    {
        int data;
        Node next;
        Node(int d) {data = d; next = null; }
    }
  
    /* Inserts a new Node at front of the list. */
    public void push(int new_data)
    {
        /* 1 & 2: Allocate the Node &
                  Put in the data*/
        Node new_node = new Node(new_data);
  
        /* 3. Make next of new Node as head */
        new_node.next = head;
  
        /* 4. Move the head to point to new Node */
        head = new_node;
    }
  
    /* Inserts a new node after the given prev_node. */
    public void insertAfter(Node prev_node, int new_data)
    {
        /* 1. Check if the given Node is null */
        if (prev_node == null)
        {
            System.out.println("The given previous node cannot be null");
            return;
        }
  
        /* 2 & 3: Allocate the Node &
                  Put in the data*/
        Node new_node = new Node(new_data);
  
        /* 4. Make next of new Node as next of prev_node */
        new_node.next = prev_node.next;
  
        /* 5. make next of prev_node as new_node */
        prev_node.next = new_node;
    }
     
    /* Appends a new node at the end.  This method is 
       defined inside LinkedList class shown above */
    public void append(int new_data)
    {
        /* 1. Allocate the Node &
           2. Put in the data
           3. Set next as null */
        Node new_node = new Node(new_data);
  
        /* 4. If the Linked List is empty, then make the
              new node the head */
        if (head == null)
        {
            head = new_node;
            return;
        }
  
        /* The new node is going to be the last node, so its
           next is already null (set in the constructor) */
  
        /* 5. Else traverse till the last node */
        Node last = head; 
        while (last.next != null)
            last = last.next;
  
        /* 6. Change the next of last node */
        last.next = new_node;
        return;
    }
  
    /* This function prints contents of linked list starting from
        the given node */
    public void printList()
    {
        Node tnode = head;
        while (tnode != null)
        {
            System.out.print(tnode.data+" ");
            tnode = tnode.next;
        }
    }
  
    /* Driver program to test above functions. Ideally this function
       should be in a separate user class.  It is kept here to keep
       code compact */
    public static void main(String[] args)
    {
        /* Start with the empty list */
        LinkedList llist = new LinkedList();
  
        // Insert 6. So linked list becomes 6->null
        llist.append(6);
  
        // Insert 7 at the beginning. So linked list becomes
        // 7->6->null
        llist.push(7);
  
        // Insert 1 at the beginning. So linked list becomes
        // 1->7->6->null
        llist.push(1);
  
        // Insert 4 at the end. So linked list becomes
        // 1->7->6->4->null
        llist.append(4);
  
        // Insert 8, after 7. So linked list becomes
        // 1->7->8->6->4->null
        llist.insertAfter(llist.head.next, 8);
  
        System.out.println("\nCreated Linked list is: ");
        llist.printList();
    }
}

Linked List | Set 3 (Deleting a node)

Let us formulate the problem statement to understand the deletion


process. Given a ‘key’, delete the first occurrence of this key in linked list.
To delete a node from linked list, we need to do following steps.
1) Find previous node of the node to be deleted.
2) Change the next of previous node.
3) Free memory for the node to be deleted.

// A complete working Java program to demonstrate deletion in singly


// linked list
class LinkedList
{
    Node head; // head of list
  
    /* Linked list Node*/
    class Node
    {
        int data;
        Node next;
        Node(int d)
        {
            data = d;
            next = null;
        }
    }
  
    /* Given a key, deletes the first occurrence of key in linked list */
    void deleteNode(int key)
    {
        // Store head node
        Node temp = head, prev = null;
  
        // If head node itself holds the key to be deleted
        if (temp != null && temp.data == key)
        {
            head = temp.next; // Changed head
            return;
        }
  
        // Search for the key to be deleted, keep track of the
        // previous node as we need to change temp.next
        while (temp != null && temp.data != key)
        {
            prev = temp;
            temp = temp.next;
        }    
  
        // If key was not present in linked list
        if (temp == null) return;
  
        // Unlink the node from linked list
        prev.next = temp.next;
    }
  
    /* Inserts a new Node at front of the list. */
    public void push(int new_data)
    {
        Node new_node = new Node(new_data);
        new_node.next = head;
        head = new_node;
    }
  
    /* This function prints contents of linked list starting from
        the given node */
    public void printList()
    {
        Node tnode = head;
        while (tnode != null)
        {
            System.out.print(tnode.data+" ");
            tnode = tnode.next;
        }
    }
  
    /* Driver program to test above functions. Ideally this function
    should be in a separate user class. It is kept here to keep
    code compact */
    public static void main(String[] args)
    {
        LinkedList llist = new LinkedList();
  
        llist.push(7);
        llist.push(1);
        llist.push(3);
        llist.push(2);
  
        System.out.println("\nCreated Linked list is:");
        llist.printList();
  
        llist.deleteNode(1); // Delete node with data 1
  
        System.out.println("\nLinked List after Deletion of 1:");
        llist.printList();
    }
}

Delete a Linked List node at a given position

// A complete working Java program to delete a node in a linked list


// at a given position
class LinkedList
{
    Node head;  // head of list
  
    /* Linked list Node*/
    class Node
    {
        int data;
        Node next;
        Node(int d)
        {
            data = d;
            next = null;
        }
    }
  
    /* Inserts a new Node at front of the list. */
    public void push(int new_data)
    {
        /* 1 & 2: Allocate the Node &
                  Put in the data*/
        Node new_node = new Node(new_data);
  
        /* 3. Make next of new Node as head */
        new_node.next = head;
  
        /* 4. Move the head to point to new Node */
        head = new_node;
    }
  
    /* Given a reference (pointer to pointer) to the head of a list
       and a position, deletes the node at the given position */
    void deleteNode(int position)
    {
        // If linked list is empty
        if (head == null)
            return;
  
        // Store head node
        Node temp = head;
  
        // If head needs to be removed
        if (position == 0)
        {
            head = temp.next;   // Change head
            return;
        }
  
        // Find previous node of the node to be deleted
        for (int i=0; temp!=null && i<position-1; i++)
            temp = temp.next;
  
        // If position is more than number of nodes
        if (temp == null || temp.next == null)
            return;
  
        // Node temp->next is the node to be deleted
        // Store pointer to the next of node to be deleted
        Node next = temp.next.next;
  
        temp.next = next;  // Unlink the deleted node from list
    }
  
    /* This function prints contents of linked list starting from
        the given node */
    public void printList()
    {
        Node tnode = head;
        while (tnode != null)
        {
            System.out.print(tnode.data+" ");
            tnode = tnode.next;
        }
    }
  
    /* Driver program to test above functions. Ideally this function
       should be in a separate user class.  It is kept here to keep
       code compact */
    public static void main(String[] args)
    {
        /* Start with the empty list */
        LinkedList llist = new LinkedList();
  
        llist.push(7);
        llist.push(1);
        llist.push(3);
        llist.push(2);
        llist.push(8);
  
        System.out.println("\nCreated Linked list is: ");
        llist.printList();
  
        llist.deleteNode(4);  // Delete node at position 4
  
        System.out.println("\nLinked List after Deletion at position 4: ");
        llist.printList();
    }
}

Output:
Created Linked list is:
8 2 3 1 7
Linked List after Deletion at position 4:
8 2 3 1

Write a function to delete a Linked List

// Java program to delete a linked list


class LinkedList
{
    Node head; // head of the list
  
    /* Linked List node */
    class Node
    {
        int data;
        Node next;
        Node(int d) { data = d; next = null; }
    }
  
    /* Function deletes the entire linked list */
    void deleteList()
    {
        head = null;
    }
  
    /* Inserts a new Node at front of the list. */
    public void push(int new_data)
    {
        /* 1 & 2: Allocate the Node &
                  Put in the data*/
        Node new_node = new Node(new_data);
  
        /* 3. Make next of new Node as head */
        new_node.next = head;
  
        /* 4. Move the head to point to new Node */
        head = new_node;
    }
  
    public static void main(String [] args)
    {
        LinkedList llist = new LinkedList();
        /* Use push() to construct below list
           1->12->1->4->1  */
  
        llist.push(1);
        llist.push(4);
        llist.push(1);
        llist.push(12);
        llist.push(1);
  
        System.out.println("Deleting the list");
        llist.deleteList();
  
        System.out.println("Linked list deleted");
    }
}
// This code is contributed by Rajat Mishra

Output:
Deleting the list
Linked list deleted
Time Complexity: O(1) for the Java version above, since setting head to null
unlinks the whole list and the garbage collector reclaims the nodes. With
manual memory management, freeing every node takes O(n).
Auxiliary Space: O(1)

Find Length of a Linked List (Iterative and Recursive)

Write a function to count the number of nodes in a given singly linked list.

For example, the function should return 5 for linked list 1->3->1->2->1.
// Java program to count number of nodes in a linked list
  
/* Linked list Node*/
class Node
{
    int data;
    Node next;
    Node(int d)  { data = d;  next = null; }
}
  
// Linked List class
class LinkedList
{
    Node head;  // head of list
  
    /* Inserts a new Node at front of the list. */
    public void push(int new_data)
    {
        /* 1 & 2: Allocate the Node &
                  Put in the data*/
        Node new_node = new Node(new_data);
  
        /* 3. Make next of new Node as head */
        new_node.next = head;
  
        /* 4. Move the head to point to new Node */
        head = new_node;
    }
  
    /* Returns count of nodes in linked list */
    public int getCount()
    {
        Node temp = head;
        int count = 0;
        while (temp != null)
        {
            count++;
            temp = temp.next;
        }
        return count;
    }
  
    /* Driver program to test above functions. Ideally
       this function should be in a separate user class.
       It is kept here to keep code compact */
    public static void main(String[] args)
    {
        /* Start with the empty list */
        LinkedList llist = new LinkedList();
        llist.push(1);
        llist.push(3);
        llist.push(1);
        llist.push(2);
        llist.push(1);
  
        System.out.println("Count of nodes is " +
                           llist.getCount());
    }
}

Output:
Count of nodes is 5
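The heading promises a recursive version as well, which the program above omits. A minimal sketch (an illustrative addition, not from the original text): the count of an empty list is zero; otherwise it is one plus the count of the rest.

```java
// Recursive count of nodes in a singly linked list:
// O(n) time, O(n) auxiliary space for the call stack.
class Node {
    int data;
    Node next;
    Node(int d) { data = d; next = null; }
}

class RecursiveCount {
    // Returns the number of nodes reachable from node.
    static int getCount(Node node) {
        if (node == null)              // base case: empty list
            return 0;
        return 1 + getCount(node.next); // this node plus the rest
    }

    public static void main(String[] args) {
        // Build 1->3->1->2->1
        Node head = new Node(1);
        head.next = new Node(3);
        head.next.next = new Node(1);
        head.next.next.next = new Node(2);
        head.next.next.next.next = new Node(1);
        System.out.println("Count of nodes is " + getCount(head)); // 5
    }
}
```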

Stack Data Structure


Stack is a linear data structure which follows a particular order in which the
operations are performed. The order may be LIFO(Last In First Out) or
FILO(First In Last Out).
There are many real-life examples of a stack. Consider an example of plates
stacked over one another in the canteen. The plate which is at the top is the
first one to be removed, i.e. the plate which has been placed at the
bottommost position remains in the stack for the longest period of time. So, it
can be simply seen to follow LIFO(Last In First Out)/FILO(First In Last Out)
order.

Stack Data Structure (Introduction and Program)


Mainly the following basic operations are performed on a stack:
- Push: Adds an item to the stack. If the stack is full, it is said to be
  an Overflow condition.
- Pop: Removes an item from the stack. The items are popped in the
  reverse order in which they were pushed. If the stack is empty, it is
  said to be an Underflow condition.
- Peek or Top: Returns the top element of the stack.
- isEmpty: Returns true if the stack is empty, else false.

Time Complexities of operations on stack:
push(), pop(), isEmpty() and peek() all take O(1) time. We do not run any loop
in any of these operations.
Applications of stack:
- Balancing of symbols
- Infix to Postfix/Prefix conversion
- Redo-undo features in many places, such as editors and Photoshop
- Forward and backward features in web browsers
- Used in many algorithms, such as Tower of Hanoi, tree traversals, the
  stock span problem, and the histogram problem
- Other applications include Backtracking, the Knight's tour problem,
  rat in a maze, the N queens problem, and Sudoku solvers
- In graph algorithms such as Topological Sorting and Strongly Connected
  Components
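The first application, balancing of symbols, can be sketched with a stack (an illustrative example, not part of the original text): push every opening bracket, and on each closing bracket pop and check that the pair matches.

```java
// Sketch of the "balancing of symbols" application: a string is
// balanced when every closing bracket matches the most recently
// opened, unmatched bracket.
import java.util.ArrayDeque;
import java.util.Deque;

class BalancedSymbols {
    static boolean isBalanced(String s) {
        Deque<Character> stack = new ArrayDeque<>();
        for (char c : s.toCharArray()) {
            if (c == '(' || c == '[' || c == '{') {
                stack.push(c);               // remember the opener
            } else if (c == ')' || c == ']' || c == '}') {
                if (stack.isEmpty())
                    return false;            // closer with no opener
                char open = stack.pop();
                if ((c == ')' && open != '(') ||
                    (c == ']' && open != '[') ||
                    (c == '}' && open != '{'))
                    return false;            // mismatched pair
            }
        }
        return stack.isEmpty();              // no unclosed openers left
    }

    public static void main(String[] args) {
        System.out.println(isBalanced("{[()]}")); // true
        System.out.println(isBalanced("{[(])}")); // false
    }
}
```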
Implementation:
There are two ways to implement a stack:
- Using an array
- Using a linked list

Implementing Stack using Arrays


/* Java program to implement basic stack
operations */
class Stack {
    static final int MAX = 1000;
    int top;
    int a[] = new int[MAX]; // Maximum size of Stack
  
    boolean isEmpty()
    {
        return (top < 0);
    }
    Stack()
    {
        top = -1;
    }
  
    boolean push(int x)
    {
        if (top >= (MAX - 1)) {
            System.out.println("Stack Overflow");
            return false;
        }
        else {
            a[++top] = x;
            System.out.println(x + " pushed into stack");
            return true;
        }
    }
  
    int pop()
    {
        if (top < 0) {
            System.out.println("Stack Underflow");
            return 0;
        }
        else {
            int x = a[top--];
            return x;
        }
    }
  
    int peek()
    {
        if (top < 0) {
            System.out.println("Stack Underflow");
            return 0;
        }
        else {
            int x = a[top];
            return x;
        }
    }
}
  
// Driver code
class Main {
    public static void main(String args[])
    {
        Stack s = new Stack();
        s.push(10);
        s.push(20);
        s.push(30);
        System.out.println(s.pop() + " Popped from stack");
    }
}
Pros: Easy to implement. Memory is saved as pointers are not involved.
Cons: It is not dynamic. It doesn’t grow and shrink depending on needs at runtime.
Output :
10 pushed into stack
20 pushed into stack
30 pushed into stack
30 Popped from stack

Implementing Stack using Linked List

// Java Code for Linked List Implementation


  
public class StackAsLinkedList {
  
    StackNode root;
  
    static class StackNode {
        int data;
        StackNode next;
  
        StackNode(int data)
        {
            this.data = data;
        }
    }
  
    public boolean isEmpty()
    {
        if (root == null) {
            return true;
        }
        else
            return false;
    }
  
    public void push(int data)
    {
        StackNode newNode = new StackNode(data);
  
        if (root == null) {
            root = newNode;
        }
        else {
            StackNode temp = root;
            root = newNode;
            newNode.next = temp;
        }
        System.out.println(data + " pushed to stack");
    }
  
    public int pop()
    {
        int popped = Integer.MIN_VALUE;
        if (root == null) {
            System.out.println("Stack is Empty");
        }
        else {
            popped = root.data;
            root = root.next;
        }
        return popped;
    }
  
    public int peek()
    {
        if (root == null) {
            System.out.println("Stack is empty");
            return Integer.MIN_VALUE;
        }
        else {
            return root.data;
        }
    }
  
    public static void main(String[] args)
    {
  
        StackAsLinkedList sll = new StackAsLinkedList();
  
        sll.push(10);
        sll.push(20);
        sll.push(30);
  
        System.out.println(sll.pop() + " popped from stack");
  
        System.out.println("Top element is " + sll.peek());
    }
}
Output:
10 pushed to stack
20 pushed to stack
30 pushed to stack
30 popped from stack
Top element is 20
Pros: The linked list implementation of stack can grow and shrink according to the needs at
runtime.
Cons: Requires extra memory due to involvement of pointers.
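As a side note, production Java code usually does not hand-roll a stack: the standard library's java.util.ArrayDeque already provides push, pop, and peek. A minimal sketch:

```java
// Using the standard library's deque as a stack (LIFO).
import java.util.ArrayDeque;
import java.util.Deque;

class DequeAsStack {
    public static void main(String[] args) {
        Deque<Integer> stack = new ArrayDeque<>();
        stack.push(10);
        stack.push(20);
        stack.push(30);
        System.out.println(stack.pop());  // 30 (last in, first out)
        System.out.println(stack.peek()); // 20
    }
}
```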

Queue Data Structure


A Queue is a linear structure which follows a particular order in which the
operations are performed. The order is First In First Out (FIFO). A good
example of a queue is any queue of consumers for a resource where the
consumer that came first is served first. The difference between stacks and
queues is in removing. In a stack we remove the item the most recently
added; in a queue, we remove the item the least recently added.
Queue | Set 1 (Introduction and Array Implementation)
Operations on Queue:
Mainly the following four basic operations are performed on queue:
Enqueue: Adds an item to the queue. If the queue is full, then it is said to be
an Overflow condition.
Dequeue: Removes an item from the queue. The items are popped in the
same order in which they are pushed. If the queue is empty, then it is said to
be an Underflow condition.
Front: Get the front item from queue.
Rear: Get the last item from queue.

Applications of Queue:
A queue is used when things don't have to be processed immediately, but have
to be processed in First In First Out order, as in Breadth First Search. This
property makes a queue useful in the following kinds of scenarios.
1) When a resource is shared among multiple consumers. Examples include
CPU scheduling and disk scheduling.
2) When data is transferred asynchronously (data not necessarily received at
the same rate as sent) between two processes. Examples include IO buffers,
pipes, file IO, etc.
Array implementation Of Queue
For implementing queue, we need to keep track of two indices, front and rear.
We enqueue an item at the rear and dequeue an item from the front. If we
simply increment the front and rear indices, there may be problems: front may
reach the end of the array. The solution is to increment front and rear in a
circular manner.
// Java program for array implementation of queue
   
// A class to represent a queue
class Queue
{
    int front, rear, size;
    int  capacity;
    int array[];
       
    public Queue(int capacity) {
         this.capacity = capacity;
         front = this.size = 0; 
         rear = capacity - 1;
         array = new int[this.capacity];
            
    }
       
    // Queue is full when size becomes equal to 
    // the capacity 
    boolean isFull(Queue queue)
    {  return (queue.size == queue.capacity);
    }
       
    // Queue is empty when size is 0
    boolean isEmpty(Queue queue)
    {  return (queue.size == 0); }
       
    // Method to add an item to the queue. 
    // It changes rear and size
    void enqueue( int item)
    {
        if (isFull(this))
            return;
        this.rear = (this.rear + 1)%this.capacity;
        this.array[this.rear] = item;
        this.size = this.size + 1;
        System.out.println(item+ " enqueued to queue");
    }
       
    // Method to remove an item from queue.  
    // It changes front and size
    int dequeue()
    {
        if (isEmpty(this))
            return Integer.MIN_VALUE;
           
        int item = this.array[this.front];
        this.front = (this.front + 1)%this.capacity;
        this.size = this.size - 1;
        return item;
    }
       
    // Method to get front of queue
    int front()
    {
        if (isEmpty(this))
            return Integer.MIN_VALUE;
           
        return this.array[this.front];
    }
        
    // Method to get rear of queue
    int rear()
    {
        if (isEmpty(this))
            return Integer.MIN_VALUE;
           
        return this.array[this.rear];
    }
}
   
    
// Driver class
public class Test
{
    public static void main(String[] args) 
    {
        Queue queue = new Queue(1000);
            
        queue.enqueue(10);
        queue.enqueue(20);
        queue.enqueue(30);
        queue.enqueue(40);
        
        System.out.println(queue.dequeue() + 
                     " dequeued from queue\n");
        
        System.out.println("Front item is " + 
                               queue.front());
           
        System.out.println("Rear item is " + 
                                queue.rear());
    }
}
  
// This code is contributed by Gaurav Miglani

Output:
10 enqueued to queue
20 enqueued to queue
30 enqueued to queue
40 enqueued to queue
10 dequeued from queue
Front item is 20
Rear item is 40
Time Complexity: Time complexity of all operations like enqueue(), dequeue(), isFull(),
isEmpty(), front() and rear() is O(1). There is no loop in any of the operations.
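The same constant-time queue behavior is available from Java's standard library: java.util.ArrayDeque gives amortized O(1) offer (enqueue), poll (dequeue) and peek (front). A minimal sketch (the class name DequeDemo is just for illustration):

```java
import java.util.ArrayDeque;

public class DequeDemo {
    public static void main(String[] args) {
        // ArrayDeque grows automatically, so no explicit capacity is needed
        ArrayDeque<Integer> queue = new ArrayDeque<>();
        queue.offer(10);                       // enqueue 10
        queue.offer(20);                       // enqueue 20
        queue.offer(30);                       // enqueue 30
        System.out.println(queue.poll());      // dequeue -> 10
        System.out.println(queue.peek());      // front   -> 20
        System.out.println(queue.peekLast());  // rear    -> 30
    }
}
```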

Binary Tree Data Structure


A tree whose elements have at most 2 children is called a binary tree. Since
each element in a binary tree can have only 2 children, we typically name
them the left and right child.

A Binary Tree node contains the following parts:


1. Data
2. Pointer to left child
3. Pointer to right child

Binary Tree | Set 1 (Introduction)


Trees: Unlike Arrays, Linked Lists, Stacks and Queues, which are linear data
structures, trees are hierarchical data structures.
Tree Vocabulary: The topmost node is called the root of the tree. The elements
directly under an element are called its children. The element directly above an
element is called its parent. For example, 'a' is a child of 'f', and 'f' is
the parent of 'a'. Finally, elements with no children are called leaves.
tree
----
         j    <-- root
       /   \
      f     k
    /  \      \
   a    h      z    <-- leaves
Why Trees?
1. One reason to use trees might be because you want to store information
that naturally forms a hierarchy. For example, the file system on a computer:

file system
-----------
          /    <-- root
       /    \
     ...    home
           /     \
        ugrad   course
         /     /   |   \
       ...  cs101 cs112 cs113
2. Trees (with some ordering e.g., BST) provide moderate access/search
(quicker than Linked List and slower than arrays).
3. Trees provide moderate insertion/deletion (quicker than Arrays and slower
than Unordered Linked Lists).
4. Like Linked Lists and unlike Arrays, Trees don’t have an upper limit on
number of nodes as nodes are linked using pointers.
Main applications of trees include:
1. Manipulate hierarchical data.
2. Make information easy to search (see tree traversal).
3. Manipulate sorted lists of data.
4. As a workflow for compositing digital images for visual effects.
5. Router algorithms
6. Form of a multi-stage decision-making (see business chess).
Binary Tree: A tree whose elements have at most 2 children is called a
binary tree. Since each element in a binary tree can have only 2 children, we
typically name them the left and right child.
Binary Tree Representation in Java: A tree is represented by a reference to the
topmost node in the tree. If the tree is empty, then the value of root is null.
A tree node contains the following parts:
1. Data
2. Pointer to left child
3. Pointer to right child

/* Class containing left and right child of current node and key value */
class Node
{
    int key;
    Node left, right;
  
    public Node(int item)
    {
        key = item;
        left = right = null;
    }
}
  
// A Java program to introduce Binary Tree
class BinaryTree
{
    // Root of Binary Tree
    Node root;
  
    // Constructors
    BinaryTree(int key)
    {
        root = new Node(key);
    }
  
    BinaryTree()
    {
        root = null;
    }
  
    public static void main(String[] args)
    {
        BinaryTree tree = new BinaryTree();
  
        /*create root*/
        tree.root = new Node(1);
  
        /* following is the tree after above statement
  
              1
            /   \
          null  null     */
  
        tree.root.left = new Node(2);
        tree.root.right = new Node(3);
  
        /* 2 and 3 become left and right children of 1
               1
             /   \
            2      3
          /    \    /  \
        null null null null  */
  
  
        tree.root.left.left = new Node(4);
        /* 4 becomes left child of 2
                    1
                /       \
               2          3
             /   \       /  \
            4    null  null  null
           /   \
          null null
         */
    }
}

Binary Tree | Set 2 (Properties)


We have discussed Introduction to Binary Tree in Set 1. In this post,
properties of binary trees are discussed.

1) The maximum number of nodes at level 'l' of a binary tree is 2^(l-1).

Here level is the number of nodes on the path from the root to the node
(including the root and the node). The level of the root is 1.
This can be proved by induction:
For the root, l = 1, number of nodes = 2^(1-1) = 1.
Assume that the maximum number of nodes on level l is 2^(l-1).
Since in a Binary Tree every node has at most 2 children, the next level would
have twice as many nodes, i.e. 2 * 2^(l-1) = 2^((l+1)-1).
 
2) The maximum number of nodes in a binary tree of height 'h' is 2^h – 1.
Here the height of a tree is the maximum number of nodes on a root-to-leaf path.
The height of a tree with a single node is considered as 1.
This result can be derived from point 1 above. A tree has the maximum number of
nodes if all levels have the maximum number of nodes. So the maximum number of
nodes in a binary tree of height h is 1 + 2 + 4 + ... + 2^(h-1). This is a
simple geometric series with h terms, and the sum of this series is 2^h – 1.
In some books, the height of the root is considered as 0. In that convention,
the above formula becomes 2^(h+1) – 1.

 
3) In a Binary Tree with N nodes, the minimum possible height or minimum
number of levels is ⌈Log2(N+1)⌉.
This can be directly derived from point 2 above. If we consider the convention
where the height of a leaf node is considered as 0, then the above formula for
minimum possible height becomes ⌈Log2(N+1)⌉ – 1.
 
4) A Binary Tree with L leaves has at least ⌈Log2(L)⌉ + 1 levels.
A Binary Tree has the maximum number of leaves (and the minimum number of
levels) when all levels are fully filled. Let all leaves be at level l; then
the following is true for the number of leaves L:
L <= 2^(l-1)   [From point 1]
l = ⌈Log2(L)⌉ + 1
where l is the minimum number of levels.
 
5) In a Binary Tree where every node has 0 or 2 children, the number of leaf
nodes is always one more than the number of nodes with two children.
L = T + 1
where L = number of leaf nodes
and T = number of internal nodes with two children.
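The property above can be checked with a short recursive count of leaves and of nodes with two children. A self-contained sketch (the class and method names are just for this illustration):

```java
public class LeafCount {
    static class Node {
        int key; Node left, right;
        Node(int k) { key = k; }
    }

    // Count leaf nodes (nodes with no children)
    static int leaves(Node n) {
        if (n == null) return 0;
        if (n.left == null && n.right == null) return 1;
        return leaves(n.left) + leaves(n.right);
    }

    // Count internal nodes that have exactly two children
    static int fullNodes(Node n) {
        if (n == null) return 0;
        int self = (n.left != null && n.right != null) ? 1 : 0;
        return self + fullNodes(n.left) + fullNodes(n.right);
    }

    public static void main(String[] args) {
        // A tree in which every node has 0 or 2 children
        Node root = new Node(1);
        root.left = new Node(2);
        root.right = new Node(3);
        root.left.left = new Node(4);
        root.left.right = new Node(5);
        System.out.println(leaves(root));     // 3 leaves: 3, 4, 5
        System.out.println(fullNodes(root));  // 2 two-child nodes: 1, 2 -> L = T + 1
    }
}
```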

Binary Tree | Set 3 (Types of Binary Tree)


We have discussed Introduction to Binary Tree in Set 1 and Properties of
Binary Trees in Set 2. In this post, common types of binary trees are discussed.
Following are common types of Binary Trees.
Full Binary Tree: A Binary Tree is full if every node has 0 or 2 children.
Following are examples of a full binary tree. We can also say that a full
binary tree is a binary tree in which all nodes except leaves have two children.

               18
            /      \
          15        30
         /  \      /  \
       40    50  100    40

               18
            /      \
          15        20
         /  \
       40    50
      /  \
    30    50

               18
            /      \
          40        30
                   /  \
                100    40

In a Full Binary Tree, the number of leaf nodes is the number of internal
nodes plus 1:
       L = I + 1
where L = number of leaf nodes, I = number of internal nodes.
See Handshaking Lemma and Tree for proof.

Complete Binary Tree: A Binary Tree is a Complete Binary Tree if all levels
are completely filled except possibly the last level, and the last level has
all keys as far left as possible.
Following are examples of Complete Binary Trees
               18
            /      \
          15        30
         /  \      /  \
       40    50  100    40

               18
            /      \
          15        30
         /  \      /  \
       40    50  100    40
      /  \   /
     8    7 9
A practical example of a Complete Binary Tree is the Binary Heap.

Perfect Binary Tree: A Binary Tree is a Perfect Binary Tree if all internal
nodes have two children and all leaves are at the same level.
Following are examples of Perfect Binary Trees.
               18
            /      \
          15        30
         /  \      /  \
       40    50  100    40

               18
            /      \
          15        30

A Perfect Binary Tree of height h (where height is the number of nodes on the
path from the root to a leaf) has 2^h – 1 nodes.
An example of a Perfect Binary Tree is ancestors in a family tree: keep a
person at the root, parents as children, parents of parents as their children,
and so on.

Balanced Binary Tree


A binary tree is balanced if the height of the tree is O(Log n) where n is the
number of nodes. For example, the AVL tree maintains O(Log n) height by
making sure that the difference between the heights of the left and right
subtrees is at most 1. Red-Black trees maintain O(Log n) height by making sure
that the number of black nodes on every root-to-leaf path is the same and that
there are no adjacent red nodes. Balanced Binary Search Trees perform well, as
they provide O(Log n) time for search, insert and delete.

A degenerate (or pathological) tree: A tree where every internal node has
one child. Such trees are the same as a linked list performance-wise.
      10
     /
    20
      \
       30
         \
          40
Binary Search Tree
Binary Search Tree is a node-based binary tree data structure which has the
following properties:
 The left subtree of a node contains only nodes with keys lesser than
the node’s key.
 The right subtree of a node contains only nodes with keys greater than
the node’s key.
 The left and right subtree each must also be a binary search tree.

Binary Search Tree | Set 1 (Search and Insertion)


The following is the definition of a Binary Search Tree (BST) according to
Wikipedia. A Binary Search Tree is a node-based binary tree data structure
which has the following properties:
 The left subtree of a node contains only nodes with keys lesser than
the node's key.
 The right subtree of a node contains only nodes with keys greater than
the node's key.
 The left and right subtree each must also be a binary search tree.
 There must be no duplicate nodes.
The above properties of Binary Search Tree provide an ordering among keys
so that the operations like search, minimum and maximum can be done fast.
If there is no ordering, then we may have to compare every key to search a
given key.

Searching a key
To search a given key in a Binary Search Tree, we first compare it with the
root; if the key is present at the root, we return the root. If the key is
greater than the root's key, we recur for the right subtree of the root node.
Otherwise we recur for the left subtree.
Illustration to search 6 in the below tree:
1. Start from the root.
2. Compare the element being searched with the root; if less than the root,
recurse for the left subtree, else recurse for the right subtree.
3. If the element to search is found anywhere, return true, else return false.
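The steps above can be sketched as a recursive method. This is an illustrative, self-contained sketch (the BSTSearch class name and the sample tree are assumptions for this example only); the same search method could sit alongside insertRec in the program below:

```java
public class BSTSearch {
    static class Node {
        int key; Node left, right;
        Node(int k) { key = k; }
    }

    // Recursive BST search: returns the matching node, or null if absent
    static Node search(Node root, int key) {
        // Base cases: empty subtree, or key present at root
        if (root == null || root.key == key) return root;
        // Key greater than root's key: search the right subtree
        if (key > root.key) return search(root.right, key);
        // Otherwise search the left subtree
        return search(root.left, key);
    }

    public static void main(String[] args) {
        Node root = new Node(8);
        root.left = new Node(3);
        root.right = new Node(10);
        root.left.left = new Node(1);
        root.left.right = new Node(6);
        System.out.println(search(root, 6) != null);  // true
        System.out.println(search(root, 7) != null);  // false
    }
}
```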

 
Insertion of a key
A new key is always inserted at leaf. We start searching a key from root till we
hit a leaf node. Once a leaf node is found, the new node is added as a child of
the leaf node.
      100                         100
     /   \       Insert 40      /    \
   20     500    --------->   20      500
  /  \                       /  \
10    30                   10    30
                                   \
                                    40

// Java program to demonstrate insert operation in binary search tree


class BinarySearchTree {
  
    /* Class containing left and right child of current node and key value*/
    class Node {
        int key;
        Node left, right;
  
        public Node(int item) {
            key = item;
            left = right = null;
        }
    }
  
    // Root of BST
    Node root;
  
    // Constructor
    BinarySearchTree() { 
        root = null; 
    }
  
    // This method mainly calls insertRec()
    void insert(int key) {
       root = insertRec(root, key);
    }
      
    /* A recursive function to insert a new key in BST */
    Node insertRec(Node root, int key) {
  
        /* If the tree is empty, return a new node */
        if (root == null) {
            root = new Node(key);
            return root;
        }
  
        /* Otherwise, recur down the tree */
        if (key < root.key)
            root.left = insertRec(root.left, key);
        else if (key > root.key)
            root.right = insertRec(root.right, key);
  
        /* return the (unchanged) node pointer */
        return root;
    }
  
    // This method mainly calls InorderRec()
    void inorder()  {
       inorderRec(root);
    }
  
    // A utility function to do inorder traversal of BST
    void inorderRec(Node root) {
        if (root != null) {
            inorderRec(root.left);
            System.out.println(root.key);
            inorderRec(root.right);
        }
    }
  
    // Driver Program to test above functions
    public static void main(String[] args) {
        BinarySearchTree tree = new BinarySearchTree();
  
        /* Let us create following BST
              50
           /     \
          30      70
         /  \    /  \
       20   40  60   80 */
        tree.insert(50);
        tree.insert(30);
        tree.insert(20);
        tree.insert(40);
        tree.insert(70);
        tree.insert(60);
        tree.insert(80);
  
        // print inorder traversal of the BST
        tree.inorder();
    }
}

Output:
20
30
40
50
60
70
80

Illustration to insert 2 in the below tree:

1. Start from the root.
2. Compare the element being inserted with the root; if less than the root,
recurse for the left subtree, else recurse for the right subtree.
3. After reaching a leaf, insert the new node to its left (if less than the
current node), else to its right.

Time Complexity: The worst case time complexity of search and insert
operations is O(h) where h is the height of the Binary Search Tree. In the
worst case, we may have to travel from the root to the deepest leaf node. The
height of a skewed tree may become n, and the time complexity of search and
insert operations may become O(n).
Binary Search Tree | Set 2 (Delete)
We have discussed BST search and insert operations. In this post, delete
operation is discussed. When we delete a node, three possibilities arise.
1) Node to be deleted is leaf: Simply remove from the tree.
         50                            50
       /    \       delete(20)       /    \
     30      70     --------->     30      70
    /  \    /  \                     \    /  \
  20    40 60    80                   40 60    80
2) Node to be deleted has only one child: Copy the child to the node and
delete the child
         50                            50
       /    \       delete(30)       /    \
     30      70     --------->     40      70
       \    /  \                          /  \
        40 60    80                     60    80
3) Node to be deleted has two children: Find inorder successor of the node.
Copy contents of the inorder successor to the node and delete the inorder
successor. Note that inorder predecessor can also be used.

         50                            60
       /    \       delete(50)       /    \
     40      70     --------->     40      70
            /  \                             \
          60    80                            80
The important thing to note is that the inorder successor is needed only when
the right child is not empty. In this particular case, the inorder successor
can be obtained by finding the minimum value in the right child of the node.
// Java program to demonstrate delete operation in binary search tree
class BinarySearchTree
{
    /* Class containing left and right child of current node and key value*/
    class Node
    {
        int key;
        Node left, right;
  
        public Node(int item)
        {
            key = item;
            left = right = null;
        }
    }
  
    // Root of BST
    Node root;
  
    // Constructor
    BinarySearchTree()
    {
        root = null;
    }
  
    // This method mainly calls deleteRec()
    void deleteKey(int key)
    {
        root = deleteRec(root, key);
    }
  
    /* A recursive function to delete a key in BST */
    Node deleteRec(Node root, int key)
    {
        /* Base Case: If the tree is empty */
        if (root == null)  return root;
  
        /* Otherwise, recur down the tree */
        if (key < root.key)
            root.left = deleteRec(root.left, key);
        else if (key > root.key)
            root.right = deleteRec(root.right, key);
  
        // if key is same as root's key, then This is the node
        // to be deleted
        else
        {
            // node with only one child or no child
            if (root.left == null)
                return root.right;
            else if (root.right == null)
                return root.left;
  
            // node with two children: Get the inorder successor (smallest
            // in the right subtree)
            root.key = minValue(root.right);
  
            // Delete the inorder successor
            root.right = deleteRec(root.right, root.key);
        }
  
        return root;
    }
  
    int minValue(Node root)
    {
        int minv = root.key;
        while (root.left != null)
        {
            minv = root.left.key;
            root = root.left;
        }
        return minv;
    }
  
    // This method mainly calls insertRec()
    void insert(int key)
    {
        root = insertRec(root, key);
    }
  
    /* A recursive function to insert a new key in BST */
    Node insertRec(Node root, int key)
    {
  
        /* If the tree is empty, return a new node */
        if (root == null)
        {
            root = new Node(key);
            return root;
        }
  
        /* Otherwise, recur down the tree */
        if (key < root.key)
            root.left = insertRec(root.left, key);
        else if (key > root.key)
            root.right = insertRec(root.right, key);
  
        /* return the (unchanged) node pointer */
        return root;
    }
  
    // This method mainly calls InorderRec()
    void inorder()
    {
        inorderRec(root);
    }
  
    // A utility function to do inorder traversal of BST
    void inorderRec(Node root)
    {
        if (root != null)
        {
            inorderRec(root.left);
            System.out.print(root.key + " ");
            inorderRec(root.right);
        }
    }
  
    // Driver Program to test above functions
    public static void main(String[] args)
    {
        BinarySearchTree tree = new BinarySearchTree();
  
        /* Let us create following BST
              50
           /     \
          30      70
         /  \    /  \
        20   40  60   80 */
        tree.insert(50);
        tree.insert(30);
        tree.insert(20);
        tree.insert(40);
        tree.insert(70);
        tree.insert(60);
        tree.insert(80);
  
        System.out.println("Inorder traversal of the given tree");
        tree.inorder();
  
        System.out.println("\nDelete 20");
        tree.deleteKey(20);
        System.out.println("Inorder traversal of the modified tree");
        tree.inorder();
  
        System.out.println("\nDelete 30");
        tree.deleteKey(30);
        System.out.println("Inorder traversal of the modified tree");
        tree.inorder();
  
        System.out.println("\nDelete 50");
        tree.deleteKey(50);
        System.out.println("Inorder traversal of the modified tree");
        tree.inorder();
    }
}

Output:
Inorder traversal of the given tree
20 30 40 50 60 70 80
Delete 20
Inorder traversal of the modified tree
30 40 50 60 70 80
Delete 30
Inorder traversal of the modified tree
40 50 60 70 80
Delete 50
Inorder traversal of the modified tree
40 60 70 80

Algorithms
Analysis of Algorithms | Set 1 (Asymptotic Analysis)
Why performance analysis?
There are many important things that should be taken care of, like user
friendliness, modularity, security, maintainability, etc. So why worry about
performance?
The answer to this is simple: we can have all the above things only if we have
performance. So performance is like a currency with which we can buy all the
above things. Another reason for studying performance is that speed is fun!
To summarize, performance == scale. Imagine a text editor that can load 1000
pages but can spell check only 1 page per minute, or an image editor that
takes 1 hour to rotate your image 90 degrees left, or ... you get it. If a
software feature cannot cope with the scale of tasks users need to perform,
it is as good as dead.

Given two algorithms for a task, how do we find out which one is better?
One naive way of doing this is – implement both the algorithms and run the
two programs on your computer for different inputs and see which one takes
less time. There are many problems with this approach for analysis of
algorithms.
1) It might be possible that for some inputs, the first algorithm performs
better than the second, while for other inputs the second performs better.
2) It might also be possible that for some inputs, the first algorithm
performs better on one machine while the second works better on another
machine for some other inputs.
Asymptotic Analysis is the big idea that handles the above issues in analyzing
algorithms. In Asymptotic Analysis, we evaluate the performance of an
algorithm in terms of input size (we don't measure the actual running time).
We calculate how the time (or space) taken by an algorithm increases with the
input size.
For example, let us consider the search problem (searching a given item) in a
sorted array. One way to search is Linear Search (order of growth is linear)
and the other way is Binary Search (order of growth is logarithmic). To
understand how Asymptotic Analysis solves the above mentioned problems in
analyzing algorithms, let us say we run Linear Search on a fast computer and
Binary Search on a slow computer. For small values of the input array size n,
the fast computer may take less time. But after a certain value of input array
size, Binary Search will definitely start taking less time compared to Linear
Search, even though Binary Search is being run on a slow machine. The reason
is that the order of growth of Binary Search with respect to input size is
logarithmic while the order of growth of Linear Search is linear. So the
machine dependent constants can always be ignored after a certain value of
input size.

Does Asymptotic Analysis always work?


Asymptotic Analysis is not perfect, but it is the best way available for
analyzing algorithms. For example, say there are two sorting algorithms that
take 1000nLogn and 2nLogn time respectively on a machine. Both of these
algorithms are asymptotically the same (order of growth is nLogn). So, with
Asymptotic Analysis, we can't judge which one is better, as we ignore
constants in Asymptotic Analysis.
Also, in Asymptotic analysis, we always talk about input sizes larger than a
constant value. It might be possible that those large inputs are never given to
your software and an algorithm which is asymptotically slower, always
performs better for your particular situation. So, you may end up choosing an
algorithm that is Asymptotically slower but faster for your software.
Analysis of Algorithms | Set 2 (Worst, Average and Best
Cases)
In the previous post, we discussed how Asymptotic analysis overcomes the
problems of naive way of analyzing algorithms. In this post, we will take an
example of Linear Search and analyze it using Asymptotic analysis.
We can have three cases to analyze an algorithm:
1) Worst Case
2) Average Case
3) Best Case
Let us consider the following implementation of Linear Search
// Java implementation of the approach 
  
public class GFG {
  
// Linearly search x in arr[].  If x is present then return the index,
// otherwise return -1
    static int search(int arr[], int n, int x) {
        int i;
        for (i = 0; i < n; i++) {
            if (arr[i] == x) {
                return i;
            }
        }
        return -1;
    }
  
    /* Driver program to test above functions*/
    public static void main(String[] args) {
        int arr[] = {1, 10, 30, 15};
        int x = 30;
        int n = arr.length;
        System.out.printf("%d is present at index %d", x, search(arr, n, x));
  
    }
}

Output:
30 is present at index 2
Worst Case Analysis (Usually Done)
In the worst case analysis, we calculate an upper bound on the running time of
an algorithm. We must know the case that causes the maximum number of
operations to be executed. For Linear Search, the worst case happens when the
element to be searched (x in the above code) is not present in the array. When
x is not present, the search() function compares it with all the elements of
arr[] one by one. Therefore, the worst case time complexity of linear search
would be Θ(n).
Average Case Analysis (Sometimes done)
In average case analysis, we take all possible inputs and calculate computing
time for all of the inputs. Sum all the calculated values and divide the sum by
total number of inputs. We must know (or predict) distribution of cases. For
the linear search problem, let us assume that all cases are uniformly
distributed (including the case of x not being present in array). So we sum all
the cases and divide the sum by (n+1). Following is the value of average case
time complexity.
Average Case Time = ( Σ from i = 1 to n+1 of θ(i) ) / (n + 1)
                  = θ( (n+1)(n+2)/2 ) / (n + 1)
                  = Θ(n)
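The averaging argument can also be checked with a short program: count the comparisons linear search makes in each of the n+1 equally likely cases (x at each of the n positions, plus x not present) and divide by n+1. This is an illustrative sketch (the AvgCase class name is an assumption for this example); the average comes out near n/2, i.e. Θ(n):

```java
public class AvgCase {
    // Comparisons linear search makes when x sits at index pos;
    // pos == n stands for "x not present" (all n elements compared)
    static int comparisons(int n, int pos) {
        return (pos == n) ? n : pos + 1;
    }

    public static void main(String[] args) {
        int n = 100;
        int total = 0;
        for (int pos = 0; pos <= n; pos++)   // n + 1 equally likely cases
            total += comparisons(n, pos);
        double avg = (double) total / (n + 1);
        System.out.println(avg);             // roughly n/2, i.e. Theta(n)
    }
}
```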
Best Case Analysis (Bogus)
In the best case analysis, we calculate lower bound on running time of an
algorithm. We must know the case that causes minimum number of
operations to be executed. In the linear search problem, the best case occurs
when x is present at the first location. The number of operations in the best
case is constant (not dependent on n). So time complexity in the best case
would be Θ(1)
Most of the time, we do worst case analysis to analyze algorithms. In the
worst case analysis, we guarantee an upper bound on the running time of an
algorithm, which is good information.
The average case analysis is not easy to do in most of the practical cases and
it is rarely done. In the average case analysis, we must know (or predict) the
mathematical distribution of all possible inputs.
Analysis of Algorithms | Set 3 (Asymptotic Notations)
We have discussed Asymptotic Analysis, and Worst, Average and Best
Cases of Algorithms. The main idea of asymptotic analysis is to have a
measure of efficiency of algorithms that doesn’t depend on machine specific
constants, and doesn’t require algorithms to be implemented and time taken
by programs to be compared. Asymptotic notations are mathematical tools to
represent time complexity of algorithms for asymptotic analysis. The following
3 asymptotic notations are mostly used to represent time complexity of
algorithms.

1) Θ Notation: The theta notation bounds a function from above and below, so
it defines exact asymptotic behavior.
A simple way to get the Theta notation of an expression is to drop low order
terms and ignore leading constants. For example, consider the following
expression.
3n^3 + 6n^2 + 6000 = Θ(n^3)
Dropping lower order terms is always fine because there will always be an n0
after which Θ(n^3) has higher values than Θ(n^2), irrespective of the
constants involved.
For a given function g(n), we denote by Θ(g(n)) the following set of functions:
Θ(g(n)) = {f(n): there exist positive constants c1, c2 and n0 such
that 0 <= c1*g(n) <= f(n) <= c2*g(n) for all n >= n0}
The above definition means, if f(n) is theta of g(n), then the value f(n) is
always between c1*g(n) and c2*g(n) for large values of n (n >= n0). The
definition of theta also requires that f(n) must be non-negative for values of n
greater than n0.

2) Big O Notation: The Big O notation defines an upper bound of an algorithm;
it bounds a function only from above. For example, consider the case of
Insertion Sort. It takes linear time in the best case and quadratic time in
the worst case. We can safely say that the time complexity of Insertion Sort
is O(n^2). Note that O(n^2) also covers linear time.
If we use Θ notation to represent time complexity of Insertion sort, we have to
use two statements for best and worst cases:
1. The worst case time complexity of Insertion Sort is Θ(n^2).
2. The best case time complexity of Insertion Sort is Θ(n).
The Big O notation is useful when we only have upper bound on time
complexity of an algorithm. Many times we easily find an upper bound by
simply looking at the algorithm.
O(g(n)) = { f(n): there exist positive constants c and
n0 such that 0 <= f(n) <= c*g(n) for
all n >= n0}

3) Ω Notation: Just as Big O notation provides an asymptotic upper bound on a
function, Ω notation provides an asymptotic lower bound.
Ω notation can be useful when we have a lower bound on the time complexity of
an algorithm. As discussed in the previous post, the best case performance of
an algorithm is generally not useful, so the Omega notation is the least used
notation among all three.
For a given function g(n), we denote by Ω(g(n)) the set of functions.
Ω (g(n)) = {f(n): there exist positive constants c and
n0 such that 0 <= c*g(n) <= f(n) for
all n >= n0}.
Let us consider the same Insertion sort example here. The time complexity of
Insertion Sort can be written as Ω(n), but it is not a very useful information
about insertion sort, as we are generally interested in worst case and
sometimes in average case.
Analysis of Algorithms | Set 4 (Analysis of Loops)
We have discussed Asymptotic Analysis,  Worst, Average and Best
Cases  and Asymptotic Notations in previous posts. In this post, analysis of
iterative programs with simple examples is discussed.
1) O(1): The time complexity of a function (or set of statements) is
considered O(1) if it doesn't contain a loop, recursion, or a call to any
other non-constant-time function.
// set of non-recursive and non-loop statements
For example, the swap() function has O(1) time complexity.
A loop or recursion that runs a constant number of times is also considered
O(1). For example, the following loop is O(1).

// Here c is a constant
for (int i = 1; i <= c; i++) {
    // some O(1) expressions
}

2) O(n): The time complexity of a loop is considered O(n) if the loop
variable is incremented / decremented by a constant amount. For example, the
following functions have O(n) time complexity.
// Here c is a positive integer constant
for (int i = 1; i <= n; i += c) {
    // some O(1) expressions
}

for (int i = n; i > 0; i -= c) {
    // some O(1) expressions
}

3) O(n^c): The time complexity of nested loops is equal to the number of
times the innermost statement is executed. For example, the following sample
loops have O(n^2) time complexity.

for (int i = 1; i <= n; i += c) {
    for (int j = 1; j <= n; j += c) {
        // some O(1) expressions
    }
}

for (int i = n; i > 0; i -= c) {
    for (int j = i + 1; j <= n; j += c) {
        // some O(1) expressions
    }
}

For example, Selection Sort and Insertion Sort have O(n^2) time complexity.


4) O(Log n): The time complexity of a loop is considered O(Log n) if the loop
variable is divided / multiplied by a constant amount.
for (int i = 1; i <= n; i *= c) {
    // some O(1) expressions
}
for (int i = n; i > 0; i /= c) {
    // some O(1) expressions
}

For example, Binary Search (refer to the iterative implementation) has
O(Log n) time complexity. Let us see mathematically how it is O(Log n). The
series that we get in the first loop is 1, c, c^2, c^3, ..., c^k. If we put k
equal to Log_c(n), we get c^(Log_c n), which is n.
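For reference, here is a minimal iterative binary search whose loop halves the interval [lo, hi] on each step, which is exactly the O(Log n) pattern described above (the class name is an illustrative assumption):

```java
public class BinarySearchDemo {
    // Iterative binary search on a sorted array: returns index of x, or -1
    static int binarySearch(int[] arr, int x) {
        int lo = 0, hi = arr.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;   // avoids overflow of (lo + hi)
            if (arr[mid] == x) return mid;
            if (arr[mid] < x) lo = mid + 1; // discard the left half
            else hi = mid - 1;              // discard the right half
        }
        return -1;                          // x not present
    }

    public static void main(String[] args) {
        int[] arr = {2, 3, 4, 10, 40};
        System.out.println(binarySearch(arr, 10)); // 3
        System.out.println(binarySearch(arr, 5));  // -1
    }
}
```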
5) O(Log Log n): The time complexity of a loop is considered O(Log Log n) if
the loop variable is reduced / increased exponentially by a constant amount.
// Here c is a constant greater than 1
for (int i = 2; i <= n; i = pow(i, c)) {
    // some O(1) expressions
}
// Here fun is sqrt or cuberoot or any other constant root
for (int i = n; i > 1; i = fun(i)) {
    // some O(1) expressions
}
See this for mathematical details.
How to combine time complexities of consecutive loops?
When there are consecutive loops, we calculate time complexity as sum of
time complexities of individual loops.
for (int i = 1; i <= m; i += c) {
    // some O(1) expressions
}
for (int i = 1; i <= n; i += c) {
    // some O(1) expressions
}
Time complexity of above code is O(m) + O(n) which is O(m+n)
If m == n, the time complexity becomes O(2n) which is O(n).

How to calculate time complexity when there are many if, else
statements inside loops?
As discussed here, worst case time complexity is the most useful among best,
average and worst. Therefore we need to consider worst case. We evaluate
the situation when values in if-else conditions cause maximum number of
statements to be executed.
For example consider the linear search function where we consider the case
when element is present at the end or not present at all.
Analysis of Algorithms | Set 5 (Solving Recurrences)
In the previous post, we discussed analysis of loops. Many algorithms are
recursive in nature. When we analyze them, we get a recurrence relation for
time complexity. We get running time on an input of size n as a function of n
and the running time on inputs of smaller sizes. For example in Merge Sort, to
sort a given array, we divide it in two halves and recursively repeat the
process for the two halves. Finally we merge the results. Time complexity of
Merge Sort can be written as T(n) = 2T(n/2) + cn. There are many other
algorithms like Binary Search, Tower of Hanoi, etc.
There are mainly three ways for solving recurrences.
1) Substitution Method: We make a guess for the solution and then we use
mathematical induction to prove the guess is correct or incorrect.

For example, consider the recurrence T(n) = 2T(n/2) + n.

We guess the solution as T(n) = O(nLogn), and then use induction to prove our
guess.

We need to prove that T(n) <= cnLogn. We can assume that it is true for
values smaller than n.
T(n) = 2T(n/2) + n
    <= 2 (c(n/2) Log(n/2)) + n
     = cnLog(n/2) + n
     = cnLogn - cnLog2 + n
     = cnLogn - cn + n
    <= cnLogn

2) Recurrence Tree Method: In this method, we draw a recurrence tree and
calculate the time taken by every level of the tree. Finally, we sum the work
done at all levels. To draw the recurrence tree, we start from the given
recurrence and keep drawing until we find a pattern among levels. The pattern
is typically an arithmetic or geometric series.
For example, consider the recurrence relation
T(n) = T(n/4) + T(n/2) + cn^2

                cn^2
               /    \
           T(n/4)  T(n/2)

If we further break down the expressions T(n/4) and T(n/2), we get the
following recursion tree.

                cn^2
               /    \
       c(n^2)/16    c(n^2)/4
        /    \        /    \
   T(n/16) T(n/8)  T(n/8) T(n/4)

Breaking down further gives us the following:

                cn^2
               /    \
       c(n^2)/16    c(n^2)/4
        /    \        /    \
c(n^2)/256 c(n^2)/64 c(n^2)/64 c(n^2)/16
   / \       / \       / \       / \

To find the value of T(n), we need to calculate the sum of the tree nodes
level by level. Summing the above tree level by level gives the series
T(n) = cn^2 + 5c(n^2)/16 + 25c(n^2)/256 + ....
This is a geometric progression with ratio 5/16.

To get an upper bound, we can sum the infinite series. Using the formula for
the sum of an infinite geometric series, we get c(n^2)/(1 - 5/16), which is
O(n^2).

3) Master Method:
The Master Method is a direct way to get the solution. It works only for
recurrences of the following type, or for recurrences that can be transformed
into it:
T(n) = aT(n/b) + f(n), where a >= 1 and b > 1
There are three cases:
1. If f(n) = Θ(n^c) where c < Log_b(a), then T(n) = Θ(n^(Log_b(a)))
2. If f(n) = Θ(n^c) where c = Log_b(a), then T(n) = Θ(n^c * Log n)
3. If f(n) = Θ(n^c) where c > Log_b(a), then T(n) = Θ(f(n))
How does this work?
The master method is mainly derived from the recurrence tree method. If we
draw the recurrence tree of T(n) = aT(n/b) + f(n), we can see that the work
done at the root is f(n) and the work done at all leaves is Θ(n^c) where
c = Log_b(a). The height of the recurrence tree is Log_b(n).

In recurrence tree method, we calculate total work done. If the work done at
leaves is polynomially more, then leaves are the dominant part, and our result
becomes the work done at leaves (Case 1). If work done at leaves and root is
asymptotically same, then our result becomes height multiplied by work done
at any level (Case 2). If work done at root is asymptotically more, then our
result becomes work done at root (Case 3).
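As a worked check, the master method can be applied to the two recurrences mentioned in this article:

```latex
\text{Merge Sort: } T(n) = 2T(n/2) + cn.
\quad a = 2,\ b = 2,\ f(n) = \Theta(n^1),\ \log_b a = \log_2 2 = 1.
\text{Since } c = 1 = \log_b a \text{ (Case 2): } T(n) = \Theta(n \log n).

\text{Binary Search: } T(n) = T(n/2) + c.
\quad a = 1,\ b = 2,\ f(n) = \Theta(n^0),\ \log_b a = \log_2 1 = 0.
\text{Since } c = 0 = \log_b a \text{ (Case 2): } T(n) = \Theta(\log n).
```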
What does ‘Space Complexity’ mean?
Space Complexity:
The term Space Complexity is often misused for Auxiliary Space. Following are
the correct definitions of Auxiliary Space and Space Complexity.
Auxiliary Space is the extra or temporary space used by an algorithm.
Space Complexity of an algorithm is the total space taken by the algorithm
with respect to the input size. Space complexity includes both auxiliary space
and the space used by the input.

For example, if we want to compare standard sorting algorithms on the basis
of space, then Auxiliary Space would be a better criterion than Space
Complexity: Merge Sort uses O(n) auxiliary space, while Insertion Sort and
Heap Sort use O(1) auxiliary space. The space complexity of all these sorting
algorithms is O(n), though.

Searching Algorithms
Linear Search

// Java code for linearly searching x in arr[]. If x
// is present then return its location, otherwise
// return -1

class GFG
{
public static int search(int arr[], int x)
{
    int n = arr.length;
    for(int i = 0; i < n; i++)
    {
        if(arr[i] == x)
            return i;
    }
    return -1;
}
  
public static void main(String args[])
{
    int arr[] = { 2, 3, 4, 10, 40 }; 
    int x = 10;
      
    int result = search(arr, x);
    if(result == -1)
        System.out.print("Element is not present in array");
    else
        System.out.print("Element is present at index " + result);
}
}
Output:
Element is present at index 3
The time complexity of the above algorithm is O(n).

Binary Search
// Java implementation of recursive Binary Search
class BinarySearch {
    // Returns index of x if it is present in arr[l..
    // r], else return -1
    int binarySearch(int arr[], int l, int r, int x)
    {
        if (r >= l) {
            int mid = l + (r - l) / 2;
  
            // If the element is present at the
            // middle itself
            if (arr[mid] == x)
                return mid;
  
            // If x is smaller than the middle element, then
            // it can only be present in the left subarray
            if (arr[mid] > x)
                return binarySearch(arr, l, mid - 1, x);
  
            // Else the element can only be present
            // in right subarray
            return binarySearch(arr, mid + 1, r, x);
        }
  
        // We reach here when element is not present
        // in array
        return -1;
    }
  
    // Driver method to test above
    public static void main(String args[])
    {
        BinarySearch ob = new BinarySearch();
        int arr[] = { 2, 3, 4, 10, 40 };
        int n = arr.length;
        int x = 10;
        int result = ob.binarySearch(arr, 0, n - 1, x);
        if (result == -1)
            System.out.println("Element not present");
        else
            System.out.println("Element found at index " + result);
    }
}
Output :
Element found at index 3

Iterative implementation of Binary Search


// Java implementation of iterative Binary Search
class BinarySearch {
    // Returns index of x if it is present in arr[],
    // else return -1
    int binarySearch(int arr[], int x)
    {
        int l = 0, r = arr.length - 1;
        while (l <= r) {
            int m = l + (r - l) / 2;
  
            // Check if x is present at mid
            if (arr[m] == x)
                return m;
  
            // If x greater, ignore left half
            if (arr[m] < x)
                l = m + 1;
  
            // If x is smaller, ignore right half
            else
                r = m - 1;
        }
  
        // if we reach here, then element was
        // not present
        return -1;
    }
  
    // Driver method to test above
    public static void main(String args[])
    {
        BinarySearch ob = new BinarySearch();
        int arr[] = { 2, 3, 4, 10, 40 };
        int n = arr.length;
        int x = 10;
        int result = ob.binarySearch(arr, x);
        if (result == -1)
            System.out.println("Element not present");
        else
            System.out.println("Element found at "
                               + "index " + result);
    }
}

Output :
Element found at index 3
Time Complexity:
The time complexity of Binary Search can be written as
T(n) = T(n/2) + c
This recurrence solves to O(Log n).

Sorting Algorithms
Bubble Sort

// Java program for implementation of Bubble Sort


class BubbleSort
{
    void bubbleSort(int arr[])
    {
        int n = arr.length;
        for (int i = 0; i < n-1; i++)
            for (int j = 0; j < n-i-1; j++)
                if (arr[j] > arr[j+1])
                {
                    // swap arr[j] and arr[j+1]
                    int temp = arr[j];
                    arr[j] = arr[j+1];
                    arr[j+1] = temp;
                }
    }
  
    /* Prints the array */
    void printArray(int arr[])
    {
        int n = arr.length;
        for (int i=0; i<n; ++i)
            System.out.print(arr[i] + " ");
        System.out.println();
    }
  
    // Driver method to test above
    public static void main(String args[])
    {
        BubbleSort ob = new BubbleSort();
        int arr[] = {64, 34, 25, 12, 22, 11, 90};
        ob.bubbleSort(arr);
        System.out.println("Sorted array");
        ob.printArray(arr);
    }
}

Output:
Sorted array
11 12 22 25 34 64 90
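The implementation above always performs all O(n^2) comparisons, even when the array is already sorted. A common optimization (sketched below; not part of the original listing) tracks whether a pass made any swaps and stops early if it did not, improving the best case to O(n):

```java
// Optimized Bubble Sort: stops as soon as a full pass makes no swaps,
// so an already-sorted input is detected after one O(n) pass.
class OptimizedBubbleSort {
    void bubbleSort(int arr[])
    {
        int n = arr.length;
        for (int i = 0; i < n - 1; i++) {
            boolean swapped = false;
            for (int j = 0; j < n - i - 1; j++) {
                if (arr[j] > arr[j + 1]) {
                    // swap arr[j] and arr[j+1]
                    int temp = arr[j];
                    arr[j] = arr[j + 1];
                    arr[j + 1] = temp;
                    swapped = true;
                }
            }
            // If no two elements were swapped in this pass,
            // the array is already sorted
            if (!swapped)
                break;
        }
    }

    public static void main(String[] args)
    {
        OptimizedBubbleSort ob = new OptimizedBubbleSort();
        int arr[] = { 64, 34, 25, 12, 22, 11, 90 };
        ob.bubbleSort(arr);
        for (int i = 0; i < arr.length; i++)
            System.out.print(arr[i] + " ");
    }
}
```

The worst and average cases remain O(n^2); only the best case improves.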
