
DESIGN AND ANALYSIS

OF
ALGORITHMS

ASSIGNMENT

ASYMPTOTIC NOTATIONS

SUBMITTED BY:
o SANSKAR MISHRA (SAP ID: 500052531, Roll No: 25)
o SHREYA (SAP ID: 500047217, Roll No: 64)

SUBMITTED TO:
MR. KESHAV KAUSHIK
TABLE OF CONTENTS-
 What is asymptotic notation?
 Why do we need to use asymptotic notation?
 Ways to classify asymptotic notation
 Classification of asymptotic notations
 Big Oh (O) notation
 Big Omega (Ω) notation
 Theta (Θ) notation
 Small o notation
 Small ω notation
 Multiple choice questions about asymptotic notation
 True and False questions about asymptotic notation
WHAT IS ASYMPTOTIC NOTATION?

In order to understand asymptotic notation, we first need to understand what "asymptote" and "notation" mean separately; only then can we work out the real meaning of "asymptotic notation".

An asymptote is theoretically defined as a value that you get closer and closer to but never actually reach. In mathematics, an asymptote is a horizontal or vertical line that a curve approaches as it heads to infinity.

For example: imagine that you are running a 100 m race. After a 30 m sprint, you are exhausted, and each additional stretch of effort only carries you half of the remaining distance to the finish line. One stretch later, you have covered half of what was left; the next stretch covers only a quarter of it, then an eighth, and the pattern continues. As your exhausted body drags more and more, it seems like you will never get there: each additional effort only brings you halfway to the finish line from where you were previously.
Now that we have understood what an asymptote really means, we will try to understand what notation means, and then combine the two to understand the meaning of asymptotic notation.

According to the dictionary, notation means a series or system of written symbols used to represent numbers, amounts, or elements in something such as music or mathematics.

So asymptotic notation can now be described in simple language: an asymptote is a curve that never reaches its limiting value, and the symbols by which we denote such limiting behaviour are known as asymptotic notations.

To be more precise, asymptotic notations give us a way to analyse an algorithm's running time and its behaviour by describing how it changes as the input to the algorithm increases or decreases. Asymptotic notations are the mathematical tools used to represent the time complexity of algorithms.
WHY DO WE NEED TO USE ASYMPTOTIC
NOTATION?

So far we have read that asymptotic notation is a tool to analyse an algorithm's complexity as the input size changes. But the question that arises next is: what is the need to use asymptotic notation?

The following points should give us a clearer idea of why asymptotic notations are needed-

 When we study an algorithm, we are mainly interested in characterizing it in terms of its efficiency.
 Efficiency could be measured by how long a computer takes to run the lines of the program, but that depends on the speed of the computer, the programming language, and the compiler that translates the program.
 Instead, the area of focus is how fast a function grows with the input size; this is known as the rate of growth.
 Therefore we need to establish a way to talk about the rate of growth of functions so that we can compare algorithms.
 This is where asymptotic notation comes into play: it gives us a method for classifying functions according to their rate of growth.

Hence we can conclude that asymptotic notation helps us to understand an algorithm more clearly and to classify functions according to their rate of growth.
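The rate-of-growth idea above can be made concrete with a short sketch. The step-count functions below are illustrative assumptions, not taken from any particular algorithm; they simply show how a faster-growing function pulls ahead as n increases.

```python
import math

# Hypothetical step-count functions with different rates of growth.
def linear(n):
    return n

def log_linear(n):
    return n * math.log2(n)

def quadratic(n):
    return n * n

# As n grows, the faster-growing function dominates,
# regardless of constant factors.
for n in (10, 100, 1000):
    print(n, linear(n), round(log_linear(n)), quadratic(n))
```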
WAYS TO CLASSIFY ASYMPTOTIC
NOTATION

One can characterize an algorithm with an asymptotic notation in several different ways. For example, one can describe an algorithm by its best case, worst case, or average case. By these cases we mean-
 Best Case − Minimum time taken by the program to get executed.

 Average Case − Average time taken by the program to get executed.

 Worst Case − Maximum time taken by the program to get executed.

The most common way is to analyse an algorithm by its worst case. We typically don't evaluate by best case, because those conditions aren't what we're planning for: if our program is ready for the worst case that can happen, it is automatically safe in the average and best cases. (Even in daily life, we should always be prepared for the worst that could happen.) A very good example of this is sorting algorithms, specifically adding elements to a tree structure. The best case for most of these algorithms can be as low as a single operation. However, in most cases, the element we're adding will need to be placed appropriately in the tree, which could mean examining an entire branch. This is the worst case, and this is what we plan for.
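A minimal sketch of best versus worst case, using linear search as an assumed example (it is not one of the cases discussed above): the number of comparisons depends on where, or whether, the target appears in the input.

```python
# Count the comparisons a linear search makes before stopping.
def linear_search_comparisons(items, target):
    comparisons = 0
    for x in items:
        comparisons += 1
        if x == target:
            return comparisons
    return comparisons  # target absent: every element was examined

data = list(range(100))
best = linear_search_comparisons(data, 0)     # best case: target is first -> 1 comparison
worst = linear_search_comparisons(data, -1)   # worst case: target absent -> 100 comparisons
print(best, worst)
```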

CLASSIFICATION OF ASYMPTOTIC
NOTATIONS

Asymptotic notation can be classified as follows –

 O-Notation (Big Oh).

 Ω-Notation (Big Omega).

 Θ-Notation (Theta).

 o-Notation (Little Oh).

 ω-Notation (Little Omega).

Now let's take a look at each of them step by step: what they are, how they work, what their uses are, and what we usually take into consideration.

BIG O NOTATION-
Some key points about Big O are-

 Big O notation is commonly written as O and read as "Oh".

 The Big O notation is an upper bound of an algorithm; it bounds a function only from above.
 It is the asymptotic notation for the worst case, or the ceiling of growth, for a given function.
 In simple language, it can be described as a way to estimate the upper limit of a function when the given input is very large.

Mathematically it can be written as,

O(g(n)) = { f(n): there exist positive constants c and k
            such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ k }

In the mathematical format above, we have chosen an arbitrary constant k as the standard for large input, meaning our input is considered large only when n ≥ k. After that, we have defined the upper bound for the function f(n) with the help of a function g(n) and a constant c. It means that the value of f(n) will always be less than or equal to c·g(n), where g(n) can be any non-negative function, for all sufficiently large values of n.

Big Oh notation is one of the most widely used notations for describing time complexity, because it deals with the upper bound of an algorithm, which helps us decide which algorithm would be better. For example: an algorithm of complexity O(n) will do better than an algorithm of complexity O(n²) for sufficiently large input (this may not be true when the input is small).

When we say that the running time is "big-O of f(n)", or just "O of f(n)", we are using big-O notation for an asymptotic upper bound, since it is meant to bound the growth of the running time from above for large enough input sizes.
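The definition can be checked numerically for a concrete pair of functions. Here f(n) = 3n + 10 and the witnesses c = 4, k = 10 are assumptions chosen for illustration; they satisfy f(n) ≤ c·g(n) with g(n) = n for all n ≥ k.

```python
def f(n):
    return 3 * n + 10   # hypothetical running-time function

def g(n):
    return n            # candidate upper-bound function

# Witnesses for the Big-O definition: 3n + 10 <= 4n once n >= 10.
c, k = 4, 10
holds = all(0 <= f(n) <= c * g(n) for n in range(k, 10_000))
print(holds)  # → True
```

A finite check like this only illustrates the definition; the inequality 3n + 10 ≤ 4n for n ≥ 10 is what actually proves it for all n.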
BIG OMEGA (Ω) NOTATION-

Some key points about big omega notations are-

 Big Omega notation is commonly written as Ω.

 The Big Omega notation is a lower bound of an algorithm; it provides an asymptotic lower bound.
 It is the asymptotic notation for the best case, or the floor of growth, for a given function.
 It is essentially the reverse of Big Oh.

Mathematically it can be written as,

Ω(g(n)) = { f(n): there exist positive constants c and k
            such that 0 ≤ c·g(n) ≤ f(n) for all n ≥ k }

In the mathematical format above we can see that the value of the function is always greater than or equal to the lower bound, which is c·g(n).

Omega notation is used when one is interested in the least amount of time required by a function/algorithm; since that is rarely the quantity of interest, it is one of the least used notations in comparison to the others.

In simple language, it describes an algorithm that takes at least a certain amount of time, without providing any upper bound. The symbol for big-Ω notation is derived from the Greek letter "omega". We would say that the running time is "big-Ω of f(n)". We use big-Ω notation for asymptotic lower bounds, as it bounds the growth of the running time from below for large enough input sizes.

One can also make accurate but imprecise statements from big-Ω notation.
For example: if we have one lakh rupees in our pocket, we can truthfully say
"I have an amount of money in my pocket, and it's at least 10 rupees." That
is correct, but certainly not very precise.
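As with Big O, the Ω definition can be checked for a concrete example. The functions f(n) = 5n + 2 and g(n) = n, with witnesses c = 5 and k = 1, are assumptions chosen for illustration; they satisfy c·g(n) ≤ f(n).

```python
def f(n):
    return 5 * n + 2    # hypothetical running-time function

def g(n):
    return n            # candidate lower-bound function

# Witnesses for the Big-Omega definition: 5n <= 5n + 2 for every n >= 1.
c, k = 5, 1
holds = all(0 <= c * g(n) <= f(n) for n in range(k, 10_000))
print(holds)  # → True
```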

Θ-NOTATION (THETA):

Some key points about theta notation are-

 Theta notation is commonly written as Θ.

 The theta notation is used to describe the asymptotic efficiency of algorithms. Although Θ(f(n)) is formally a set of functions, by convention we write h(n) = Θ(f(n)) rather than h(n) ∈ Θ(f(n)).
Mathematically it can be written as:

Θ(g(n)) = { f(n): there exist positive constants k1, k2 and n0
            such that 0 ≤ k1·g(n) ≤ f(n) ≤ k2·g(n) for all n ≥ n0 }

The theta notation bounds a function from above and below, so it defines exact asymptotic behaviour. A simple way to get the Theta notation of an expression is to drop the lower order terms and ignore the leading constants. For example, consider the following expression:

3n³ + 6n² + 6000 = Θ(n³)

Dropping lower order terms is always fine because there will always be an n0 after which n³ has higher values than n², irrespective of the constants involved.
For a given function g(n), Θ(g(n)) denotes the set of functions defined above. The definition means that if f(n) is theta of g(n), then the value of f(n) is always between k1·g(n) and k2·g(n) for large values of n (n ≥ n0). The definition of theta also requires that f(n) be non-negative for values of n greater than n0.

Example:
Show that n² + 3n + 4 = Θ(n²).
Proof:
• When n ≥ 1, n² + 3n + 4 ≤ n² + 3n² + 4n² = 8n²
• When n ≥ 0, n² ≤ n² + 3n + 4
• Thus, when n ≥ 1, 1·n² ≤ n² + 3n + 4 ≤ 8n²

Thus, we have shown that n² + 3n + 4 = Θ(n²)

(by definition of Big-Θ, with n0 = 1, k1 = 1, and k2 = 8.)
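The sandwich inequality in the proof can be verified numerically over a range of n (a finite check only illustrates the bound; the proof itself covers all n ≥ n0):

```python
def f(n):
    return n * n + 3 * n + 4

# Constants from the proof: 1*n^2 <= f(n) <= 8*n^2 for all n >= 1.
n0, k1, k2 = 1, 1, 8
holds = all(k1 * n * n <= f(n) <= k2 * n * n for n in range(n0, 10_000))
print(holds)  # → True
```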

SMALL o NOTATION:

Some key points about o notation:

 Small-o, commonly written as o, is an asymptotic notation used to denote an upper bound (that is not asymptotically tight) on the growth rate of the runtime of an algorithm.
 f(n) is o(g(n)) if for every real constant c (c > 0) there exists an n0 (n0 > 0) such that f(n) < c·g(n) for every input size n (n > n0).
 The definitions of O-notation and o-notation are similar. The main difference is that in f(n) = O(g(n)), the bound f(n) ≤ c·g(n) holds for some constant c > 0, but in f(n) = o(g(n)), the bound f(n) < c·g(n) holds for all constants c > 0.

Let f(n) and g(n) be functions that map positive integers to positive real numbers. We say that f(n) is o(g(n)) (or f(n) ∈ o(g(n))) if for any real constant c > 0, there exists an integer constant n0 ≥ 1 such that 0 ≤ f(n) < c·g(n) for every integer n ≥ n0.

In other words, little-o gives a loose (not tight) upper bound on f(n).

Mathematically it can be written as:

f(n) = o(g(n))

means

lim (n→∞) f(n)/g(n) = 0

Examples:

Is 6n + 7 ∈ o(n²)?
For that to be true, for any c we have to be able to find an n0 that makes f(n) < c·g(n) asymptotically true.
If c = 100, the inequality is clearly true. If c = 1/100, we'll have to use a little more imagination, but we'll be able to find an n0. (Try n0 = 1000.) From these examples, the conjecture appears to be correct.
Then check the limit:

lim (n→∞) f(n)/g(n) = lim (n→∞) (6n + 7)/n² = lim (n→∞) 6/(2n) = 0   (by L'Hôpital's rule)
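The limit argument can be illustrated numerically: the ratio (6n + 7)/n² keeps shrinking toward 0 as n grows, consistent with 6n + 7 ∈ o(n²).

```python
def ratio(n):
    # Ratio of f(n) = 6n + 7 to g(n) = n^2; tends to 0 as n -> infinity.
    return (6 * n + 7) / (n * n)

for n in (10, 100, 1000, 10_000):
    print(n, ratio(n))
```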

SMALL ω NOTATION:

Some key points about small ω notation:

 Small-omega, commonly written as ω, is an asymptotic notation used to denote a lower bound (that is not asymptotically tight) on the growth rate of the runtime of an algorithm.
 f(n) is ω(g(n)) if for every real constant c (c > 0) there exists an n0 (n0 > 0) such that f(n) > c·g(n) for every input size n (n > n0).
 The definitions of Ω-notation and ω-notation are similar. The main difference is that in f(n) = Ω(g(n)), the bound f(n) ≥ c·g(n) holds for some constant c > 0, but in f(n) = ω(g(n)), the bound f(n) > c·g(n) holds for all constants c > 0.
Mathematically it can be written as:

if f(n) ∈ ω(g(n))
then
lim (n→∞) f(n)/g(n) = ∞
Let f(n) and g(n) be functions that map positive integers to positive real numbers. We say that f(n) is ω(g(n)) (or f(n) ∈ ω(g(n))) if for any real constant c > 0, there exists an integer constant n0 ≥ 1 such that f(n) > c·g(n) ≥ 0 for every integer n ≥ n0.
Here f(n) has a strictly higher growth rate than g(n). The main difference between Big Omega (Ω) and little omega (ω) lies in their definitions: in the case of Big Omega, f(n) = Ω(g(n)) and the bound 0 ≤ c·g(n) ≤ f(n) holds for some c > 0, but in the case of little omega, 0 ≤ c·g(n) < f(n) holds for all c > 0.
Example:

3n + 5 ∈ ω(1)

The little omega (ω) relationship can be proven by applying the limit formula given below:

if lim (n→∞) f(n)/g(n) = ∞, then f(n) is ω(g(n))

Here we have the functions f(n) = 3n + 5 and g(n) = 1:

lim (n→∞) (3n + 5)/1 = ∞

Also, for any c we can find an n0 for the inequality 0 ≤ c·g(n) < f(n), i.e. 0 ≤ c·1 < 3n + 5.
Hence proved.
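The same numeric style of check works here: for an arbitrary constant c (the value below is just an assumed example), an n0 can be found beyond which 3n + 5 > c·1.

```python
def f(n):
    return 3 * n + 5

def g(n):
    return 1

# Pick an arbitrarily large c; find the smallest n0 with f(n) > c * g(n).
# Past that point the inequality holds for every n, as ω(1) requires.
c = 1_000_000
n0 = next(n for n in range(1, 10**7) if f(n) > c * g(n))
print(n0, all(f(n) > c * g(n) for n in range(n0, n0 + 1000)))
```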

COMMON GRAPH OF ALL NOTATIONS:


MULTIPLE CHOICE QUESTIONS ABOUT ASYMPTOTIC
NOTATION:

1.) For the functions n^k and c^n, what is the asymptotic relationship between these functions? Assume that k >= 1 and c > 1 are constants.

a.) n^k is O(c^n)

b.) n^k is Ω(c^n)

c.) n^k is Θ(c^n)


2.) For the functions lg n and log_8 n, what is the asymptotic relationship between these functions?

a.) lg n is O(log_8 n)

b.) lg n is Ω(log_8 n)

c.) lg n is Θ(log_8 n)

3.) For the functions log2 n and log8 n what is the asymptotic relationship
between these functions?

a.) log2 n is O(log8 n)

b.) log2 n is Ω(log8 n)

c.) log2 n is Θ(log8 n)

4.) Space complexity of an algorithm is the maximum amount of _______


required by it during execution.
a.) Time
b.) operations
c.) Memory space
d.) None of the above

5.) Frequently, the memory space required by an algorithm is a multiple of


the size of input. State if the statement is True or False or Maybe.
a.) True
b) false
c) maybe
d) none of the above
6.)For many problems such as sorting, there are many choices of algorithms
to use, some of which are extremely___________.
a.) space efficient
b.) time efficient
c.) both a and b
d.) None of them

7.) In the analysis of algorithms, what plays an important role?


a.) Text analysis
b.) Growth factor
c.) Time
d.) None of the above

8.) To verify whether a function grows faster or slower than another function, we have some asymptotic or mathematical notations, which include _________.
a.) Big Omega Ω (f)
b.) Big Theta θ (f)
c.) Big Oh O (f)
d.) All of the above

9.) A function in which f(n) is Ω(g(n)), if there exist positive values k and c
such that f(n)>=c*g(n), for all n>=k. This notation defines a lower bound for a
function f(n):
a.) Big Omega Ω (f)
b.) Big Theta θ (f)
c.) Big Oh O (f)
d.) All of the above
10.) An algorithm that indicates the amount of temporary storage required
for running the algorithm, i.e., the amount of memory needed by the
algorithm to run to completion is termed as_____.
a.) Big Omega Ω (f)
b.) Big Theta θ (f)
c.) Big Oh O (f)
d.) All of the above

TRUE AND FALSE QUESTIONS ABOUT ASYMPTOTIC


NOTATION

1.) An algorithm performs fewer operations when the size of the input is small, but performs more operations when the size of the input gets larger. State if the statement is True or False or Maybe.
a.) True
b) false
c) maybe
d) none of the above

Decide whether these statements are True or False. You must briefly justify all your answers to receive full credit.

2.) If f(n) = Θ(g(n)) and g(n) = Θ(h(n)), then h(n) = Θ(f(n))

a.) True
b) false
c) maybe
d) none of the above

3.) If f(n) = O(g(n)) and g(n) = O(h(n)), then h(n) = Ω(f(n))

a.) True
b) false
c) maybe
d) none of the above

4.) If f(n) = O(g(n)) and g(n) = O(f(n)), then f(n) = g(n)

a.) True
b) false
c) maybe
d) none of the above

5.) n/100 = Ω(n)

a.) True
b) false
c) maybe
d) none of the above

6.) f(n) = Θ(n²), where f(n) is defined to be the running time of the program A(n):

    def A(n):
        # a tuple is an immutable version of a list, so we can hash it
        atuple = tuple(range(0, n))
        S = set()
        for i in range(0, n):
            for j in range(i + 1, n):
                S.add(atuple[i:j])  # add tuple (i, ..., j-1) to set S

a.) True
b) false
c) maybe
d) none of the above