
Overview

We generally use computers to process large volumes of data efficiently, since manual processing can take far longer than the time available to complete a given task. For a computer program to be useful, it must complete the specified task in a reasonable amount of time. The time a program requires to terminate can be expressed in terms of the following factors.

Execution Time = f(Programming Language, Methodology, Choice of Algorithm)

Equation 1: Program Execution Time

Programming Language (PL): the inherent overhead of the language we use, which we cannot modify or reduce; it is therefore a constant.

Methodology: the programming model used to write the program, such as procedural or functional. Its effect is much smaller than that of the programming language.

Choice of Algorithm: the largest contributor to the overall time, and the only factor that programmers/computer scientists can change to obtain better performance; the short sketch below illustrates this. Before looking at how to achieve this, let us first look at what is meant by an algorithm.
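As a hypothetical illustration of this point (not taken from the original notes), the Java sketch below solves the same task, summing the integers 1 to n, with two different algorithms written in the same language and style: a loop that performs n additions, and the closed-form formula n(n+1)/2 that performs a constant amount of work.

```java
// Hypothetical illustration: two algorithms for the same task.
// Both use the same language and methodology, yet their running
// times differ because of the chosen algorithm.
public class SumExample {

    // Algorithm 1: loop over every value -> work grows with n.
    static long sumByLoop(long n) {
        long total = 0;
        for (long i = 1; i <= n; i++) {
            total += i;
        }
        return total;
    }

    // Algorithm 2: closed-form formula -> constant amount of work.
    static long sumByFormula(long n) {
        return n * (n + 1) / 2;
    }

    public static void main(String[] args) {
        long n = 100_000_000L;
        System.out.println(sumByLoop(n));     // same answer,
        System.out.println(sumByFormula(n));  // far less work
    }
}
```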

What is an Algorithm?
Algorithm can be defined as a clearly specified set of steps (computational pattern) which converts a given input into the specified output.

Figure 1: Function of an Algorithm (Input → Algorithm → Output)

A given algorithm is accepted as correct if it produces the specified input-output relationship for all inputs. This can be achieved with a naïve implementation or a highly optimized one; the only difference is the time the algorithm takes to execute. Algorithms can be specified in any form as long as their essence is kept, which means they can be written in languages such as LISP, Prolog, Java, C++, etc., or in plain English. Generally, for study purposes, algorithms are defined in a form called pseudo code.

Pseudo code describes only the computational steps, and it may be written partly in plain English when that is the most expressive method available. An interesting property of pseudo code is that it is not concerned with software engineering principles such as data abstraction and modularity.
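To make this concrete, here is a hypothetical example (the method name findMax and the sample data are illustrative, not from the notes): the algorithm "find the largest element of a list" is given first as pseudo code, in the comments, and then as Java.

```java
// Hypothetical example: "find the maximum element", written out as
// pseudo code (in the comments) and as a concrete Java method.
//
// Pseudo code:
//   max <- first element of A
//   for each remaining element x in A
//       if x > max then max <- x
//   return max
public class FindMax {

    // Assumes the array is non-empty.
    static int findMax(int[] a) {
        int max = a[0];                 // max <- first element
        for (int i = 1; i < a.length; i++) {
            if (a[i] > max) {           // if x > max
                max = a[i];             //     max <- x
            }
        }
        return max;                     // return max
    }

    public static void main(String[] args) {
        System.out.println(findMax(new int[] {3, 7, 2, 9, 4}));  // prints 9
    }
}
```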

Why do we need Algorithms?


For a small number of inputs (around 50) almost any algorithm works without trouble, since the amount of data is insignificant. But as the number of inputs or data elements grows, so does the time taken to complete the computation. Therefore the efficiency of the computation matters for large quantities of data, and it varies with the method used. Algorithms can be classified according to their efficiency in solving a given problem, as the following figure illustrates.

Figure 2: Efficiency and Time Relationship of Algorithms (as efficiency goes down, the computation time increases)

If efficiency is measured in terms of execution time, the following equation can be used to obtain the time spent by a given algorithm.

Equation 2: Calculation of Efficiency (Time based)
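In practice, one rough way to obtain such a time-based measurement is to read the clock before and after the algorithm runs. The sketch below is a generic Java idiom using System.nanoTime(), not a formula from these notes; the number it prints depends on the machine and the compiler, which is precisely what motivates the machine-independent measures discussed later.

```java
// Rough wall-clock measurement of an algorithm's running time.
// The absolute value depends on the machine and the JVM, which is
// why later sections switch to growth rates instead.
public class TimeIt {

    static long sumByLoop(long n) {
        long total = 0;
        for (long i = 1; i <= n; i++) {
            total += i;
        }
        return total;
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        long result = sumByLoop(50_000_000L);
        long elapsed = System.nanoTime() - start;
        System.out.println("result  = " + result);
        System.out.println("elapsed = " + elapsed / 1_000_000.0 + " ms");
    }
}
```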

Efficiency can also be calculated for resources other than time, such as memory requirements. Hence the most efficient algorithm is one that minimizes all or most of these resources for a given solution. Determining the amount of resources required by an algorithm is called algorithm analysis.

Algorithm Analysis
As stated above, the running time of an algorithm depends on the amount of input supplied to it; processing ten thousand elements takes more time than processing ten elements. The time taken by a given algorithm is known as its running time, and it can be expressed as a function of the number of input elements. However, the running time also depends on factors other than the input size, such as the speed of the computer, the speed of the compiler and the quality of the code. Therefore we measure the function's rate of growth instead. The primary reasons for using the growth of the function are given below:

- When the input is sufficiently large, the total value of the function is determined by its dominant term.
- When we measure the growth rate, the constant coefficients of the dominant terms are not meaningful across different machines.

However, this approximation of the function is sufficiently close for large quantities of data. This measure of efficiency is called the asymptotic complexity, and it is used for evaluating algorithms when large quantities of inputs are present and when calculating the original function is difficult or impossible.
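The dominance of the leading term is easy to see numerically. The sketch below uses a made-up sample function f(n) = n^2 + 100n + 1000 (an assumption chosen only for illustration) and prints the fraction of f(n) contributed by the quadratic term as n grows; the fraction approaches 1, which is why the growth rate is described by n^2 alone.

```java
// Illustrative check: for a made-up function f(n) = n^2 + 100n + 1000,
// the quadratic term supplies almost the whole value once n is large,
// so the growth rate is determined by n^2.
public class DominantTerm {
    public static void main(String[] args) {
        long[] sizes = {10, 100, 1_000, 10_000, 100_000};
        for (long n : sizes) {
            double quadratic = (double) n * n;
            double f = quadratic + 100.0 * n + 1000.0;
            System.out.printf("n = %7d   n^2 / f(n) = %.4f%n", n, quadratic / f);
        }
    }
}
```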

Big-O notation
This is the most commonly used notation for specifying asymptotic complexity, i.e. for estimating the rate of growth of a function. The notation considers only the worst case and specifies that a given function's rate of growth is approximately equal to or below the value it estimates. Definition: f(n) is O(g(n)) if there exist positive numbers c and N such that f(n) ≤ c·g(n) for all n ≥ N.

Example: calculate the growth rate of a function whose most dominant term is quadratic. Answer: the Big-O value of the function is O(n²). This is easy to identify, since as the value of n increases the quadratic term contributes an ever larger percentage of the overall value. Big-O is not the only notation available; there are many notations that look at the best case, average case, etc. Big-Ω, Big-Θ and Little-o are a few of the other notations used in analysing algorithms. For this course we will stick to Big-O notation and will consider the others when they are required in an analysis.
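As a quick numerical sanity check of the definition (using a made-up function, not the one from the original worked example), take f(n) = 2n² + 3n + 1. With c = 3 and N = 4 we have f(n) ≤ c·n² for every n ≥ N, so f(n) is O(n²); the sketch below verifies this over a range of n.

```java
// Numerical check of the Big-O definition for a made-up function
// f(n) = 2n^2 + 3n + 1: with c = 3 and N = 4, f(n) <= c * n^2 holds
// for every n >= N, so f(n) is O(n^2).
public class BigOCheck {

    static long f(long n) {
        return 2 * n * n + 3 * n + 1;
    }

    public static void main(String[] args) {
        long c = 3;
        long N = 4;
        boolean holds = true;
        for (long n = N; n <= 1_000_000; n++) {
            if (f(n) > c * n * n) {
                holds = false;
                break;
            }
        }
        System.out.println("f(n) <= " + c + " * n^2 for all checked n >= " + N + ": " + holds);
    }
}
```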
