
Chapter 2

Getting Started

Insertion Sort
Insertion-Sort(A)
- sorts an array A[1..n] containing a sequence of length n (length[A], i.e., n)
  - the input array A contains the sorted output sequence when finished
- the input numbers are sorted in place
  - the numbers are rearranged within the array A, with at most a constant number of them stored outside the array at any time
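The procedure above can be sketched in Python. This is a hedged, 0-based translation of the 1-based pseudocode, not the book's exact listing:

```python
def insertion_sort(a):
    """Sort the list a in place and return it. A 0-based sketch of
    Insertion-Sort; the text's A[1..n] becomes a[0..n-1] here."""
    for j in range(1, len(a)):       # key = a[j]; the text's j runs 2..n
        key = a[j]
        i = j - 1
        # shift elements of the sorted prefix a[0..j-1] that exceed key
        while i >= 0 and a[i] > key:
            a[i + 1] = a[i]
            i -= 1
        a[i + 1] = key               # insert key into its correct slot
    return a

print(insertion_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]
```

Only the key and the two indices live outside the array at any time, which is what "sorted in place" demands.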

Pseudo Code Notations

Liberal use of English
- employs whatever expressive method is most clear and concise to specify a given algorithm
  - algorithms are expressed to humans, not to computers

Use of indentation for block structure

Omission of error handling and other such things needed in real programs
- not concerned with issues of software engineering, in order to convey the essence of the algorithm more concisely
Loop Invariants
A technique for proving the correctness of algorithms
Conditions and relationships that are satisfied by the variables and data structures at the end of each iteration of the loop
Established by induction on the number of passes through the loop
- similar to mathematical induction
  - to prove a property holds, prove a base case and an inductive step
Properties of Loop Invariants
Initialization
- it is true prior to the first iteration of the loop
Maintenance
- if it is true before an iteration of the loop, it remains true before the next iteration
Termination
- when the loop terminates, the invariant gives us a useful property that helps show that the algorithm is correct
Loop Invariants of Insertion Sort
Initialization
- show that the loop invariant holds before the first loop iteration
  - j = 2 ⇒ A[1 .. j - 1] = A[1], which is trivially in sorted order
Maintenance
- show that each iteration maintains the loop invariant
  - at the start of each iteration of the "outer" for loop, the subarray A[1 .. j - 1] consists of the elements originally in A[1 .. j - 1], but in sorted order
Termination
- examine what happens when the loop terminates
  - j = n + 1 ⇒ A[1 .. j - 1] = A[1 .. n], so the entire array is sorted
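The three properties can be checked mechanically by asserting the invariant inside a sketch of the sort (Python, 0-based indexing; the assertions are illustrative additions, not part of the pseudocode):

```python
def insertion_sort_checked(a):
    """Insertion sort that asserts the loop invariant as it runs."""
    for j in range(1, len(a)):
        # Initialization (first pass) and maintenance (later passes):
        # the prefix a[0..j-1] is in sorted order.
        assert all(a[k] <= a[k + 1] for k in range(j - 1))
        key = a[j]
        i = j - 1
        while i >= 0 and a[i] > key:
            a[i + 1] = a[i]
            i -= 1
        a[i + 1] = key
    # Termination: the loop ends with j = n, so a[0..n-1] is sorted.
    assert all(a[k] <= a[k + 1] for k in range(len(a) - 1))
    return a
```

If any pass broke the invariant, the first assertion would fail on the next pass; the final assertion is exactly the correctness claim.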

Analyzing Algorithms
To predict the resources that the algorithm requires
- resource examples
  - memory, communication bandwidth, computer hardware, computational time (measured most often)
Computational model
- random-access machine (RAM) model
  - instructions are executed one after another
    - each instruction takes a constant amount of time
  - no concurrent operations
  - data types are integer and floating point
    - the size of each word of data does not grow arbitrarily
  - no caches or virtual memory
    - no memory-hierarchy effects
Primitive Instructions
Basic computations performed by an algorithm
- exact definition is not important
- assumed to take a constant amount of time in the RAM model
Examples
- arithmetic: add, subtract, multiply, divide, remainder, floor, ceiling, shift left/shift right
- data movement: load, store, copy
- control: conditional/unconditional branch, subroutine call and return

Running Time
The number of primitive operations (steps) executed
- steps are machine-independent
- each line of pseudocode requires a constant amount of time
  - one line may take a different amount of time than another, but each execution of line i takes the same amount of time c_i
Running Time
Depends on
- input size
  - the time generally grows with the size of the input
  - e.g., 6 elements vs. 6000 elements
- the input itself
  - may take different amounts of time on two inputs of the same size
  - e.g., sorted vs. reverse-sorted input
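The input-dependence can be made concrete with a small counter (an illustrative sketch; "comparisons" here means executions of the a[i] > key test with a valid i):

```python
def comparisons(a):
    """Count a[i] > key comparisons made while insertion-sorting
    a copy of a. Shows cost depends on the input, not just its size."""
    a = list(a)
    count = 0
    for j in range(1, len(a)):
        key, i = a[j], j - 1
        while i >= 0:
            count += 1              # one key comparison
            if a[i] <= key:
                break
            a[i + 1] = a[i]
            i -= 1
        a[i + 1] = key
    return count

n = 100
print(comparisons(range(n)))        # sorted input: n - 1 = 99
print(comparisons(range(n, 0, -1))) # reversed input: n(n-1)/2 = 4950
```

Both inputs have the same size n, yet the reversed one costs a factor of roughly n/2 more.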

Running Time Analysis

Worst case
- maximum time on any input of size n
- usually used
Average case
- average time over all inputs of size n
- occasionally used
Best case
- minimum time on any input of size n
- hardly used
Running Time: depends on the values of t_j
[figure: the Insertion-Sort pseudocode annotated with per-line costs c_i and execution counts, where the while-loop test executes t_j times for iteration j]
- t_j: the number of times the while loop test in line 5 is executed for the value of j

Best Case of Insertion Sort

Array already sorted
- always A[i] ≤ key upon the first time the while loop test is run (when i = j - 1)
- all t_j are 1
  - ∑_{j=2}^{n} t_j = ∑_{j=2}^{n} 1 = n − 1
  - ∑_{j=2}^{n} (t_j − 1) = ∑_{j=2}^{n} (1 − 1) = 0
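These sums can be checked empirically with a small counter (a sketch; `while_test_counts` is an illustrative helper, not from the text):

```python
def while_test_counts(a):
    """Return [t_2, ..., t_n]: the number of while-loop test
    executions for each of the text's 1-based j = 2..n, while
    insertion-sorting a copy of a."""
    a = list(a)
    counts = []
    for j in range(1, len(a)):
        key, i, tests = a[j], j - 1, 1   # the test runs at least once
        while i >= 0 and a[i] > key:
            a[i + 1] = a[i]
            i -= 1
            tests += 1                   # one more test per shift
        a[i + 1] = key
        counts.append(tests)
    return counts

n = 10
t = while_test_counts(range(n))          # already-sorted input
assert t == [1] * (n - 1)                # every t_j is 1
assert sum(t) == n - 1                   # ∑ t_j = n − 1
assert sum(x - 1 for x in t) == 0        # ∑ (t_j − 1) = 0
```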
Best Case Running Time

T(n) = an + b for constants a and b (that depend on the statement costs c_i)
⇒ T(n) is a linear function of n

Worst Case of Insertion Sort

Array in reverse sorted order (i.e., decreasing order)
- always A[i] > key in the while loop test
- key is compared with j - 1 elements
- one additional test after the j - 1 tests ⇒ t_j = j
  - ∑_{j=2}^{n} t_j = ∑_{j=2}^{n} j = n(n + 1)/2 − 1
  - ∑_{j=2}^{n} (t_j − 1) = ∑_{j=2}^{n} (j − 1) = n(n − 1)/2
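The same counting helper confirms the worst-case sums on a reverse-sorted input (again an illustrative sketch, self-contained for this slide):

```python
def while_test_counts(a):
    """Return [t_2, ..., t_n]: while-loop test executions for each
    of the text's j = 2..n, while insertion-sorting a copy of a."""
    a = list(a)
    counts = []
    for j in range(1, len(a)):
        key, i, tests = a[j], j - 1, 1   # the test runs at least once
        while i >= 0 and a[i] > key:
            a[i + 1] = a[i]
            i -= 1
            tests += 1
        a[i + 1] = key
        counts.append(tests)
    return counts

n = 10
t = while_test_counts(range(n, 0, -1))       # reverse-sorted input
assert t == list(range(2, n + 1))            # t_j = j for j = 2..n
assert sum(t) == n * (n + 1) // 2 - 1        # ∑ t_j = n(n+1)/2 − 1
assert sum(x - 1 for x in t) == n * (n - 1) // 2  # ∑ (t_j − 1)
```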
Worst Case Running Time
∑_{j=2}^{n} j = n(n + 1)/2 − 1
∑_{j=2}^{n} (j − 1) = n(n − 1)/2

T(n) = an² + bn + c for constants a, b and c (that depend on the statement costs c_i)
⇒ T(n) is a quadratic function of n

Worst-Case Analysis
The worst-case running time is usually preferred
- the longest running time for any input of size n
Reasons
- the worst-case running time is an upper bound on the running time for any input
- for some algorithms, the worst case occurs fairly often
  - e.g., the worst case of searching often occurs when the item being searched for is not present, and searches for absent items may be frequent
- the average case is often roughly as bad as the worst case

Order of Growth
Use abstraction to ease analysis and focus on the important features
- consider only the leading term of the formula and drop lower-order terms
  - e.g., an² instead of an² + bn + c
- ignore the constant coefficient in the leading term
  - e.g., n² instead of an²
- use Θ-notation for the running time
  - e.g., Θ(n²) ⇒ the worst-case running time T(n) grows like n², but it does not equal n²
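Why dropping lower-order terms is safe can be seen numerically; the constants a, b, c below are invented for illustration:

```python
# Hypothetical statement-cost constants; any positive values
# show the same trend.
a, b, c = 3, 100, 1000

for n in (10, 1_000, 100_000):
    full = a * n**2 + b * n + c      # an^2 + bn + c
    lead = a * n**2                  # leading term only
    print(n, full / lead)            # ratio approaches 1 as n grows
```

For small n the lower-order terms matter, but the ratio tends to 1, so for growth-rate purposes only the leading term survives.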

Algorithm Efficiency
One algorithm is considered more efficient than another if its worst-case running time has a smaller order of growth
- the evaluation may be in error for small inputs
  - due to constant factors and lower-order terms
- the evaluation will hold for large enough inputs
  - e.g., a Θ(n²) algorithm will run more quickly in the worst case than a Θ(n³) algorithm
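The small-input caveat can be illustrated with hypothetical cost functions; the constant factors 100 and 1 are invented for the example:

```python
# A Θ(n^2) algorithm with a large constant factor vs. a Θ(n^3)
# algorithm with a small one.
def quad_cost(n):
    return 100 * n * n   # hypothetical step count of the Θ(n^2) algorithm

def cube_cost(n):
    return n ** 3        # hypothetical step count of the Θ(n^3) algorithm

assert cube_cost(50) < quad_cost(50)        # small n: Θ(n^3) is cheaper
assert quad_cost(1_000) < cube_cost(1_000)  # large n: Θ(n^2) wins
```

The crossover sits at n = 100 here; beyond any such crossover point, the smaller order of growth always wins.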
