Getting Started
Insertion Sort
Insertion-Sort(A)
sorts an array A[1..n] containing a sequence of length n (i.e., n = length[A])
the input array A contains the sorted output sequence when finished
the input numbers are sorted in place: the numbers are rearranged within the array A, with at most a constant number of them stored outside the array at any time
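The Insertion-Sort(A) pseudocode above can be sketched in Python (an illustrative translation, 0-indexed, so the outer loop runs over j = 1..n-1 rather than 2..n):

```python
def insertion_sort(a):
    """Sort the list a in place, following the Insertion-Sort(A) pseudocode.

    Python lists are 0-indexed, so j ranges over 1..n-1 instead of 2..n.
    """
    for j in range(1, len(a)):
        key = a[j]                    # only the key is stored outside the
        i = j - 1                     #   array, so extra space is constant
        while i >= 0 and a[i] > key:
            a[i + 1] = a[i]           # shift elements greater than key right
            i -= 1
        a[i + 1] = key                # insert key into its sorted position
```

For example, `insertion_sort(data)` sorts `data` without allocating a second array.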
Loop Invariants
A technique for proving the correctness of algorithms
conditions and relationships that are satisfied by the variables and data structures at the end of each iteration of the loop
established by induction on the number of passes through the loop
similar to mathematical induction: to prove a property holds, prove a base case and an inductive step
Properties of Loop Invariants
Initialization
it is true prior to the first iteration of the loop
Maintenance
if it is true before an iteration of the loop, it remains true before the next iteration
Termination
when the loop terminates, the invariant gives us a useful property that helps show that the algorithm is correct
Loop Invariants of Insertion Sort
Initialization
show that the loop invariant holds before the first loop iteration
j = 2 ⇒ A[1 .. j - 1] = A[1] is in sorted order
Maintenance
show that each iteration maintains the loop invariant
at the start of each iteration of the "outer" for loop, the subarray A[1 .. j - 1] consists of the elements originally in A[1 .. j - 1] but in sorted order
Termination
examine what happens when the loop terminates
j = n + 1 ⇒ A[1 .. j - 1] = A[1 .. n], so the whole array is sorted
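For small inputs, the three properties can be checked mechanically by asserting the invariant at the top of every outer iteration. This is an illustrative sketch, not part of the slides; it restates the invariant 0-indexed:

```python
def insertion_sort_checked(a):
    """Insertion sort with the loop invariant asserted on every iteration.

    Invariant, restated 0-indexed: at the start of each iteration of the
    outer for loop, a[0 .. j-1] consists of the elements originally in
    a[0 .. j-1], but in sorted order.
    """
    original = list(a)               # snapshot for the invariant check
    for j in range(1, len(a)):
        # Initialization (j == 1) and Maintenance: the invariant holds here.
        assert a[:j] == sorted(original[:j])
        key = a[j]
        i = j - 1
        while i >= 0 and a[i] > key:
            a[i + 1] = a[i]
            i -= 1
        a[i + 1] = key
    # Termination: the "subarray" a[0 .. n-1] is now the whole array, sorted.
    assert a == sorted(original)
```

Comparing `a[:j]` to `sorted(original[:j])` checks both halves of the invariant at once: the prefix holds exactly the original elements, and they are in sorted order.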
Analyzing Algorithms
To predict the resources that the algorithm requires
resource examples
memory, communication bandwidth, computer hardware, computational time (measured most often)
Computational model
random-access machine (RAM) model
instructions are executed one after another
each instruction takes a constant amount of time
no concurrent operations
data types are integer and floating point
the size of each word of data does not grow arbitrarily
no caches or virtual memory
no memory-hierarchy effects
Primitive Instructions
Basic computations performed by an algorithm
exact definition is not important
assumed to take a constant amount of time in the RAM model
Examples
arithmetic: add, subtract, multiply, divide, remainder, floor, ceiling, shift left/shift right
data movement: load, store, copy
control: conditional/unconditional branch, subroutine call and return
Running Time
The number of primitive operations (steps) executed
steps are machine-independent
each line of pseudocode requires a constant amount of time
one line may take a different amount of time than another, but each execution of line i takes the same amount of time c_i
Running Time
Depends on
input size
the time generally grows with the size of the input
e.g., 6 elements vs. 6000 elements
input itself
may take different amounts of time on two inputs of the same size
e.g., sorted vs. reversely sorted
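This input dependence can be seen by instrumenting insertion sort's inner while loop with a step counter (a sketch; `count_shifts` is a name chosen here, not from the slides):

```python
def count_shifts(a):
    """Run insertion sort on a copy of a, returning how many times the
    inner while-loop body executes (the shifts, summed over all j)."""
    a = list(a)
    shifts = 0
    for j in range(1, len(a)):
        key = a[j]
        i = j - 1
        while i >= 0 and a[i] > key:
            a[i + 1] = a[i]
            i -= 1
            shifts += 1
        a[i + 1] = key
    return shifts

n = 6
print(count_shifts(list(range(n))))         # already sorted: 0 shifts
print(count_shifts(list(range(n, 0, -1))))  # reverse sorted: n(n-1)/2 = 15 shifts
```

Both inputs have size 6, yet the sorted one costs no shifts at all while the reversed one costs the maximum possible.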
the while loop test executes t_j times for iteration j of the for loop

Best Case Running Time
the array is already sorted, so t_j = 1 for every j

∑_{j=2}^{n} t_j = ∑_{j=2}^{n} 1 = n − 1

∑_{j=2}^{n} (t_j − 1) = ∑_{j=2}^{n} (1 − 1) = 0
Worst Case Running Time
the array is in reverse sorted order, so t_j = j for every j

∑_{j=2}^{n} t_j = ∑_{j=2}^{n} j = n(n + 1)/2 − 1

∑_{j=2}^{n} (t_j − 1) = ∑_{j=2}^{n} (j − 1) = n(n − 1)/2
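The closed forms for the best- and worst-case sums can be spot-checked numerically (an illustrative sketch, assuming t_j = 1 in the best case and t_j = j in the worst case):

```python
n = 10

# Worst case: t_j = j for j = 2..n
assert sum(range(2, n + 1)) == n * (n + 1) // 2 - 1
assert sum(j - 1 for j in range(2, n + 1)) == n * (n - 1) // 2

# Best case: t_j = 1 for j = 2..n
assert sum(1 for j in range(2, n + 1)) == n - 1
assert sum(1 - 1 for j in range(2, n + 1)) == 0

print("sums check out for n =", n)
```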
Worst-Case Analysis
Worst-case running time is usually preferred
the longest running time for any input of size n
Reasons
the worst-case running time is an upper bound on the running time for any input
for some algorithms, the worst case occurs fairly often
e.g., the worst case of searching often occurs when the item being searched for is not present, and searches for absent items may be frequent
the average case is often roughly as bad as the worst case
Order of Growth
Use abstraction to ease analysis and focus on the important features
consider only the leading term of the formula and drop lower-order terms
e.g., an² instead of an² + bn + c
ignore the constant coefficient in the leading term
e.g., n² instead of an²
use Θ-notation for the running time
e.g., Θ(n²) ⇒ the worst-case running time T(n) grows like n², but it does not equal n²
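The effect of dropping lower-order terms and constant coefficients can be seen numerically; the coefficients a, b, c below are made up for illustration:

```python
# Hypothetical coefficients for T(n) = a*n^2 + b*n + c
a, b, c = 2, 100, 1000

for n in (10, 1_000, 100_000):
    exact = a * n**2 + b * n + c
    leading = a * n**2
    print(n, exact / leading)   # the ratio approaches 1 as n grows
```

For small n the lower-order terms dominate the ratio, but as n grows the leading term an² accounts for essentially all of T(n), which is why the abstraction is safe for large inputs.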
Algorithm Efficiency
One algorithm is considered more efficient than another if its worst-case running time has a smaller order of growth
the evaluation may be in error for small inputs due to constant factors and lower-order terms
the evaluation will hold for large enough inputs
e.g., a Θ(n²) algorithm will run more quickly in the worst case than a Θ(n³) algorithm
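As a quick numeric illustration of the caveat about small inputs, take hypothetical step counts with deliberately skewed constant factors:

```python
# Hypothetical step counts: a Theta(n^2) algorithm with a large constant
# factor versus a Theta(n^3) algorithm with a small one.
def quad_steps(n):
    return 100 * n**2

def cube_steps(n):
    return n**3

print(quad_steps(10) > cube_steps(10))      # True: the n^2 algorithm loses at n = 10
print(quad_steps(1000) < cube_steps(1000))  # True: it wins once n exceeds 100
```

The Θ(n²) algorithm is slower until the input grows past the crossover point (here n = 100), after which the smaller order of growth wins no matter what the constants are.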