
Dynamic Programming

Dynamic Programming is a general algorithm design
technique for solving problems defined by or formulated as
recurrences with overlapping subinstances.
It was invented by the American mathematician Richard Bellman in
the 1950s to solve optimization problems.
Main idea:
- set up a recurrence relating a solution to a larger
instance to solutions of some smaller instances
- solve each smaller instance once
- record the solutions in a table
- extract the solution to the initial instance from that table
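The steps above can be sketched in Python on a familiar overlapping-subproblem recurrence (Fibonacci is used here only as an illustration; it is not an example from the slides):

```python
def fib(n):
    """Compute the n-th Fibonacci number bottom-up, recording
    the solution to each smaller instance in a table exactly once."""
    table = {0: 0, 1: 1}                        # smallest instances
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]  # recurrence on smaller instances
    return table[n]                             # extract answer for the initial instance
```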

• Applicable when subproblems are not independent
– subproblems share subproblems
E.g., binomial coefficients:

C(n, k) = C(n-1, k-1) + C(n-1, k)
C(n, 0) = C(n, n) = 1

A divide-and-conquer approach would repeatedly solve
the common subproblems.
Dynamic programming solves every subproblem just
once and stores the answer in a table.
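The combinations recurrence can be tabulated so that each C(i, j) is computed only once (a minimal Python sketch):

```python
def binomial(n, k):
    """C(n, k) via the recurrence C(n,k) = C(n-1,k-1) + C(n-1,k),
    with each table entry computed exactly once."""
    C = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(min(i, k) + 1):
            if j == 0 or j == i:
                C[i][j] = 1                         # base cases C(i,0) = C(i,i) = 1
            else:
                C[i][j] = C[i-1][j-1] + C[i-1][j]   # recurrence, looked up in the table
    return C[n][k]
```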

• Used for optimization problems
– A set of choices must be made to get an optimal
solution
– Find a solution with the optimal value (minimum
or maximum)
– There may be many solutions that lead to an
optimal value
– Our goal: find an optimal solution

Dynamic Programming Algorithm
1. Characterize the structure of an optimal
solution
2. Recursively define the value of an optimal
solution
3. Compute the value of an optimal solution in
a bottom-up fashion
4. Construct an optimal solution from
computed information (not always necessary)
• An algorithm design method that can be
used when the solution to a problem may
be viewed as the result of a sequence of
decisions.
• Dynamic Programming algorithms store
results, or solutions, for small subproblems
and look them up, rather than
recomputing them, when they are needed
later to solve larger subproblems.
• Typically applied to optimization problems.

Principle of Optimality
• An optimal sequence of decisions has the
property that whatever the initial state and
decisions are, the remaining decisions must
constitute an optimal decision sequence with
regard to the state resulting from the first
decision.
• Essentially, this principle states that the
optimal solution for a larger subproblem
contains an optimal solution for a smaller
subproblem.
Dynamic Programming vs. Greedy Method

Greedy Method
– only one decision sequence is ever
generated.
Dynamic Programming
– many decision sequences may be
generated.
Dynamic Programming vs. Divide-and-Conquer
Divide-and-Conquer
 partition the problem into independent
subproblems, solve the subproblems recursively,
and then combine their solutions to solve the
original problem
Dynamic Programming
 applicable when the subproblems are not
independent, that is, when subproblems share
subsubproblems.
 Solves every subsubproblem just once and then
saves its answer in a table, thereby avoiding the
work of recomputing the answer every time the
subsubproblem is encountered
Knapsack Problem

• In a knapsack problem (or rucksack problem),
we are given a set of n items, where each item
i is specified by a size si and a value vi. We
are also given a size bound S, the size of our
knapsack.

Item #  Size  Value
1       1     8
2       3     6
3       5     5
The Knapsack Problem
• The famous knapsack problem:
– A thief breaks into a museum. Fabulous paintings,
sculptures, and jewels are everywhere. The thief has
a good eye for the value of these objects, and knows
that each will fetch hundreds or thousands of dollars
on the clandestine art collector’s market. But, the
thief has only brought a single knapsack to the scene
of the robbery, and can take away only what he can
carry. What items should the thief take to maximize
the haul?

Knapsack Problem – in Short
• A thief considers taking W pounds of loot. The
loot is in the form of n items, each with weight
wi and value vi. Any amount of an item can be
put in the knapsack as long as the weight limit
W is not exceeded.

Knapsack Problem – 2 Types

1. 0-1 Knapsack Problem
(Dynamic Programming solution)
2. Fractional Knapsack Problem
(Greedy approach solution)
0-1 Knapsack Problem

• Given a knapsack with maximum capacity W,
and a set S consisting of n items
• Each item i has some weight wi and benefit
value bi (all wi, bi and W are integer values)
• Problem: How to pack the knapsack to achieve
maximum total value of packed items?
• Solution: Dynamic Programming.
Fractional Knapsack Problem

• Fractional Knapsack Problem: we are given n
objects and a knapsack. Object i has a weight
wi and the knapsack has a capacity m. If a
fraction xi, 0 ≤ xi ≤ 1, of object i is placed into
the knapsack, then a profit pi·xi is earned. The
objective is to obtain a filling of the knapsack
that maximizes the total profit earned.

Fractional Knapsack

maximize    Σ_{1 ≤ i ≤ n} p_i·x_i
subject to  Σ_{1 ≤ i ≤ n} w_i·x_i ≤ m  and  0 ≤ x_i ≤ 1, 1 ≤ i ≤ n

The Greedy Algorithm gives the optimal solution
for the Fractional Knapsack Problem.
Fractional Knapsack – Greedy Solution
• Greedy-Fractional-Knapsack(w, v, W)
1. for i = 1 to n
2.   do x[i] = 0
3. weight = 0
4. while weight < W
5.   do i = best remaining item (largest v[i]/w[i])
6.   if weight + w[i] ≤ W
7.     then x[i] = 1
8.       weight = weight + w[i]
9.   else
10.    x[i] = (W - weight) / w[i]
11.    weight = W
12. return x
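A Python rendering of the pseudocode above; sorting by value-to-weight ratio up front stands in for the "best remaining item" step (the example weights and values are illustrative, not from the slides):

```python
def fractional_knapsack(weights, values, W):
    """Greedy fractional knapsack: take items in decreasing v/w order,
    splitting the last item to exactly fill the remaining capacity."""
    items = sorted(zip(weights, values),
                   key=lambda wv: wv[1] / wv[0], reverse=True)
    total, remaining = 0.0, W
    for w, v in items:
        if remaining <= 0:
            break
        take = min(w, remaining)      # whole item, or the fraction that fits
        total += v * (take / w)
        remaining -= take
    return total

# illustrative instance: capacity 50, ratios 6, 5, 4
profit = fractional_knapsack([10, 20, 30], [60, 100, 120], 50)
```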
Analysis
• If the items are already sorted into decreasing
order of vi/wi, then the while-loop takes time in
O(n); therefore, the total time including the sort
is in O(n log n).
• If instead we keep the items in a heap with the
largest vi/wi at the root, then creating the heap
takes O(n) time, and each iteration of the
while-loop takes O(log n) time (since the heap
property must be restored after the removal of
the root).
The Edit Distance Problem
• 3 edit operations: insertion, deletion, replacement
• e.g., string A = 'vintner', string B = 'writers'

v intner
wri t ers
RIMDMDMMI

M: match, I: insert, D: delete, R: replace

• The edit cost of each I, D, or R is 1.
• The edit distance between A and B: 5.
The edit distance between two strings
– The permitted edit operations are: insertion, deletion,
replacement.
– Definition: A string over the alphabet {I, D, R, M} that
describes a transformation of one string to another
is called an edit transcript, or transcript for short, of
the two strings.

Transcript: R I M D M D M M I
A:          v   i n t n e r
B:          w r i   t   e r s
Seminar in Structural Bioinformatics - Pairwise sequence alignment algorithms
The edit distance between two strings

– Definition: The edit distance between two strings is
defined as the minimum number of edit operations
(insertion, deletion, and substitution) needed to
transform the first string into the second.
For emphasis, note that matches are not counted.
String alignment
 Definition: A (global) alignment of two
strings S1 and S2, is obtained by first
inserting chosen spaces (or dashes), either
into or at the ends of S1 and S2, and then
placing the two resulting strings one above
the other so that every character or space in
either string is opposite a unique character
or a unique space in the other string.

The edit distance algorithm
• Let A = a1 a2 … am and B = b1 b2 … bn
• Let Di,j denote the edit distance of a1 a2 … ai and
b1 b2 … bj.

Di,0 = i, 0 ≤ i ≤ m
D0,j = j, 0 ≤ j ≤ n
Di,j = min{Di-1,j + 1, Di,j-1 + 1, Di-1,j-1 + ti,j}, 1 ≤ i ≤ m, 1 ≤ j ≤ n
where ti,j = 0 if ai = bj and ti,j = 1 if ai ≠ bj.
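The recurrence tabulates directly; the sketch below checks out on the slides' example, where the distance between 'vintner' and 'writers' is 5:

```python
def edit_distance(A, B):
    """D[i][j] = edit distance between A[:i] and B[:j],
    filled bottom-up from the recurrence above."""
    m, n = len(A), len(B)
    D = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        D[i][0] = i                           # delete all of A[:i]
    for j in range(n + 1):
        D[0][j] = j                           # insert all of B[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            t = 0 if A[i - 1] == B[j - 1] else 1
            D[i][j] = min(D[i - 1][j] + 1,    # delete a_i
                          D[i][j - 1] + 1,    # insert b_j
                          D[i - 1][j - 1] + t)  # match (cost 0) or replace (cost 1)
    return D[m][n]

d = edit_distance("vintner", "writers")   # 5, as on the slide
```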
