
Dynamic Programming: Part 1 Classical Algorithms

Wilan Wong wilanw@gmail.com


Contents

1 Introduction
2 Fibonacci Sequence
3 The Knapsack Problem
  3.1 Introduction
  3.2 Modelling a Dynamic Programming Algorithm
  3.3 Other Knapsack variants
4 Edit Distance
  4.1 Modelling and Implementation
  4.2 Performance Differences
5 Overview
6 Problems
  6.1 Problem 1: Longest Increasing Subsequence
  6.2 Problem 2: Binomial Coefficients
  6.3 Problem 3: Longest Common Subsequence
  6.4 Problem 4: Handshaking Problem


1 Introduction

Dynamic programming is one of the major techniques used within combinatorial optimisation. Combinatorial optimisation algorithms solve problems that are believed to be hard in general by searching their large solution spaces. Because of the size of these spaces, many problems become too unwieldy to solve with brute-force algorithms. Combinatorial optimisation algorithms achieve better performance by discarding portions of the solution space that can be shown to be non-optimal, and they usually explore the remaining space efficiently so that identical subproblems are not recalculated. Simply put, dynamic programming is a method that exploits inherent properties of a problem so that the resulting algorithm runs much faster than a straightforward brute-force approach.

The basis of dynamic programming is what is known as a functional equation. You can think of this as a recurrence relation which relates the problem we are trying to solve to smaller subproblems of the same kind. This means that dynamic programming is recursive in nature, and a solid understanding of recursion is required to understand dynamic programming even in its most basic form. There is a great deal of deep mathematical theory behind dynamic programming, so we will only consider the fundamentals; this coverage should be sufficient for you to develop a basic dynamic programming formulation skill. We now cover basic recursion, show its limitations when used naively, and then provide a simple method to translate recursion into top-down dynamic programming.

2 Fibonacci Sequence

Let us consider a trivial problem that you probably already know: Fibonacci numbers. If you haven't seen Fibonacci numbers before, they form a sequence in which each term is the sum of the previous two terms. The first two terms are 1, so the sequence looks like: 1, 1, 2, 3, 5, 8, 13, 21 and so on. The goal of this problem is to compute the n-th Fibonacci number; for example, the first Fibonacci number is 1, the third is 2, the sixth is 8, and so on.

How do we formulate a recursive algorithm for this problem? Let's assume we know all Fibonacci numbers up to the (n-1)-th term and ask ourselves how to obtain the n-th term. If we knew all the Fibonacci numbers up to the (n-1)-th one, then by the definition of the Fibonacci sequence, F(n) = F(n-1) + F(n-2), we can calculate the next one. This forms the basis of the recursion: it defines the current problem F(n) in terms of two smaller subproblems, F(n-1) and F(n-2), which are combined by addition. As you may know from recursion, it is crucial to have a base case, and the recursion itself should move closer and closer to this base case, otherwise it might not terminate for certain inputs.

So our second task, after defining the recurrence relation F(n) = F(n-1) + F(n-2), is to determine suitable "stopping" conditions (or base cases). Since the recurrence relation relies on the previous two terms, we must provide at least two consecutive base cases, otherwise the relation would be undefined. If we look back at the sequence, we see that the first two terms are just 1, so we can define F(0) = F(1) = 1 as the base cases.

Looking back we can formally define our functional or recursive equation as:

F(n) = 1                         if n = 0 or n = 1
F(n) = F(n-1) + F(n-2)           otherwise

Now we simply translate the above into a programming language. We will demonstrate our algorithms in C++ in this tutorial; most readers should have little difficulty transferring them to their language of choice. See Listing 1 for the implementation.

Listing 1: Fibonacci Naive Recursion Approach

#include <iostream>

using namespace std;

long long fib(int n) {
    if (n <= 1) return 1;
    return fib(n-1) + fib(n-2);
}

int main() {
    int n;
    while (cin >> n) cout << fib(n) << "\n";
    return 0;
}

The first thing to note, if you have compiled and run this algorithm, is that it gets very slow even for modest-sized inputs. One thing that may come to mind is that recursion is just plain slow; or is it? Certainly, there is additional overhead for recursion due to stack manipulation for function calls (i.e. pushing and popping local variables and function parameters), but that alone should not have an exponential impact. If we draw out the recursion tree of the Fibonacci computation, we find the source of our problem: multiple calls with the same arguments that we have already calculated previously. The diagram below illustrates this better.


Figure 1: Fibonacci Call Tree

As you can see above, when we compute the 4th Fibonacci number using our naive recursion, we call F(2) twice. For such a small recursion tree it may not matter, but for larger inputs a significant amount of re-calculation is done. If you are not convinced by this small tree, try sketching the recursion tree for F(10) and you will quickly see the point. This property is called overlapping subproblems, which, as the name implies, refers to the recursion solving the same subproblems over and over.

Another property the Fibonacci problem exhibits is referred to as optimal substructure. This means that an optimal solution can be constructed from solutions to its subproblems. Referring back to Figure 1, if the two calls to F(2) could yield different results each time we called them, then the problem would not exhibit optimal substructure. However, by the definition of the Fibonacci sequence, the 2nd Fibonacci number will not change after we have calculated it, so we can safely reuse it in our algorithm. In such situations the problem is said to exhibit optimal substructure. These two properties, optimal substructure and overlapping subproblems, are required for a dynamic programming algorithm to work both correctly and efficiently. If the problem does not have any overlapping subproblems, then simple recursion is already as efficient as we can get, because no work is wasted re-calculating subproblems again and again; dynamic programming would only add overhead. If the problem does not have optimal substructure, then our dynamic programming algorithm may run quickly but will produce incorrect results, because it assumes information that does not hold; the further examples will illustrate this more clearly.

So that’s a whole heap of talk about the properties required for a dynamic programming algorithm – but how do we actually formulate and code it? A simple way is to note that because our naive Fibonacci recursive algorithm recalculates the same subproblems over and over, why not make a simple look- up array to “check” if we have already calculated that value before we go diving

5

further into the recursion tree? So this logically works by having a “special” value that denotes that we have not calculated that problem before (usually negative numbers but may change depending on the nature of the problem), then for the recursive function, we check if the current value in the look-up array corresponding to the function input is equal to the “special” value. If it is, then we know that it still hasn’t been calculated, otherwise if it isn’t, then we already know the result of that input – and we can just return the value of the look-up array in that specific index. It may sound complex, but augmenting it is relatively easy and is usually universal across many dynamic programming algorithms. (See Listing 2 for implementation)

Listing 2: Fibonacci Memoization Approach

#include <iostream>
#include <cstring>   // for memset

using namespace std;

#define MAX_TERMS 100

long long memo[MAX_TERMS+1];

long long fib(int n) {
    if (n <= 1) return 1;
    if (memo[n] != -1) return memo[n];
    return memo[n] = fib(n-1) + fib(n-2);
}

int main() {
    int n;
    memset(memo, -1, sizeof(memo));
    while (cin >> n && n <= MAX_TERMS) cout << fib(n) << "\n";
    return 0;
}

You should compare the running times of our first attempt and the version with the caching mechanism applied to the recursion. In fact, there is an exponential difference in running time between the two algorithms even though they are fundamentally similar. Also note that you will need arbitrary-precision integers well before the 100th term (the values exceed the 64-bit integer limit at around the 92nd term with this indexing) because the numbers grow quickly.

You may have written an iterative algorithm for the Fibonacci sequence before in an introductory programming course (see Listing 3): keep track of the last two terms and generate the next term inside a for-loop or a similar construct. In effect, that method is dynamic programming; it differs only in the order of execution, because it approaches the problem bottom-up, i.e. it solves the simplest cases first and gradually builds up to larger cases.


Listing 3: Fibonacci Iterative Variant

#include <iostream>

using namespace std;

int main() {
    int n;
    while (cin >> n) {
        if (n <= 1) {
            cout << "1\n";
            continue;
        }
        long long prev = 1, prev2 = 1;
        for (int i = 2; i <= n; i++) {
            long long temp = prev;
            prev += prev2;
            prev2 = temp;
        }
        cout << prev << "\n";
    }
    return 0;
}

The recursion plus look-up method is called memoization and is also known as top-down dynamic programming. It is called top-down because it starts by trying to "solve" the largest instance of the problem and works down until it hits the base cases (which are usually the simplest cases of the problem). Both methods have similar run-time performance, but there are subtle differences to note. Bottom-up is usually considered more efficient because it does not carry the stack overhead of recursive calls; memoization, on the other hand, only calculates the subproblems it actually needs, whereas bottom-up solves all of the subproblems up to the current input.

It’s somewhat easier to learn dynamic programming using the top-down ap- proach because when you use bottom-up dynamic programming you have to ensure the order in which you calculate the subproblems is correct (i.e. you have to make sure you have already calculated all the required subproblems when you reach a given problem) otherwise the whole dynamic programming approach breaks up. Top-down dynamic programming achieves this implicitly through the recursive call structure so you only need to worry about setting up the recursive algorithm. That being said, all top-down dynamic programming algorithms can be converted to bottom-up and vice versa. Nevertheless, we will cover bottom-up dynamic programming in our next problem which is slightly more complicated than the one we have seen so far.


3 The Knapsack Problem

3.1 Introduction

Let’s consider a more complex example than the Fibonacci problem discussed previously. The next problem we will be looking at is called the Knapsack problem. It’s a classic Computer Science problem which entails the following:

You’re a thief that has successfully infiltrated inside a bank. However to your surprise there are only items of questionable value as opposed to money. You bought a bag that can only hold a certain weight (W), you want to pack items into the bag so that you collect the best possible value of items whilst not packing more than the limit of W. Each item has a weight and a value. Your task would be to return the highest value you can get from this bank robbery.

An intuitive approach might be to pack items by the best value-per-weight ratio until the bag is full or cannot hold any more items. This is called a greedy algorithm: at each decision stage you make the most locally optimal choice (in this case, the best value/weight ratio that has not been taken and still fits in the bag). However, without a correctness proof we cannot be sure it works for all cases. In fact, this greedy strategy does not work for the knapsack problem in general. How do we show this? We simply need to find a counter-example.

Let’s consider an example, where the greedy strategy does indeed get the optimal solution:

Bag Weight (W) = 10 kgs
Item Configurations (Weight, Value) Pairs: (5,5), (3,2), (1,0), (10,6), (8,5)

Using the greedy strategy, we first calculate all the value per weight ratios. These are listed as:

Item Configuration Ratios: 1.0, 0.66, 0.0, 0.6, 0.625

respectively. Now we begin to pack the items. Keep in mind that there are different variations of the knapsack problem: in one variation there is an unlimited number of items of each type, while another allows only a limited amount of each type of item. We will consider the unlimited variation first and discuss the limited version later. Back to the greedy strategy: we see that the item of 5 kg and value $5 is the best in terms of ratio. We pack one of them inside our bag, we have 5 kg remaining, and we repeat the same process (a huge hint for recursion). Again we choose the same item type because it still fits; now we have 0 kg remaining, so we halt. In the end, we chose two items of (5,5), which gave us a total value of $10. Convince yourself this is the best you can get from any combination of the items. Does this mean that our greedy strategy works? The simple answer: no!

Let’s consider an example that breaks the intuitive strategy,

Bag

Weight

(W)

=

13

kgs

Item

Configurations

(Weight,

Value)

Pairs:

(10,8),

(8,6),

(5,3),

(10,6),

Item

(1,0)

Configuration

Ratios:

0.8,

0.75,

0.6,

0.6,

0.0

If we repeat the same process again, we end up choosing 1 x (10,8) because it has the highest ratio. We are then left with 3 kg, and none of the remaining items fit (except 3 x (1,0), which adds a value of $0). So the best value our greedy approach can obtain is $8. However, you can observe that a combination of (8,6) and (5,3) yields a value of $9. This serves as a counter-example to our greedy strategy, so the greedy algorithm is not correct.
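As a quick sanity check, here is a minimal sketch of the greedy heuristic described above for the unlimited variant; fed the two examples above, it produces $10 and $8 respectively. It is an illustration only and, as just shown, not a correct algorithm in general.

#include <iostream>
#include <vector>
#include <utility>
#include <algorithm>

using namespace std;

// Comparator: sort items so the best value/weight ratio comes first.
bool betterRatio(const pair<int,int>& a, const pair<int,int>& b) {
    return (double)a.second / a.first > (double)b.second / b.first;
}

// Greedy heuristic for the unlimited knapsack variant: repeatedly take
// copies of the best-ratio item that still fits.
int greedyKnapsack(int bagWeight, vector<pair<int,int> > items) {
    sort(items.begin(), items.end(), betterRatio);
    int value = 0;
    for (size_t i = 0; i < items.size(); i++) {
        while (items[i].first <= bagWeight) {   // keep taking copies while they fit
            bagWeight -= items[i].first;
            value += items[i].second;
        }
    }
    return value;
}

int main() {
    int w1[] = {5, 3, 1, 10, 8},  v1[] = {5, 2, 0, 6, 5};
    int w2[] = {10, 8, 5, 10, 1}, v2[] = {8, 6, 3, 6, 0};
    vector<pair<int,int> > ex1, ex2;
    for (int i = 0; i < 5; i++) ex1.push_back(make_pair(w1[i], v1[i]));
    for (int i = 0; i < 5; i++) ex2.push_back(make_pair(w2[i], v2[i]));
    cout << greedyKnapsack(10, ex1) << "\n";   // prints 10 (happens to be optimal)
    cout << greedyKnapsack(13, ex2) << "\n";   // prints 8, even though 9 is achievable
    return 0;
}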

So how else can we approach this problem? One sure way to get a correct answer would be to brute-force every single item configuration, check whether it fits inside the bag, and update the answer if it is better than the current maximum. However, this is horribly inefficient. A very naive approach along these lines would be to generate all subsets of items and test each one, but as you may know the number of subsets of a set is exponential (2^n). In fact, this problem is classified as NP-hard, which (without getting too rigorous) means the best general algorithms we currently know take exponential time. However, dynamic programming provides a pseudo-polynomial time algorithm that correctly solves the knapsack problem. It is important to distinguish between a "pseudo-polynomial" time algorithm and a polynomial time algorithm. Loosely speaking, a polynomial algorithm is one whose time complexity is bounded by a constant power of the input size and is not related to the input's numerical values in any way; examples would be O(n^2), O(n^3), O(n^100).

A "pseudo-polynomial" time algorithm grows in proportion to the input's numerical values. For example, an algorithm that runs in O(nm), where n is the input size but m grows with the magnitude of an input value, is pseudo-polynomial. However, if the input values remain fairly small, the overall algorithm is still efficient. The dynamic programming knapsack algorithm belongs to such a class, as we will see shortly: its running time is proportional to the bag capacity W times the number of items.


3.2 Modelling a Dynamic Programming Algorithm

So how do we approach this problem in a dynamic programming way? There are a few general guidelines used when formulating any dynamic programming algorithm:

1. Specify the state

2. Formulate the functional/recursive equation in terms of the state

3. Solve/code the solution using Step 2

4. Backtrack the solution (if necessary)

Step 1: Specifying the state

A state is simply the information we need at each step of the recursion to optimally decide and solve the problem. In the Fibonacci example, the state was simply the term number; this alone allowed us to decide what the subproblems were and whether they were base cases. For this example, the state is not as easy to come up with. In fact, for most dynamic programming algorithms, coming up with a suitable and efficient state is the hardest part.

For the knapsack problem, a good indicator when looking for the state is what the recursive function parameters would look like. For simple problems you can base your state solely on these; for more complex problems where the state space can be huge, you will need some tricks and optimisations to keep the state as condensed as possible whilst still representing the problem correctly.

So ask yourself: what are the common links between a specific problem and its subproblems? The knapsack subproblems were simply ones where the remaining weight was smaller; all the other factors remained the same (the available choice of items). Can we base our state solely on the weight? To answer this, ask yourself: if someone gave you a single parameter (the weight) and a list of possible items, could you make the optimal decision, assuming you knew the optimal values for all the weights beneath it? The answer is yes. Thinking recursively is usually the best strategy: if we were given a weight W to fill in the bag and we knew the best values we could get for weights 0 up to W - 1, then a simple method would be to iterate through each of the lesser weights and try adding an item to it so that it exactly fills weight W. We then choose the greatest value among these configurations. Convince yourself this is sufficient by drawing some examples; this is one of the two main properties of dynamic programming: optimal substructure.

Step 2: Formulating the recursive equation

Now we have a state, which in this case is just the weight remaining in the bag. We need to build a recursive relationship between states, i.e. link one state to another. Usually this is obvious because it works in one direction: towards the base case. The base case for our problem is simply a weight of 0: you realistically cannot fit anything in a bag that can't store anything, hence the value for a bag weight of 0 is simply 0. Since we are working towards this goal, we want the recursive calls to generate successively lower weights.

To determine a relationship between the states, note that we can add one item at a time, since order does not matter. So we can relate a lower-weight state to the current one by adding an item so that the total becomes exactly W. To determine which lower-weight states are relevant, we iterate through the list of item weights and subtract each from W, i.e. W - w_i (where w_i is the weight of item i). If this value is less than 0 we obviously do not want it, because that would violate the weight constraint (we would be storing an item heavier than the remaining space in the bag). We then simply choose the best of these item additions. Hence the recurrence/functional equation may look like:

F(w) = 0                                                            if w = 0
F(w) = max over items i with w - w_i >= 0 of { F(w - w_i) + v_i }   otherwise

What do we initialise F(w) to before we begin comparisons? One option is to set it to 0; however, if we do this then we need to loop through all weight values at the end and take the maximum. Another option is to initialise F(w) to F(w - 1); this makes sense because if an item configuration fits in weight w - 1 then it also fits in weight w, and doing it this way saves us looping through all the weight values at the end. Either option produces the correct output, so it is just a matter of taste. Another alternative is to add a "dummy" item with weight 1 and value 0, which implicitly makes F(w) at least as large as F(w - 1).
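In code, the dummy-item alternative amounts to a single extra line before the recursion is first called (using the data vector of (weight, value) pairs from Listing 4 below):

    data.push_back(make_pair(1, 0));   // dummy item of weight 1 and value 0, so F(w) is implicitly at least F(w-1)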

So for step 2, we have the slightly modified recursive definition:

F(w) = 0                                                                          if w = 0
F(w) = max( F(w - 1), max over items i with w - w_i >= 0 of { F(w - w_i) + v_i } )  otherwise

The recursion implies that F(w) is set to at least F(w - 1) without needing to fulfil the w - w_i >= 0 requirement.

Step 3: Coding the recursive equation

Coding the recursive algorithm is usually fairly straightforward. A C++ memoized solution is shown in Listing 4.

If you look at the source code and compare it with our original recurrence formulation in step 2, you will see a lot of similarities in the recursive function f().


Listing 4: Knapsack Top-Down Dynamic Programming

#include <iostream>
#include <vector>
#include <utility>
#include <cstring>     // for memset
#include <algorithm>   // for max

#define MAX_WEIGHT 1000

using namespace std;

int memo[MAX_WEIGHT+1];
vector<pair<int,int> > data;   // (weight, value) pairs

int f(int w) {
    if (w == 0) return 0;
    if (memo[w] != -1) return memo[w];
    int val = f(w-1);
    for (int i = 0; i < (int)data.size(); i++) {
        if (data[i].first <= w) {
            val = max(val, f(w - data[i].first) + data[i].second);
        }
    }
    return memo[w] = val;
}

int main() {
    int weight, value;
    int bagWeight;
    cin >> bagWeight;
    if (bagWeight > MAX_WEIGHT) return 1;
    while (cin >> weight >> value) {
        data.push_back(make_pair(weight, value));
    }
    memset(memo, -1, sizeof(memo));
    cout << "Optimal value: " << f(bagWeight) << "\n";
    return 0;
}

For non-C++ users, the pair class we use is a simple structure consisting of two members (both integers in our case) that hold the weight and value of each item. We have also used the vector class, which is a memory-managed dynamic array; it should be intuitive to follow even if you have no knowledge of C++. The program requires structured input: the bag weight is the first integer read, followed by (weight, value) pairs of items, terminated by EOF. Note that if we take out the caching mechanism (the memoization principle), we are left with just a simple recursive program, which is also inefficient; test this yourself on large inputs.


You can see from the implementation above that dynamic programming has a limitation. What happens if the weight of one item is as large as one billion? We would have to allocate a caching array of one billion integers, one for each possible weight. Now we have a memory problem: this algorithm does not scale well to large input values. We also have an execution time problem: in the worst case we recurse through a billion calls to f() (which would overflow the stack long before the calculation finishes). The point is that dynamic programming is not a silver bullet for hard problems; it only runs well under limited conditions. So you should definitely check whether the state space is too large before you begin formulating or coding the problem.

Step 4: Backtracking the solution

It is all nice and good to know the best value we can get from the knapsack problem. However, the thief still scratches his head: although he knows the best value he can come out with, which items should he actually choose to obtain that optimal value? It is important to realise whether you need this step or not; if you do, then you may need to take additional measures when you code the recursive algorithm. After the memoization has finished, we now have an array that holds the optimal value for many different weights; we need to use this (and usually some other bookkeeping information) to backtrack the items we used to get there.

How do we do this? At the moment the array contains neither "markers" nor any bookkeeping that would let us deduce which items were added at each stage of the dynamic programming process, so we need to change our implementation.

What markings do we need? We need some way to know which subproblem produced the best result for a particular weight w, and which item was added on top of that subproblem. Unsurprisingly, this backtracking is also recursive in nature: start at the bag weight W, look up which item was added and which subproblem was optimal, print (or record) that item, recurse into the optimal subproblem, and repeat until you reach weight 0. The source code below demonstrates this approach using an extra auxiliary integer array that keeps track of which item was added (the index of the item); since we know the weights of the items, we can deduce the weight of the subproblem directly by subtracting the item weight from the current weight. You can store the result in an array instead of printing it, or do other processing; it is entirely up to you. See Listings 5 and 6 for implementation details.


Listing 5: Knapsack Top-Down Dynamic Programming with Backtracking

#include <iostream>
#include <vector>
#include <utility>
#include <cstring>     // for memset
#include <algorithm>   // for max

#define MAX_WEIGHT 1000

using namespace std;

int memo[MAX_WEIGHT+1];
vector<pair<int,int> > data;          // (weight, value) pairs
int indexTable[MAX_WEIGHT+1];         // for backtracking
vector<int> itemsUsed;                // used for backtracking

int f(int w) {
    if (w == 0) return 0;
    if (memo[w] != -1) return memo[w];
    int val = f(w-1);
    for (int i = 0; i < (int)data.size(); i++) {
        if (data[i].first <= w) {
            int k = f(w - data[i].first) + data[i].second;
            if (val < k) {
                val = k;
                indexTable[w] = i + 1;   // bookkeeping information
            }
        }
    }
    return memo[w] = val;
}

void backtrack(int w) {
    if (w == 0) return;
    if (indexTable[w] == 0) {   // no item added at this weight: the value carried over from w-1
        backtrack(w-1);
        return;
    }
    itemsUsed.push_back(indexTable[w]);
    backtrack(w - data[indexTable[w]-1].first);
}

int main() {
    int weight, value;
    int bagWeight;
    cin >> bagWeight;
    if (bagWeight > MAX_WEIGHT) return 1;
    while (cin >> weight >> value) {
        data.push_back(make_pair(weight, value));
        cout << "Item: " << data.size() << " Weight: "
             << weight << " Value: " << value << "\n";
    }

Listing 6: Knapsack Top-Down Dynamic Programming with Backtracking (continued)

    memset(memo, -1, sizeof(memo));
    cout << "Optimal value: " << f(bagWeight) << "\n";
    backtrack(bagWeight);
    cout << "Items Used:\n";
    for (int i = 0; i < (int)itemsUsed.size(); i++) {
        cout << "Item " << itemsUsed[i] << "\n";
    }
    return 0;
}

3.3 Other Knapsack variants

Now we have a dynamic programming algorithm that completely solves the knapsack problem. Or do we? Let's consider another variant in which items cannot be re-used. How do we keep track of which items have been used in a concise manner? Informally, many people call the knapsack variant we just completed a one-dimensional dynamic programming algorithm, because the state lies in a 1D array (indexed by weight). The variant we consider now is a two-dimensional dynamic programming algorithm; this should serve as a hint as to what the state will be!

If you haven't guessed it, the state for this variant is (weight, item). How does this work? Instead of keeping track of whether or not we have used each item, we implicitly enforce the rule by iteratively considering a growing set of available items. For example, if we were given 8 items to pack, we start off by considering only the first item and calculate the optimal values for each of the weights using it. Then we expand to the second item, using the values for the first item to decide the optimal values for each of the weights once this item is included. By "including" an item we really mean introducing it to the decision space; we can still choose to reject it if it proves non-optimal.

More formally, we can define the state as:

Let S(W, I) = the optimal value of a bag of weight W using items 1 up to I.

We build the recurrence relation from the binary decision we face when we "include" an item: do we include it or do we reject it? If we include item i then we consume w_i worth of weight; if we choose not to include it then we consume no weight. We also need base cases for the recurrence. We can re-use our previous base case where any state with weight 0 has a value of 0. Also, if we have 0 items left to choose from, we can assign a value of 0. Hence F(w, 0) = F(0, i) = 0 are the base cases for the recursion. The resulting recursive definition is as follows:

F(w, i) = max of:
    F(w, i - 1)                      (reject item i)
    F(w - w_i, i - 1) + v_i          (include item i, only allowed if w - w_i >= 0)

F(0, i) = F(w, 0) = 0   (base cases)

If you’re still unsure then try to think recursively. Dynamic programming is especially hard in the beginning but the underlying principles behind it are really just recursion. Implementation is usually the easiest part because it’s just merely translating the definition into code. You can do it top-down (memoization) or you could do it with a bottom-up approach. For this variant, we will implement it bottom-up – it will really look like magic. The top-down approach will be an exercise for the reader – which shouldn’t be too hard to implement.

So how do we go about implementing it bottom-up? What are the differences between this and the top-down approach? The major difference is that the order in which we compute the subproblems is crucial. For simple examples such as this one, though, the order is already implied by the recurrence relation. How do we deduce the order, and what is it? Imagine a double nested for-loop going through a two-dimensional matrix, with two integer variables keeping track of the two indexes. The order is simply the order in which the "coordinates" (i.e. the index pairs) are visited. Of course, it is more abstract in dynamic programming, but the principle is the same. To deduce the order, we look at the recurrence relation and see what a particular state depends on.

The recurrence shows that (w, i) depends on (w - w_i, i - 1) and (w, i - 1) being available. If you are a visual thinker, draw a 2D matrix on a piece of paper. Label the horizontal axis with the weights (w) and the vertical axis with the items available (i). The top-left corner denotes (0, 0) and the bottom-right corner denotes (W, I). Then arbitrarily pick a cell in the matrix and label it (w, i). Next, draw two arrows depicting where (w - w_i, i - 1) and (w, i - 1) would lie. This lets you see the dependencies visually. What you want is an ordering in which the dependencies are always fulfilled by the time you reach a given subproblem.

Here we have two options: we can iterate in the direction of increasing w (outer loop) and increasing i (inner loop), or in the direction of increasing i (outer loop) and increasing w (inner loop). An example of the first order would be (0,0), (0,1), (0,2), (1,0), (1,1), (1,2), and so on. An example of the second order would be (0,0), (1,0), (2,0), (3,0), (0,1), (1,1), (2,1), (3,1), (0,2), and so on. These assume W = 3 and I (the number of items) = 2.

Once we decide a valid ordering, implementation is usually straightforward.


Figure 2: Bottom-up Dynamic Programming Order Dependency Diagram

Use nested for-loops that traverse the states in this order, and an array to cache results, much like we did with memoization. This can be seen in Listing 7.

How do we backtrack our results from this? You could do what we did last time with the first variant. However, there is a more elegant solution that simply uses our 2D DP array. How does this work? We start at dp(W, I) and check whether dp(W, I) is equal to dp(W, I - 1). If it is, we deduce that item I is not in the best item configuration. If dp(W, I) is not equal to dp(W, I - 1), then item I is in the best item configuration, so we print that item out (or store it) and backtrack/recurse to dp(W - w_I, I - 1), where w_I is the weight of item I. A simple implementation of this backtracking algorithm can be seen in Listings 8 and 9.

There are many other variations of the knapsack problem. One variant allows you to split objects into fractions; this variant is sometimes called the fractional knapsack, and here a greedy strategy suffices (convince yourself of this either informally or formally with a proof). There are also more complicated knapsack problems involving multiple criteria and constraints, usually called multidimensional knapsack problems. These are beyond the scope of this tutorial and usually employ different algorithmic techniques.
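As a brief illustration of the fractional variant, here is a minimal greedy sketch (assuming items may be split arbitrarily and all weights are positive): sort by value/weight ratio, take items whole while they fit, and split the last one to fill the remaining capacity.

#include <iostream>
#include <vector>
#include <utility>
#include <algorithm>

using namespace std;

bool betterRatio(const pair<int,int>& a, const pair<int,int>& b) {
    return (double)a.second / a.first > (double)b.second / b.first;
}

// Fractional knapsack: greedily take items in ratio order, splitting the
// last item so the capacity is filled exactly.
double fractionalKnapsack(double capacity, vector<pair<int,int> > items) {
    sort(items.begin(), items.end(), betterRatio);
    double value = 0.0;
    for (size_t i = 0; i < items.size() && capacity > 0; i++) {
        double take = min((double)items[i].first, capacity);   // whole item, or whatever still fits
        value += take * items[i].second / items[i].first;
        capacity -= take;
    }
    return value;
}

int main() {
    vector<pair<int,int> > items;              // (weight, value) pairs
    items.push_back(make_pair(10, 8));
    items.push_back(make_pair(8, 6));
    items.push_back(make_pair(5, 3));
    cout << fractionalKnapsack(13, items) << "\n";   // 8 + (3/8)*6 = 10.25
    return 0;
}

On the earlier counter-example (bag weight 13) this returns 10.25, which is at least as good as any whole-item packing; being allowed to split the last item is exactly what makes the greedy argument go through.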


Listing 7: 0-1 Knapsack Bottom-up Dynamic Programming

#include <iostream>
#include <vector>
#include <utility>
#include <algorithm>   // for max

#define MAX_WEIGHT 1000
#define MAX_ITEMS 50

using namespace std;

int dp[MAX_WEIGHT+1][MAX_ITEMS+1];

int main() {
    int weight, value;
    int bagWeight;
    cin >> bagWeight;
    if (bagWeight > MAX_WEIGHT) return 1;
    vector<pair<int,int> > data;
    while (cin >> weight >> value) {
        data.push_back(make_pair(weight, value));
    }
    if ((int)data.size() > MAX_ITEMS) return 1;
    // start bottom-up dynamic programming solution
    for (int w = 1; w <= bagWeight; w++) {
        for (int i = 1; i <= (int)data.size(); i++) {
            dp[w][i] = dp[w][i-1];
            if (data[i-1].first <= w)
                dp[w][i] = max(dp[w][i], dp[w - data[i-1].first][i-1] +
                                         data[i-1].second);
        }
    }
    // print out solution
    cout << "Optimal value: " << dp[bagWeight][data.size()]
         << "\n";
    return 0;
}

4 Edit Distance

4.1 Modelling and Implementation

You have now experienced most of the aspects involved in modelling and implementing a dynamic programming algorithm. We have seen the two different implementation approaches, namely top-down dynamic programming (memoization) and bottom-up dynamic programming.


Listing 8: 0-1 Knapsack Bottom-up Dynamic Programming with Backtracking

#include <iostream>
#include <vector>
#include <utility>
#include <algorithm>   // for max

#define MAX_WEIGHT 1000
#define MAX_ITEMS 50

using namespace std;

int dp[MAX_WEIGHT+1][MAX_ITEMS+1];
vector<int> itemsUsed;              // used for backtracking
vector<pair<int,int> > data;        // (weight, value) pairs

void backtrack(int w, int i) {
    if (w == 0 || i == 0) return;
    if (dp[w][i] == dp[w][i-1]) backtrack(w, i-1);
    else {
        backtrack(w - data[i-1].first, i-1);
        itemsUsed.push_back(i);
    }
}

int main() {
    int weight, value;
    int bagWeight;
    cin >> bagWeight;
    if (bagWeight > MAX_WEIGHT) return 1;
    while (cin >> weight >> value) {
        data.push_back(make_pair(weight, value));
        cout << "Item " << data.size() << " Weight: " << weight
             << " Value: " << value << "\n";
    }
    if ((int)data.size() > MAX_ITEMS) return 1;
    // start bottom-up dynamic programming solution
    for (int w = 1; w <= bagWeight; w++) {
        for (int i = 1; i <= (int)data.size(); i++) {
            dp[w][i] = dp[w][i-1];
            if (data[i-1].first <= w)
                dp[w][i] = max(dp[w][i], dp[w - data[i-1].first][i-1] +
                                         data[i-1].second);
        }
    }
    // print out solution
    cout << "Optimal value: " << dp[bagWeight][data.size()]
         << "\n";

Listing 9: 0-1 Knapsack Bottom-up Dynamic Programming with Backtracking (continued)

    backtrack(bagWeight, data.size());
    cout << "Items Used:\n";
    for (int i = 0; i < (int)itemsUsed.size(); i++) {
        cout << "Item " << itemsUsed[i] << "\n";
    }
    return 0;
}

We have also seen how recursion fits into the modelling and formulation of dynamic programming algorithms and the processes involved in creating one. We have seen a classical example where dynamic programming is useful (knapsack), the reasons why it is used (the Fibonacci example), and how to backtrack useful and meaningful information from a computation. We now move to a more advanced example to reinforce the ideas introduced so far.

The next problem we will look at is called the Edit Distance problem, or more specifically the Levenshtein distance. It is a metric that measures the difference between two strings: how many "edits" we need to make to one string to turn it into the other. An edit can be substituting a letter for another, inserting a new character anywhere in the string, or deleting a character.
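For instance, turning "cat" into "cart" takes a single insertion, while turning "cat" into "dog" takes three substitutions; the edit distance is the minimum number of such edits needed.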

How do we create a dynamic programming algorithm for this problem? Let's see whether we can construct any subproblems. If we can, then we should be able to translate the problem into dynamic programming with ease, because the subproblems give us insight into the decision state of the problem as well as hints towards the relationship between states (the recurrence relation).

First, let’s create some examples to get a feel of the problem. Consider the following two strings:

N

O

T

H

I

N

G

S

O

M

E

T

H

I

N

G

How do we turn one into the other with the minimal number of moves? A rough idea would be to somehow avoid counting the common "THING" at the end of each word. So if we wanted to change "NO" to "SOME", how would we do it? We have three possible decisions or choices (inserting, deleting and substituting). We need a systematic approach rather than solving it by inspection (because computers cannot solve by inspection yet). Since we want subproblems, what could they be? In fact, if we look closely we have already made a subproblem: "NO" and "SOME" is a subproblem of "NOTHING" and "SOMETHING". How did we get there? We looked at the ends and, since the characters were equal, skipped them. This is a sufficient and good starting point. Let's try making the state the positions of the last characters of the first and second string we are currently looking at. For example, at the start we let x = 6 and y = 8 (assuming 0-based indexing), the positions of the last characters of the respective strings. Now we compare the characters at positions x and y. If they are equal, we do what we did implicitly: skip them. So what happens when x = 1 and y = 3 ("O" and "E")? Well, we have multiple choices:

1. We use substitution: this turns the O into an E, at a cost of 1. The subproblem for this decision is x = 0 and y = 2, because the two characters are now the same and can be skipped.

2. We use deletion: we delete the O and hope to find a better match from string 1 later on, at a cost of 1. The subproblem for this decision is x = 0 and y = 3; we "ignore" the O because we deleted it, but we still need to match something to the E, which we did not delete.

3. We use insertion: we insert an E after the O, creating a new match. The position in the second string decreases by 1 because its E was matched by the insertion, but the position in the first string stays the same because we still need to do something with the O; we have merely "put it off" by inserting. So the new subproblem for this decision is x = 1 and y = 2.

Those were our three choices, and they make perfect subproblems: first, the scheme is systematic and easy to implement; second, it considers every possible way we can modify the string; third, each subproblem depends only on strictly smaller parts of the strings, with no circular dependencies, so the order of evaluation can be implemented correctly. So we have decided our state to be the two positions we are currently up to in the two strings. Our recurrence relation can be deduced directly from the three cases:

F(x, y) = min of:
    F(x - 1, y) + 1
    F(x, y - 1) + 1
    F(x - 1, y - 1) + d(x, y)

F(x, y) = max(x + 1, y + 1)   if x < 0 or y < 0   (base cases)

d(x, y) = 0 if a[x] = b[y], 1 otherwise

Note that d(x, y) evaluates to a cost of 1 if the characters at the current string positions do not match; otherwise it evaluates to a cost of 0 (because there is no need to substitute when the characters already match). Once our recurrence relation is defined we can implement it. Here we will demonstrate both top-down and bottom-up implementations; it is also a good time to practice implementation skills, so try to implement your own code based on the recurrence above before looking at the supplied source code.

See Listing 10 for a memoization approach to the Edit Distance problem. See Listing 11 for a bottom-up dynamic programming approach to the Edit Distance problem.

Listing 10: Edit Distance using Memoization Approach

#include <iostream>
#include <vector>
#include <string>
#include <cctype>
#include <cstring>     // for memset
#include <climits>     // for INT_MAX
#include <algorithm>   // for min, max

#define MAX_STRING_LEN 5000
#define INF INT_MAX

using namespace std;

int memo[MAX_STRING_LEN+1][MAX_STRING_LEN+1];
string s, t;

int d(int x, int y) {
    if (s[x-1] == t[y-1]) return 0;
    return 1;
}

int func(int x, int y) {
    if (x == 0 || y == 0) return max(x, y);
    int& res = memo[x][y];
    if (res != -1) return res;
    res = INF;
    res = min(res, func(x-1, y-1) + d(x, y));
    res = min(res, func(x-1, y) + 1);
    res = min(res, func(x, y-1) + 1);
    return res;
}

int main() {
    while (cin >> s >> t) {
        memset(memo, -1, sizeof(memo));
        cout << "Edit distance is: " << func(s.size(), t.size()) << "\n";
    }
    return 0;
}

Listing 11: Edit Distance using Bottom-up Dynamic Programming

#include <iostream>
#include <string>
#include <cstring>     // for memset
#include <algorithm>   // for min

#define MAX_STRING_LEN 5000

using namespace std;

int dp[MAX_STRING_LEN+1][MAX_STRING_LEN+1];
string s, t;

int d(int x, int y) {
    if (s[x-1] == t[y-1]) return 0;
    return 1;
}

int main() {
    while (cin >> s >> t) {
        memset(dp, 0, sizeof(dp));
        for (int i = 0; i <= (int)s.size(); i++) dp[i][0] = i;
        for (int j = 0; j <= (int)t.size(); j++) dp[0][j] = j;
        for (int i = 1; i <= (int)s.size(); i++) {
            for (int j = 1; j <= (int)t.size(); j++) {
                dp[i][j] = min(dp[i-1][j] + 1, min(dp[i][j-1] + 1,
                               dp[i-1][j-1] + d(i, j)));
            }
        }
        cout << "Edit Distance is: " << dp[s.size()][t.size()]
             << "\n";
    }
    return 0;
}

4.2 Performance Differences

We will now discuss the performance differences between the two approaches – should you opt for one over the other? Are there any rough indicators to suggest when one should be used over the other?

We will now run a mini-benchmark on the two approaches. After writing a random string generator that produces two strings up to a specified length, we feed a collection of such strings into our programs (i.e. by piping). After a small test of roughly 20 random string pairs of length up to 1000, we have the following numbers:
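The exact generator used for these measurements is not shown; a minimal sketch along the lines described (pairs of random lowercase strings, lengths up to a chosen maximum, suitable for piping into the programs above) might look like this:

#include <iostream>
#include <string>
#include <cstdlib>
#include <ctime>

using namespace std;

// Prints `tests` pairs of random lowercase strings, each of length 1..maxLen,
// in the "s t" format read by Listings 10 and 11.
int main(int argc, char* argv[]) {
    int tests  = (argc > 1) ? atoi(argv[1]) : 20;
    int maxLen = (argc > 2) ? atoi(argv[2]) : 1000;
    srand((unsigned)time(NULL));
    for (int i = 0; i < tests; i++) {
        for (int k = 0; k < 2; k++) {
            int len = rand() % maxLen + 1;
            string str;
            for (int j = 0; j < len; j++) str += (char)('a' + rand() % 26);
            cout << str << (k == 0 ? " " : "\n");
        }
    }
    return 0;
}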